Windows Azure Monitoring Status

Virtual Machines - USGov Virginia - Investigating

Starting at 07:28 EST on 27 Feb 2019, a subset of customers may experience degraded performance or timeouts when accessing Azure resources. Engineers have been engaged and are actively investigating the issue. Impacted services will be listed on the Azure status page as they are identified. The next update will be provided in 60 minutes, or as events warrant.

Last Update: About 21 days ago

Virtual Machines - Investigating

Summary of impact: Starting at approximately 04:40 UTC on 22 Feb 2019, a subset of customers using Virtual Machines (Classic) may experience failures or high latency when attempting to complete service management operations. Retries may be successful for some customers. Current status: Engineers are engaged and are actively investigating the issue. The next update will be provided in 60 minutes or as events warrant.
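
Because retries may succeed during incidents like this, client code issuing the affected service management calls can wrap them in a retry loop with exponential backoff. The sketch below is illustrative only and is not part of any Azure SDK; call_with_backoff and perform_operation are hypothetical names, and real code should catch only the specific transient error types its SDK raises.

    import random
    import time

    def call_with_backoff(perform_operation, max_attempts=5, base_delay=1.0):
        # Retry a transient-failure-prone call with exponential backoff and jitter.
        for attempt in range(1, max_attempts + 1):
            try:
                return perform_operation()  # hypothetical stand-in for the failing management call
            except Exception as error:  # real code: catch the SDK's transient error types only
                if attempt == max_attempts:
                    raise
                delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
                print(f"Attempt {attempt} failed ({error}); retrying in {delay:.1f}s")
                time.sleep(delay)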

Last Update: About 26 days ago

SQL DB - West Europe - Investigating

Starting at 09:40 UTC on 20 Feb 2019 a subset of customers using SQL Database in West Europe may experience issues performing service management operations. Server and database create, drop, and scale operations may result in a "deployment failed" error or timeout. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: About 28 days ago

Azure Kubernetes Service - East US - Investigating

Engineers have identified an issue impacting AKS in East US causing create, update, delete, and scale operations to fail. Customers with existing clusters hosted in East US may experience brief cluster downtime or be unable to modify clusters or perform other workload operations. Impacted clusters will continue to run in their last known state. The next update will be provided in 60 minutes, or as events warrant.

Last Update: About 1 month ago

Azure IoT Hub

SUMMARY OF IMPACT: Between 08:16 and 14:12 UTC on 08 Feb 2019, a subset of customers using Azure IoT Hub may have received failure notifications or high latency when performing service management operations via the Azure Portal or other programmatic methods. PRELIMINARY ROOT CAUSE: Engineers determined that a SQL database had reached high resource utilization, which intermittently affected service management requests. MITIGATION: Engineers made a configuration change to compute nodes and scaled up the database, which mitigated the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences. To stay informed on any issues, maintenance events, or advisories, create service health alerts (https://www.aka.ms/ash-alerts) and you will be notified via your preferred communication channel(s): email, SMS, webhook, etc.
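
The advisory above recommends creating service health alerts via https://www.aka.ms/ash-alerts. Purely as an illustrative sketch (not the procedure that link describes), a ServiceHealth activity log alert can also be created against the ARM REST API; the subscription ID, resource group, alert name, and action group ID below are placeholders, and the api-version is an assumption that should be checked against current documentation.

    import requests  # pip install requests azure-identity
    from azure.identity import DefaultAzureCredential

    SUBSCRIPTION_ID = "<subscription-id>"            # placeholder
    RESOURCE_GROUP = "<resource-group>"              # placeholder
    ALERT_NAME = "service-health-alert"              # placeholder
    ACTION_GROUP_ID = "<action-group-resource-id>"   # placeholder: an existing action group (email/SMS/webhook)

    token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        f"/resourceGroups/{RESOURCE_GROUP}"
        f"/providers/Microsoft.Insights/activityLogAlerts/{ALERT_NAME}"
        "?api-version=2020-10-01"                    # assumed api-version; verify before use
    )

    body = {
        "location": "Global",
        "properties": {
            "scopes": [f"/subscriptions/{SUBSCRIPTION_ID}"],
            "condition": {"allOf": [{"field": "category", "equals": "ServiceHealth"}]},
            "actions": {"actionGroups": [{"actionGroupId": ACTION_GROUP_ID}]},
            "enabled": True,
            "description": "Notify on Azure Service Health events",
        },
    }

    response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
    response.raise_for_status()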

Last Update: About 1 month ago

Azure IoT Hub - Investigating

Starting at approximately 08:30 UTC on 08 Feb 2019 a subset of customers using IoT Hub \ IoT Hub Device Provisioning Service may receive failure notifications or experience high latency when performing service management operations - such as create, update, delete - via the Azure Portal. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: About 1 month ago

App Service - West US - Investigating

Starting at 00:45 UTC on 30 Jan 2019 a subset of customers using App Service in West US may receive intermittent HTTP 500-level response codes, experience timeouts or high latency when accessing App Service deployments hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: About 1 month ago

Network Infrastructure Event

Starting at 21:00 UTC on 29 Jan 2019, customers may experience issues accessing Microsoft Cloud resources. Engineers are pursuing multiple workstreams to isolate the root cause and mitigate impact. The next update will be provided in 60 minutes or as events warrant.

Last Update: About 1 month ago

Information regarding Multi-Factor Authentication

We've identified an issue affecting Multi-Factor Authentication that is causing users to experience latency or timeouts when attempting to leverage MFA via phone call. If you currently utilize phone-only authentication, other Azure MFA authentication methods, such as the Authenticator application and SMS, are not affected and should be successful. Preliminary root cause has been identified and engineers are currently exploring durable mitigation options.

Last Update: A few months ago

Information regarding Multi-Factor Authentication

We've identified an issue affecting a small subset of users utilizing Multi-Factor Authentication (MFA) that is causing users to experience latency or timeouts when attempting to leverage MFA via phone call. Other Azure MFA authentication methods, including the Authenticator application and SMS, are not affected.

Last Update: A few months ago

Storage - UK South - Investigating

Starting at 13:19 UTC on 10 Jan 2019, a subset of customers leveraging Storage in UK South may experience service availability issues. In addition, resources with dependencies on Storage may also experience downstream impact in the form of availability issues. Engineers have been engaged and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - UK South - Investigating

Engineers are investigating a potential service alert for Storage in the UK South region with potential downstream impact to dependent services in this region. More information will be provided as it becomes available.

Last Update: A few months ago

Azure DevTest Labs - Investigating

Starting at 16:30 UTC on 08 Jan 2019 a subset of customers using Azure DevTest Labs may experience error messages and difficulties connecting to Virtual Machines within the service. Impacted customers may also experience issues connecting to configurations. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant. Customers experiencing error messages related to their Virtual Machines can learn more about this issue here: https://aka.ms/azuredevtestblog

Last Update: A few months ago

Service Management Operation Failures - North Europe - Applying Mitigation

Starting at approximately 08:00 UTC on 05 Jan 2019, a subset of customers in North Europe may receive failure notifications when performing service management operations on resources hosted in this region. Engineers have identified a potential underlying cause and are currently applying mitigation. Some customers may start seeing signs of recovery. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Networking - North Europe - Investigating

Starting at 05:17 UTC on 05 Jan 2019 a subset of customers in North Europe may receive failure notifications when performing network-related service management operations for resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - East US - Investigating

Starting at 09:30 UTC on 04 Jan 2019 a subset of customers using Log Analytics in East US may receive intermittent failure notifications and/or latency when attempting to ingest and/or access data. Tiles and blades may also fail to load and display data. Customers may experience issues creating queries, log alerts or metric alerts on logs. Additionally, customers may also experience false positive alerts or missed alerts. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - East US - Investigating

We are currently investigating an issue impacting a subset of Log Analytics customers in East US. More details will be shared shortly.

Last Update: A few months ago

Networking - UK West - Investigating

Starting at 09:00 UTC on 14 Dec 2018 a subset of customers in UK West may intermittently experience degraded performance (latency, network drops or timeouts) when accessing Azure resources hosted in this region. Customers with resources in UK West attempting to connect to other resources outside of this region may also experience similar symptoms. Engineers have determined that this is caused by an underlying Networking event in this region which is currently under investigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Analysis Services - West Central US - Investigating

Starting at 21:55 UTC on 12 Dec 2018 customers using Azure Analysis Services in West Central US may experience issues accessing existing servers, provisioning new servers, resuming new servers, or performing SKU changes for active servers. Engineers have identified a possible root cause for this issue and are investigating mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network - South Central US

Starting at approximately 03:30 UTC on 04 Dec 2018, customers in South Central US may experience degraded performance, network drops or timeouts when accessing Azure resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Data Lake Store - North Europe - Investigating

Starting at 09:30 UTC on 29 Nov 2018 a subset of customers using Data Lake Store in North Europe may experience difficulties accessing resources hosted in this region. In addition, data ingress or egress operations may time out or fail for these customers. Customers of Data Lake Analytics with dependencies on Data Lake Store in this region may also experience connectivity issues related to this. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US 2 - Investigating

Starting at 04:20 UTC on 28 Nov 2018 a subset of customers in West US 2 may experience issues connecting to Storage resources hosted in this region. Customers using resources dependent on Storage may also see impact. Engineers are aware of this issue and are actively investigating. The next update will be provided within 60 minutes, or as events warrant.

Last Update: A few months ago

Azure SQL Connectivity and Service Management Errors - West US 2

Engineers are investigating a storage outage related to the following services in West US 2.

Last Update: A few months ago

Multi-Factor Authentication - Investigating

At approximately 14:25 UTC Nov 27, engineers were alerted to a potential outage impacting Multi-Factor Authentication. Impacted customers may experience failures when attempting to authenticate into Azure resources where MFA is required by policy. Engineers are investigating the issue and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Network - UK South/UK West - Investigating

Starting at 11:19 UTC on 26 Nov 2018 a subset of customers using Virtual Network in UK South and UK West may experience intermittent difficulties connecting to resources hosted in these regions. Some customers using Global VNET peering or replication between these regions may experience latency or connectivity issues. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Signing into Azure Resources with Multi-Factor Authentication

Starting at approximately 04:39 UTC on 19 Nov 2018, a subset of customers in Europe and Asia-Pacific regions may experience difficulties signing into Azure resources, such as Azure Active Directory, when Multi-Factor Authentication is required by policy. Engineers are aware of this issue and are actively investigating potential mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Signing into Azure Resources with Multi-factor Authentication

Starting at approximately 04:39 UTC on 19 Nov 2018, a subset of customers using Multi-Factor Authentication may experience difficulties signing into Azure resources, such as Azure Active Directory, when Multi-Factor Authentication is required by policy. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service and Function Apps - West Europe

Starting at 11:10 UTC on 13 Nov 2018 a subset of customers using App Service and/or Function Apps in West Europe may receive intermittent HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Impacted customers may also see issues with their Azure App Service Scaling settings. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Portal Timeouts and Latency - Investigating

Starting at approximately 23:05 UTC on 03 Nov 2018, some customers may have experienced high latency or timeouts when viewing resources or loading blades through the Azure Portal (https://portal.azure.com). Engineers have identified the potential root cause and are in the process of applying mitigation. Some customers have reported seeing service recovery.

Last Update: A few months ago

Storage - West US 2 - Investigating

An issue with Storage in West US 2 is currently being investigated. More information will be provided as it is known.

Last Update: A few months ago

Investigating Network Issues in West US

Starting at approximately 22:40 UTC on 24 Oct 2018 a subset of customers in West US may intermittently experience degraded network performance. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Machine Learning Studio Web Service

Starting at 09:00 UTC on 24 Oct 2018 a subset of customers using Machine Learning Studio in multiple regions may receive failure notifications such as 403 errors when performing batch execution operations; re-publishing the batch operation may be successful. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Service Availability - France Central

Starting at 13:57 UTC on 16 Oct 2018 customers using a subset of resources in France Central may experience difficulties connecting to these resources. Engineers have identified a localized infrastructure event that caused a number of storage and virtual machine resources to experience drops in availability. Service teams have begun restoring impacted storage and virtual machine resources to mitigate. The next update will be provided in 30 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - France Central

Engineers are investigating alerts for Storage and Virtual Machines in France Central. More updates will be provided shortly.

Last Update: A few months ago

Azure Service Availability

An investigation for Azure services in France Central is underway. More information will be provided as it becomes available.

Last Update: A few months ago

Storage - East US - Advisory

Engineers are investigating alerts for Storage in East US. Additional information will be provided shortly.

Last Update: A few months ago

Azure Active Directory and Azure Portal

An alert has fired for Azure Active Directory and Azure Portal. Engineers are currently investigating. More information to follow.

Last Update: A few months ago

Azure DevOps - Non Regional

Starting at 11:40 UTC on 08 Oct 2018 a subset of customers using Azure DevOps may experience difficulties connecting to resources and some customers may experience difficulties signing in to their Portal. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure DevOps - South Central US

Engineers are currently investigating errors when using Azure DevOps in the South Central US region. More information can be found here: https://aka.ms/vstsblog

Last Update: A few months ago

Azure DevOps - South Central US

Engineers are currently investigating errors when using Azure DevOps in the South Central US region.

Last Update: A few months ago

Azure DevOps - Investigating

Starting at 16:45 UTC on 03 Oct 2018 a subset of customers using Azure DevOps may be unable to access DevOps resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant. For more information, please refer to https://aka.ms/vstsblog

Last Update: A few months ago

Azure DevOps - Investigating

Starting at 16:58 UTC on 03 Oct 2018 a subset of customers using Azure DevOps may be unable to access DevOps resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant. For more information, please refer to https://aka.ms/vstsblog

Last Update: A few months ago

Azure DevOps - Investigating

Starting at 17:15 UTC on 03 Oct 2018 a subset of customers using Azure DevOps may be unable to access DevOps resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant. For more information, please refer to https://aka.ms/vstsblog

Last Update: A few months ago

Azure DevOps - Investigating

Starting at 10:15 UTC on 03 Oct 2018 a subset of customers using Azure DevOps may be unable to access DevOps resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant. For more information, please refer to https://aka.ms/vstsblog

Last Update: A few months ago

Multiple Services - Korea South

Starting at 13:50 UTC on 30 Sep 2018, a subset of Storage customers in Korea South may experience difficulties connecting to resources hosted in this region. A number of services with dependencies on Storage in the region are also experiencing impact, and these are listed below. Engineers have identified the underlying cause, and are currently exploring mitigation options. Some customers may already be seeing signs of recovery. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Korea South

Starting at 13:50 UTC on 30 Sep 2018, a subset of customers in Korea South may experience difficulties connecting to resources hosted in this region. Impacted services are listed below. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines and Cloud Services - Southeast Asia - Investigating

Starting at 09:52 UTC on 26 Sep 2018 a subset of customers in Southeast Asia may experience latency or difficulties connecting to Virtual Machines and/or Cloud Service resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - East US - Investigating

Starting at approximately 10:54 UTC on 19 Sep 2018 a subset of customers using Log Analytics in East US may experience difficulties connecting to resources hosted in this region. As a result, customers may experience high latency and timeouts when getting workspace information, running Log Analytics queries, and other operations related to Log Analytics workspaces. This may lead to Service Map ingestion delays and latency. Customers utilizing Automation may experience difficulties accessing the Update management, Change tracking, Inventory, and Linked workspace blades. Customers utilizing Network Performance Monitor may also see calls being aborted. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - Metrics Unavailable

Starting at 06:30 UTC on 15 Sep 2018 a subset of customers may experience difficulties viewing Virtual Machines and/or additional compute resource metrics. Metric graphs including CPU (average), Network (total), Disk Bytes (total) and Disk operations/sec (average) may appear empty within the Management Portal. Auto-scaling operations may be impacted as well. Engineers have begun applying a potential fix. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Networking - South Central US - Investigating

Starting at 09:29 UTC on 04 Sep 2018 a subset of customers in South Central US may experience difficulties connecting to resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

IoT Hub - West Europe and North Europe - Investigating

Starting at 17:30 UTC on 20 Aug 2018, a subset of IoT Hub customers in West Europe and North Europe may experience errors, such as 'InternalServerError', when trying to access IoT Hub devices hosted in these regions. Customers may also experience issues when attempting to delete resources/devices. Engineers have engaged additional teams to assist with identifying the preliminary root cause and to determine a mitigation path. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

IoT Hub - West Europe and North Europe - Investigating

Starting at 17:30 UTC on 20 Aug 2018, a subset of IoT Hub customers in West Europe and North Europe may experience errors, such as 'InternalServerError', when trying to access IoT Hub devices hosted in these regions. Customers may also experience issues when attempting to delete resources/devices. Engineers are continuing to investigate the underlying cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

Starting at 08:30 UTC on 20 Aug 2018 a subset of customers using Storage in West Europe may receive failure notifications when performing create operations for resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US - Investigating

Starting at 13:18 UTC on 01 Aug 2018 a subset of customers using Virtual Machines in East US may experience difficulties connecting to some resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - North Central US

Engineers are investigating a potential outage in North Central US. More information will be provided shortly.

Last Update: A few months ago

App Services - East US - Investigating

Starting at 08:00 UTC on 28 Jul 2018 a subset of customers using App Service in East US may receive HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Service Management Failures - Resolved

SUMMARY OF IMPACT: Between 22:15 on 26 Jun 2018 and 06:20 UTC on 27 Jun 2018, a subset of customers may have experienced timeouts or failures when attempting to perform service management operations on their API Management, App Service, Microsoft Stream, Media Services, Azure SQL Database, Azure Search, Azure Active Directory B2C, Azure IoT hubs, Azure Batch, Event Hubs and Service Bus services in Azure. In addition, some customers may have experienced connection failures to the Azure Portal. Some services with a reliance on triggers from service management calls may have seen failures for running instances. PRELIMINARY ROOT CAUSE: Engineers identified a service management API code configuration that impacted background services. This was causing service management requests to fail for a subset of customers. MITIGATION: Engineers performed a rollback of the recent deployment task to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences.

Last Update: A few months ago

App Service - Service Management Operations

Starting at 01:32 UTC on 27 Jul 2018, a subset of customers using App Service may receive failure notifications when performing service management operations - such as create, update, delete - for their resources. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Activity Logs & Alerts

Starting at 23:00 UTC on 24 Jul 2018 customers may be unable to view their Activity Logs. Customers in East US, East US 2 and UK North are not impacted. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Error notifications in Microsoft Azure Portal

Starting at approximately 20:00 UTC on 23 Jul 2018, a subset of customers may receive timeout errors or failure notifications when attempting to view service blades in the Microsoft Azure Portal. Customers may also experience slowness or difficulties logging into the Portal. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Accessing Resources

Engineers are currently investigating an outage impacting Storage dependent resources in West US, West Central US and Central India. More information will be provided as events warrant.

Last Update: A few months ago

Storage - Multiple Regions

Engineers are currently investigating an outage impacting Storage dependent resources in West US, West Central US and Central India. More information will be provided as events warrant.

Last Update: A few months ago

Virtual Machines - UK South - Investigating

An alert for Virtual Machines in UK South is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Intermittent login failures for Azure SQL - South Central US

Starting at 18:38 UTC on 11 Jul 2018, a subset of customers using SQL Database in South Central US may experience intermittent issues when logging into databases in this region. Availability (connecting to and using existing databases) is not impacted and login retries may be successful. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Intermittent login failures for Azure SQL - South Central US

An investigation for SQL Database in South Central US is underway. More information will be provided as events warrant.

Last Update: A few months ago

IoT Hub - Multiple Regions - Investigating

Starting at approximately 01:30 UTC on 01 Jul 2018, a subset of customers leveraging IoT Hub in North Europe, West Europe, West US and East US may experience latency or connectivity issues when accessing their resources in these regions. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service - West Europe - Investigating

Starting at 16:00 UTC on 27 Jun 2018 engineers identified a subset of customers using App Service in West Europe who may receive HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage/Virtual Machines - South Central US

Engineers are currently investigating alerts in South Central US for Storage, Virtual Machines and App Services. More information will be provided as soon as it is available.

Last Update: A few months ago

Azure Data Factory - Multiple Regions

Engineers are investigating alerts for Data Factory in Multiple Regions. More information will be provided as it is known.

Last Update: A few months ago

Service availability issue in North Europe

Starting at approximately 17:44 UTC on 19 Jun 2018 a subset of customers using Virtual Machines, Storage, Key Vault, App Service, or Site Recovery in North Europe may experience connection failures when trying to access resources hosted in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Service availability issue in North Europe

Starting at approximately 17:45 UTC on 19 Jun 2018 a subset of customers using Virtual Machines or Storage in North Europe may experience connection failures when trying to access resources hosted in the region. Some Virtual Machines may have also restarted unexpectedly. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Azure Bot Service - Investigating

Starting at 07:15 UTC on 14 Jun 2018 a subset of customers using Azure Bot Service may experience difficulties connecting to bot resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App service (Web Apps) - South Central US

Starting at 15:57 UTC on 13 Jun 2018 a subset of customers in South Central US may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage availability issue which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this. These services may include: Virtual Machines, App Service, and Site Recovery. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App service (Web Apps) - South Central US

We are aware of an issue with App Services (Web Apps) which is currently under investigation. More information will be provided as events warrant.

Last Update: A few months ago

Visual Studio Team Services and Visual Studio App Center

Starting at approximately 03:37 UTC on 12 Jun 2018, a limited subset of customers using Visual Studio Team Services in Central US and South Central US may experience delays and failures of VSTS build using Hosted Agent pools in Central US. More information can be found on https://aka.ms/vstsblog. Visual Studio App Center customers may also experience impact symptoms such as App Center builds failing to start due to this issue. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services impacted in West Europe

Engineers are investigating an emerging issue involving Storage and Virtual Machines in the West Europe region. In addition, Azure services that have a dependency on Storage may experience issues connecting to their resources. Confirmed impacted services are: Storage, Virtual Machines, SQL Databases, Backup, Azure Site Recovery, Service Bus, Event Hub, App Service, Logic Apps, Automation.

Last Update: A few months ago

Emerging issue in West Europe

Engineers are investigating an emerging issue involving Storage and Virtual Machines in the West Europe region. More information will be provided as it is known.

Last Update: A few months ago

Azure Active Directory login issues in Europe regions - Mitigated

Summary of impact: Between 21:33 and 23:27 UTC on 09 Jun 2018, a very limited subset of customers experienced issues signing into or accessing Azure Active Directory in the Europe regions. Preliminary root cause: Engineers identified networking issues within and connecting to specific datacenters, preventing AAD requests from connecting to backend services. Mitigation: Engineers performed a failover of the AAD service to an alternate datacenter to mitigate the issue for customers. Next steps: Engineers will continue to investigate to establish the full root cause and prevent future occurrences.

Last Update: A few months ago

App Service - Investigating

A monitoring alert for App Services is being investigated. More information will be provided shortly.

Last Update: A few months ago

App Service - Portal Experience

Starting at 21:46 UTC on 30 May 2018 a subset of customers using App Service, Logic Apps, and Functions may intermittently experience errors viewing App Service resources, or receive the error message "This resource is unavailable" when accessing the resources via the Azure portal. In addition, a subset of customers may receive failure notifications when performing service management operations - such as create, update, delete - for resources. Existing runtime resources are not impacted. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service - Portal Experience

An investigation with App Service is ongoing. Please refer here for future updates.

Last Update: A few months ago

Virtual Machines - Service Management Issues - USGov Virginia - Applying Mitigation

Starting at 03:00 EST on 30 May 2018 a subset of customers using Virtual Machines in USGov Virginia may be unable to manage some Virtual Machines hosted in the region. Restart attempts may fail or machines may appear to be stuck in a starting state. Other dependent Azure services may experience downstream impact. These services include Media Services, Redis Cache, Log Analytics, and Virtual Networks. Engineers have identified the preliminary root cause and are applying mitigation. It is recommended to not perform any service management requests, as Virtual Machines will continue to run if no changes are made. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - Service Management Failures - USGov Virginia

Starting at 03:00 EST on 30 May 2018 a subset of customers using Virtual Machines in USGov Virginia may be unable to manage some Virtual Machines hosted in the region. Restart attempts may fail or machines may appear to be stuck in a starting state. Other dependent Azure services may experience downstream impact. These services include Media Services, Redis Cache, Log Analytics, and Virtual Networks. Engineering teams are engaged to investigate the underlying root cause and recommend to not perform any service management requests as Virtual Machines will continue to run if no changes are made. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - USGov Virginia

Starting at 03:00 EST on 30 May 2018 a subset of customers using Virtual Machines in USGov Virginia may be unable to manage some Virtual Machines hosted in the region. Restart attempts may fail or machines may appear to be stuck in a starting state. Other dependent Azure services may experience downstream impact. These services include Media Services, Redis Cache, Log Analytics, and Virtual Networks. Engineering teams are engaged to investigate the underlying root cause and recommend to not perform any service management requests as Virtual Machines will continue to run if no changes are made. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - USGov Virginia

Starting at 03:00 EST on 30 May 2018 a subset of customers using Virtual Machines in USGov Virginia may be unable to manage some Virtual Machines hosted in the region. Restart attempts may fail or machines may appear to be stuck in a starting state. Other dependent Azure services may experience downstream impact. These services include Media Services, Redis Cache, Log Analytics, and Virtual Networks. Engineering teams are engaged to investigate the underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL DB - West Europe - Investigating

Starting at 05:00 UTC a subset of customers using SQL DB in West Europe may experience intermittent increases in latency, timeouts and/or connectivity issues when accessing databases in this region. Engineers are aware of this issue and are actively investigating. Some customers may see recovery. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

App Service

Starting at 18:20 UTC on 23 May 2018 a subset of customers using App Service may experience intermittent latency or timeouts when viewing resources in the Azure Portal. Some customers may also see errors when performing service management operations such as site create, delete and move resources on App Service (Web, Mobile and API Apps) applications. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Visual Studio Team Services

Starting at 14:55 UTC on 22 May 2018 a subset of customers using Visual Studio Team Services in multiple regions may experience degraded performance and slowness while accessing accounts or navigating through Visual Studio Online workspaces. More information can be found at https://aka.ms/vstsblog. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory - Issues Authenticating to Azure resources via Microsoft account (MSA)

Starting at 02:19 on 22 May 2018, a subset of customers attempting to log in to their Microsoft Account (MSA) using Azure Active Directory may experience intermittent difficulties when attempting to authenticate into resources which are dependent on Azure Active Directory. In addition, Visual Studio Team Services (VSTS) customers may experience errors when attempting to log in to the VSTS portal through MSA using https://<Accountname>.visualstudio.com. Engineers have identified a possible underlying cause and are assessing appropriate mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory - Issues Authenticating to Azure resources via Microsoft account (MSA)

Starting at 02:26 on 22 May 2018, a subset of customers attempting to log in to their Microsoft Account (MSA) using Azure Active Directory may experience intermittent difficulties when attempting to authenticate into resources which are dependent on Azure Active Directory. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Japan East - Service Management Operations Issues with Virtual Machines

An investigation with Virtual Machines in Japan East was completed. This issue has been mitigated. A Resolution Statement will be provided within 30 minutes or as events warrant.

Last Update: A few months ago

Visual Studio App Center - Distribution Center

An investigation with Visual Studio App Center is ongoing. An update will be provided shortly.

Last Update: A few months ago

Investigating Alerts - Storage - West Central US

Starting at 19:47 UTC on 03 May 2018 a subset of customers using Storage in West Central US may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Investigating Alerts - Networking - West Central US

Engineers are investigating alerts for multiple services in West Central US. More information will be provided as it is available.

Last Update: A few months ago

App Service - Canada Central - Applying Fix

Starting at 12:05 UTC on 26 Apr 2018 a subset of customers using App Service in Canada Central may receive HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Engineers have identified a possible underlying cause, and are applying potential mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Traffic Manager - North Central US

Starting at 12:46 UTC on 26 Apr 2018 a subset of customers using Traffic Manager may encounter intermittent issues when connecting to resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Service Bus - West Europe - Investigating

Starting at approximately 14:00 UTC on 19 Apr 2018 a subset of customers using Service Bus in West Europe may experience intermittent timeouts or errors when connecting to Service Bus queues and topics in this region. Engineers have determined the underlying cause and are currently exploring mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Content Delivery Network Connectivity

Engineers are investigating a CDN connectivity issue. More information will be provided shortly.

Last Update: A few months ago

Issues Performing Service Management Operations - Australia East/Southeast

Starting at approximately 21:00 UTC on 15 Apr 2018, a subset of customers in Australia East and Australia Southeast may be unable to view resources via the Azure portal or programmatically, and may be unable to perform service management operations as a result. Service availability for those resources is not affected by this issue. Engineers have identified a back-end storage component as a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory B2C - Multiple Regions | Applying Mitigation

Starting at 19:41 UTC on 09 Apr 2018 customers using Azure Active Directory B2C in multiple regions may experience client side authorization request failures when connecting to resources. Customers attempting to access services may receive a client side error - "HTTP Error 503. The service is unavailable" - when attempting to login. Engineers have identified a possible fix for the underlying cause, and are working to implement across all regions. Some customers may already be seeing improvements as the fix rolls out. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Management Portal - Authentication Errors - Investigating

Starting at 08:30 UTC on 06 Apr 2018 a subset of customers using Azure Active Directory in East Asia and Europe may experience difficulties when attempting to authenticate into resources which are dependent on Azure Active Directory. Engineers have applied mitigation and are currently verifying service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Management Portal - Authentication Errors - Investigating

Starting at 08:30 UTC on 06 Apr 2018 a subset of customers using Azure Active Directory in East Asia and Europe may experience difficulties when attempting to authenticate into resources which are dependent on Azure Active Directory. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Management Portal - Authentication Errors - Investigating

An alert for Azure Active Directory is being investigated for East Asia. Engineers are aware and are actively investigating. Further updates will be available in 60 minutes.

Last Update: A few months ago

Automation - West Europe | Investigating

Starting at 07:00 UTC on 13 Mar 2018, customers using Automation in West Europe may observe delays when running new or scheduled jobs in the region. Customers utilizing the update management solution may also experience impact. Engineers are continuing to explore mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Automation - West Europe | Investigating

Starting at 07:00 UTC on 13 Mar 2018, customers using Automation in West Europe may observe delays when running new or scheduled jobs in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - Data Processing in East US

An alert for Log Analytics in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Microsoft Azure portal - Investigating degraded performance

Starting at approximately 08:00 UTC on 01 Mar 2018, a subset of customers in West Europe and North Europe may experience degraded performance when navigating the Azure Management Portal. Some portal blades may be slow to load or commands may time out. Customers may also experience slow execution times when running PowerShell commands. Engineers have identified a potential root cause and are currently implementing their mitigation plan. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Microsoft Azure portal - Investigating degraded performance

Starting at approximately 08:00 UTC on 01 Mar 2018, a subset of customers in West Europe and North Europe may experience degraded performance when navigating the Azure Management Portal. Some portal blades may be slow to load or commands may time out. Customers may also experience slow execution times when running PowerShell commands. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Storage - UK South

Starting at 20:12 UTC on 20 Feb 2018 a subset of customers in UK South may experience difficulties connecting to resources hosted in this region. Impacted services include Storage, Virtual Machines, Azure Search and Backup. Customers may begin seeing signs of mitigation. Engineers are investigating a potential power event in the region impacting a single storage scale unit and are actively working on mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - UK South

Starting at 20:12 UTC on 20 Feb 2018 a subset of customers in UK South may experience difficulties connecting to resources hosted in this region. Impacted services include Storage, Virtual Machines and Azure Search. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - UK South

Starting at 20:12 UTC on 20 Feb 2018 a subset of customers using Virtual Machines in UK South may experience difficulties connecting to resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - UK South

Engineers are investigating alerts for Virtual Machines in UK South. Additional information will be provided shortly.

Last Update: A few months ago

Network Infrastructure - West Europe

Starting at 15:48 UTC on 15 Feb 2018, a subset of customers in West Europe may experience difficulties connecting to resources hosted in this region. This may include Virtual Machines, Visual Studio Team Services, App Services, Azure Search, API Management, and Azure Redis Cache. Engineers are performing service recovery actions to return these services to a healthy state. Engineers have confirmed that the following services are now healthy: SQL Database, Storage, and IoT Central. Engineers are seeing evidence of services returning to a healthy state. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - West Europe

Starting at 15:48 UTC on 15 Feb 2018, a subset of customers in West Europe may experience difficulties connecting to resources hosted in this region. This may include Logic Apps, Visual Studio Team Services, App Services, Azure Search, API Management, and Azure Redis Cache. Engineers are currently validating service platform health and have confirmed that the following services are now healthy: SQL Database, Virtual Machines, Storage, and IoT Central. Engineers are seeing evidence of services returning to a healthy state. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - West Europe

Starting at 15:40 UTC on 15 Feb 2018, a subset of customers in West Europe may experience difficulties connecting to resources hosted in this region. This may include SQL Database, IoT Central, Visual Studio Team Services, App Services, Azure Search, Virtual Machines, Azure Redis Cache and Storage. Engineers are currently validating service platform health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - West Europe

Starting at 15:40 UTC on 15 Feb 2018 a subset of customers in West Europe may experience difficulties connecting to resources hosted in this region. This may include Web Apps, SQL Database and Storage. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - West Europe

Starting at 15:40 UTC on 15 Feb 2018 a subset of customers in West Europe may experience difficulties connecting to resources hosted in this region. This may include Web Apps, SQL Database and Storage. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - West Europe

Engineers are investigating alerts in West Europe. This may affect App Services, Storage and SQL Database. Additional information will be provided shortly.

Last Update: A few months ago

Service Bus - West US

Starting at 13:05 UTC on 27 Jan 2018 a subset of customers using Service Bus in West US may experience difficulties connecting to resources hosted in this region. Customers trying to connect to Service Bus queues, topics, or Event Hubs may experience internal server errors or potential timeouts. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Service Bus - West US

Starting at 13:05 UTC on 27 Jan 2018 a subset of customers using Service Bus or Event Hubs in West US may experience difficulties connecting to resources hosted in this region. Customers trying to connect to Service Bus queues, topics, or Event Hubs may experience internal server errors or potential timeouts. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Service Bus - West US

Starting at 13:05 UTC on 27 Jan 2018 a subset of customers using Service Bus in West US may experience difficulties connecting to resources hosted in this region. Customers trying to connect may experience internal server errors or potential timeouts. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Service Bus - West US

An alert for Service Bus in West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Service Bus - West US

Starting at 07:10 UTC on 27 Jan 2018 a subset of customers using Service Bus in West US may experience timeouts or errors when accessing Service Bus queues hosted in this region. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service - East US

Starting at 08:02 UTC on 24 Jan 2018 a subset of customers using App Services in East US may experience intermittent latency, timeouts or HTTP 500-level response codes while performing service management operations such as site create / delete and moving resources on App Service deployments. Auto-scaling and loading site metrics may also be impacted. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps - East US

Starting at 08:02 UTC on 24 Jan 2018 a subset of customers using App Service \ Web Apps in East US may experience intermittent latency, timeouts or HTTP 500-level response codes while performing service management operations such as site create / delete and moving resources on App Service deployments. Autoscaling and loading site metrics may also be impacted. Engineers continue to investigate to isolate the underlying cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps - East US

Starting at 08:02 UTC on 24 Jan 2018 a subset of customers using App Service \ Web Apps in East US may experience intermittent latency, timeouts, or HTTP 500-level response codes while performing service management operations such as site create, delete and resource moves on App Service deployments. Auto-scaling and loading site metrics may also be impacted. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service - Service management issues

SUMMARY OF IMPACT: Between 20:00 UTC on 22 Jan 2018 and 21:30 UTC on 23 Jan 2018, a subset of customers may have encountered the error message "The resource you are looking for has been removed, had its name changed, or is temporarily unavailable" when viewing the "Application Settings" blade under App Services in the Management Portal. This error would have prevented customers from performing certain service management operations on their existing App Service plan. During the impact window, existing App Services resources should have remained functioning in the state they were in. PRELIMINARY ROOT CAUSE: Engineers determined that a recently deployed update led to these issues. MITIGATION: Engineers deployed a hotfix to address the impact. NEXT STEPS: Engineers will review deployment procedures to prevent future occurrences.

Last Update: A few months ago

App Service - Service management issues

Engineers are aware of a recent issue for App Service which has now been mitigated. More information will be provided shortly.

Last Update: A few months ago

Active Advisory

Starting at 20:00 UTC on 22 Jan 2018, a subset of customers using App Services in multiple regions may have intermittently encountered the following error when attempting service management operations for their App Services: "the resource you are looking for has been removed, had its name changed, or is temporarily unavailable." Engineers have deployed a fix to mitigate this issue.

Last Update: A few months ago

Active Advisory

Starting at 20:00 UTC on 22 Jan 2018 a subset of customers using App Services in multiple regions may intermittently encounter the following error when attempting service management operations for their App Services: "the resource you are looking for has been removed, had its name changed, or is temporarily unavailable." This issue does not impact existing App Service runtime operations. Engineers have identified the root cause and are deploying a fix. The next update will be provided upon mitigation.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at approximately 08:00 UTC on 19 Jan 2018, a subset of customers with resources hosted in West Europe may experience connection failures or time out errors when accessing their resources. Customers may also experience higher than expected network latency when accessing Virtual Machines / Storage accounts or receive failure notifications when performing service management operations - such as create, update, delete. Engineers have determined the possible underlying cause and are implementing mitigation steps. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at approximately 08:00 UTC on 19 Jan 2018, a subset of customers with resources hosted in West Europe may experience connection failures or time out errors when accessing their resources. Customers may also experience higher than expected network latency when accessing Virtual Machines or Storage accounts. Some customers may also receive failure notifications when performing service management operations - such as create, update, delete. Engineers are investigating a possible network latency issue between a subset of compute scale units and storage scale units. Engineers are continuing to work on mitigation options. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at approximately 08:00 UTC on 19 Jan 2018, a subset of customers with resources hosted in West Europe may experience connection failures or time out errors when accessing their resources. Customers may also experience higher than expected network latency when accessing Virtual Machines or Storage accounts. Engineers are investigating a possible network latency issue between a subset of compute scale units and storage scale units. Engineers are working to determine mitigation options. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at approximately 08:00 UTC on 19 Jan 2018, a subset of customers with resources hosted in West Europe may experience connection failures or time out errors when accessing their resources. Customers may also experience higher than expected network latency when accessing Virtual Machines or Storage accounts. Engineers are investigating a possible network latency issue between a subset of compute scale units and storage scale units. Engineers are working to determine mitigation options. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Active Advisory

Engineers are investigating alerts for Multi-Factor Authentication. Starting at 21:52 UTC on 18 Jan 2018 new users attempting to register the Microsoft Authenticator app on iOS, Android, or Windows Phone may experience intermittent registration failures. This issue is only impacting a small subset of Azure MFA customers. On-premises MFA Server is not impacted.

Last Update: A few months ago

Multiple Azure Services - West US

Starting at approximately 22:58 UTC on 14 Jan 2018 a limited subset of customers dependent on a storage service in West US may experience latency or failures connecting to certain resources. In addition to the storage service, impacted services which leverage the service include: App Services (Web, Mobile and API Apps), Site Recovery, Azure Search, and Redis Cache. Engineers are actively investigating the impacted storage service and developing mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - West US

Starting at approximately 22:58 UTC on 14 Jan 2018 a subset of customers in West US may experience difficulties connecting to certain resources or latency. Impacted services include: App Services (Web, Mobile and API Apps), Site Recovery, Azure Search, and Redis Cache. Engineers are aware of this issue and are actively investigating a potential underlying Storage issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service - West US

An alert for App Service in West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Service Bus - Australia East

Starting at 04:00 UTC on 13 Dec 2017, a limited subset of customers using Service Bus and Event Hubs in Australia East may experience intermittent issues when connecting to resources from the Azure Management Portal or programmatically. Services offered within Service Bus, including Azure Service Bus Queue and Service Bus Topics may also be affected. Engineers continue to apply mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Service Bus - Australia East

Starting at 04:00 UTC on 13 Dec 2017, a subset of customers using Service Bus in Australia East may experience intermittent issues when connecting to resources from the Azure Management Portal or programmatically. Engineers are aware of this issue and are actively applying mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Service Bus - Australia East

An alert for Service Bus in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

App Service - West Europe

Starting at 11:58 UTC on 04 Dec 2017, engineers have identified that a limited subset of customers using App Service in West Europe may receive HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Customers who are impacted can find additional details in their management portal (https://portal.azure.com).

Last Update: A few months ago

App Service - West Europe

Starting at 11:58 UTC on 04 Dec 2017, engineers have identified that a limited subset of customers using App Service in West Europe may receive HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Customers who are impacted can find additional details in their management portal (https://portal.azure.com).

Last Update: A few months ago

App Service - West Europe

Starting at 11:58 UTC on 04 Dec 2017, engineers have identified that a limited subset of customers using App Service in West Europe may receive HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Customers who are impacted can find additional details in their management portal (https://portal.azure.com).

Last Update: A few months ago

App Service - West Europe

Starting at 11:58 UTC on 04 Dec 2017, engineers have identified that a limited subset of customers using App Service in West Europe may receive HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Customers who are impacted can find additional details in their management portal (https://portal.azure.com).

Last Update: A few months ago

App Service - West Europe

Starting at 11:58 UTC on 04 Dec 2017, engineers have identified that a limited subset of customers using App Service in West Europe may receive HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Customers who are impacted can find additional details in their management portal (https://portal.azure.com).

Last Update: A few months ago

App Service - West Europe

Engineers have identified that a limited subset of customers using App Service in West Europe may receive HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Customers who are impacted can find additional details in their management portal (https://portal.azure.com).

Last Update: A few months ago

Storage - Multiple Regions

An alert for Storage in Multiple Regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - Multiple Regions

Starting at 18:28 UTC on 21 Nov 2017 a subset of customers may receive failure notifications when performing service management operations - such as create, update, delete - when attempting to manage their Storage Accounts. Retries of these operations may succeed. Azure Monitor customers may also see impact to API calls to turn on diagnostic settings. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.
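
Because retries of these management operations may succeed, clients that issue them programmatically often wrap the call in a check-then-retry loop so a retry after an ambiguous failure does not trip over a resource that was in fact created. The sketch below is a generic Python illustration under that assumption; create_storage_account and storage_account_exists are hypothetical stand-ins for whichever management client an application uses.

    import time

    def ensure_storage_account(create_storage_account, storage_account_exists,
                               name, max_attempts=4, delay=10.0):
        """Create a storage account, retrying only while it does not yet exist."""
        for attempt in range(1, max_attempts + 1):
            if storage_account_exists(name):       # hypothetical existence check
                return name                        # an earlier attempt already succeeded
            try:
                create_storage_account(name)       # hypothetical management call
                return name
            except Exception:
                if attempt == max_attempts:
                    raise
                time.sleep(delay)                  # wait before re-checking and retrying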

Last Update: A few months ago

Network Infrastructure - North Central US

SUMMARY OF IMPACT: Between 05:40 and 11:45 UTC on 19 Nov 2017, a subset of customers in North Central US may have experienced difficulties connecting to resources hosted in this region due to an underlying Infrastructure event which is now mitigated. Downstream impact was seen by some Azure services that rely on Networking, including Virtual Machines, App Services, Visual Studio Team Services, Key Vault and Scheduler, manifesting in increased latency and timeouts for some customers during this impact window. PRELIMINARY ROOT CAUSE: Engineers determined that a network device experienced a hardware fault and that network traffic was not automatically rerouted. MITIGATION: Engineers took the faulty network device out of rotation and rerouted network traffic to mitigate the issue. NEXT STEPS: Engineers will continue to investigate the full root cause and a report will be published in approximately 48-72 hours.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at 05:40 UTC on 19 Nov 2017 a subset of customers in North Central US may experience difficulties connecting to resources hosted in this region due to an underlying Infrastructure event which is currently under investigation. Azure Services that rely on Networking may experience downstream impact, including Virtual Machines, App Services, VSTS, Key Vault and Scheduler, manifesting in increased latency and timeouts for some customers. Engineers have determined the underlying cause and are exploring mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at 05:40 UTC on 19 Nov 2017 a subset of customers in North Central US may experience difficulties connecting to resources hosted in this region due to an underlying Infrastructure event which is currently under investigation. Azure Services that rely on Networking may experience downstream impact, including Virtual Machines, App Services, VSTS, Key Vault and Scheduler, manifesting in increased latency and timeouts for some customers. Engineers have determined the underlying cause and are exploring mitigation options. Some customers may see recovery of some services. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at 05:40 UTC on 19 Nov 2017 a subset of customers in North Central US may experience difficulties connecting to resources hosted in this region due to an underlying Infrastructure event which is currently under investigation. Azure Services that rely on Networking may experience downstream impact. Some customers using App Services and VSTS may experience latency or timeouts. Engineers are continuing to investigate the underlying cause. Some customers may be starting to see signs of recovery. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at 05:40 UTC on 18 Nov 2017 a subset of customers in North Central US may experience difficulties connecting to resources hosted in this region due to an underlying Infrastructure event which is currently being investigated. Azure Services that rely on Networking may experience downstream impact. Some customers using App Services and VSTS may experience latency or timeouts. Engineers are investigating to establish the underlying cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Infrastructure - North Central US

Starting at 05:40 UTC on 18 Nov 2017 a subset of customers in North Central US may experience difficulties connecting to resources hosted in this region due to an underlying Infrastructure event which is currently being investigated. Azure Services that rely on Networking may experience downstream impact. App Services and VSTS may experience latency or timeouts. Engineers are investigating to establish the underlying cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Central US

Starting at 05:40 UTC on 18 Nov 2017 a subset of customers using Virtual Machines in North Central US may experience difficulties connecting to resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Central US

An alert for Virtual Machines in North Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Service Bus - West Europe

Engineers are aware of current issues for a subset of customers using Service Bus in West Europe who may intermittently experience timeouts or errors when accessing Service Bus queues and topics hosted in this region. Customers impacted by this issue will receive the latest updates via targeted messaging within their Management Portal: https://portal.azure.com.

Last Update: A few months ago

Service Bus - West Europe

Engineers are investigating alerts for Service Bus in West Europe. More information will be provided as it is known.

Last Update: A few months ago

SQL Database - North Europe and West Europe

Starting at 04:45 UTC on 11 Nov 2017, customers using SQL Database in West Europe and North Europe may experience issues provisioning new databases and managing existing databases via the Azure Management Portal (https://portal.azure.com) and Powershell. During this time, customers may also experience higher than normal latency resulting in errors or timeouts ("Gateway Timeout"). Customers using HDInsight may also receive deployment failure notifications for the creation of new HDInsight clusters in these regions. Engineers have identified a possible fix and are exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

SQL Database - North Europe and West Europe

Starting at 04:45 UTC on 11 Nov 2017, customers using SQL Database in West Europe and North Europe may experience issues provisioning new databases and managing existing databases via the Azure Management Portal (https://portal.azure.com) and Powershell. During this time, customers may experience higher than normal latency resulting in errors or timeouts ("Gateway Timeout"). Customers using HDInsight may also receive deployment failure notifications for the creation of new HDInsight clusters in these regions. Engineers have identified a possible fix and are exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

SQL Database - North Europe and West Europe

Starting at 04:45 UTC on 11 Nov 2017, customers using SQL Database in West Europe and North Europe may experience issues provisioning new databases and managing existing databases via the Azure Management Portal (https://portal.azure.com) and Powershell. During this time, customers may experience higher than normal latency resulting in errors or timeout ("Gateway Timeout"). Customers using HDInsight may also receive deployment failure notifications for the creation of new HDInsight clusters in these regions. Engineers have identified a possible fix for the underlying cause and are exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

SQL Database - North Europe and West Europe

Starting at 04:45 UTC on 11 Nov 2017, customers using SQL Database in West Europe and North Europe may experience issues provisioning new databases and managing existing databases via the Azure Management Portal (https://portal.azure.com) and Powershell, resulting in an error or timeout ("Gateway Timeout"). Customers may also experience HTTP 500 errors. Customers using HDInsight may also receive deployment failure notifications for the creation of new HDInsight clusters in these regions. Engineers continue to investigate a possible underlying cause and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - North Europe and West Europe

Starting at 04:45 UTC on 11 Nov 2017, customers using SQL Database in West Europe and North Europe may experience issues provisioning new databases and managing existing databases via the Azure Management Portal (https://portal.azure.com) and Powershell, resulting in an error or timeout ("Gateway Timeout"). Customers may also experience HTTP 500 errors. Customers using HDInsight may also receive deployment failure notifications for the creation of new HDInsight clusters in these regions. Engineers are aware of this issue and are continuing to investigate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - North Europe and West Europe

Starting at approximately 06:02 UTC on 11 Nov 2017, some customers using SQL Database in West Europe and North Europe may experience issues provisioning new databases and managing existing databases, via the Azure Management Portal (https://portal.azure.com) and Powershell, resulting in an error or timeout ("Gateway Timeout"). Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - North Europe and West Europe

An alert for SQL Database in North Europe and West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

SQL Database - West Europe

An alert for SQL Database in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - North Europe and West Europe

Starting at 09:21 UTC on 10 Nov 2017 a subset of customers using Virtual Machines and Redis Cache in North Europe and West Europe may experience difficulties connecting to resources hosted in these regions. Engineers have identified a possible fix, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe and West Europe

Starting at 09:21 UTC on 10 Nov 2017 a subset of customers using Virtual Machines in North Europe and West Europe may experience difficulties connecting to resources hosted in these regions. Engineers have identified a possible fix, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe and West Europe

An alert for Virtual Machines in North Europe and West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

DevTestLab - Multi Region

Starting at 18:00 UTC on 06 Nov 2017, a subset of customers using Azure DevTest Labs may be unable to view existing Labs resources in their subscription. Existing runtime Labs are not impacted. Engineers have identified a potential root cause as an unhealthy backend service, and are in the process of applying mitigation. As a workaround, subscription owners are able to restore visibility of the impacted labs to lab users. Details on how to do this can be found in the impacted customers' management portal, within Azure Service Health. The next update will be in 4 hours or as events warrant.
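
The advisory does not spell out the workaround steps here. One plausible shape for it, assuming the fix amounts to re-granting lab users access at the lab's resource scope, is a role assignment issued by a subscription owner; the Python sketch below drives the Azure CLI, and the "DevTest Labs User" role name and resource ID format are assumptions for illustration.

    import subprocess

    def grant_lab_access(user_principal_name, lab_resource_id):
        """Re-grant a lab user visibility of a lab by assigning a role at the lab scope."""
        subprocess.run(
            [
                "az", "role", "assignment", "create",
                "--assignee", user_principal_name,  # the affected lab user
                "--role", "DevTest Labs User",      # assumed built-in role name
                "--scope", lab_resource_id,         # assumed form: /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.DevTestLab/labs/<lab>
            ],
            check=True,
        )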

Last Update: A few months ago

DevTestLab - Multi Region

Starting at 18:00 UTC on 06 Nov 2017, a subset of customers using Azure DevTest Labs may be unable to view existing Labs resources in their subscription. Existing runtime Labs are not impacted. Engineers have identified a potential root cause as an unhealthy backend service, and are in the process of applying mitigation. As a workaround, subscription owners are able to restore visibility of the impacted labs to lab users. Details on how to do this can be found in the impacted customers' management portal, within Azure Service Health. The next update will be in 4 hours or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are also aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying network infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers continue investigating possible underlying causes, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US and South Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are currently investigating previous updates and deployments to the region along with other possible network level issues, and are taking additional steps to mitigate impact. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops or time outs when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops or time outs when accessing Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:45 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops or time outs when accessing Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

An alert for Network Infrastructure in North Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Alert - North Central US

Engineers are investigating alerts in North Central US. Additional information will be provided as it becomes available.

Last Update: A few months ago

Service Bus - East US

An alert for Service Bus in East US is being investigated.

Last Update: A few months ago

Service Bus - East US

An alert for Service Bus in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

SQL Database - UK West

Starting at 04:07 UTC on 05 Nov 2017 a subset of customers using SQL Database in UK West may experience difficulties connecting to resources in this region. New connections to existing databases in this region may result in an error or timeout. Customers may also receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers have identified the underlying cause and are in the process of mitigation. The next update will be provided in 60 minutes, or as events warrant.
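
For applications that connect directly to the affected databases, a short, bounded connection retry is a common way to ride out transient connection errors or timeouts of this kind; the advisory itself does not prescribe it. The sketch below uses pyodbc as one example driver, with a placeholder connection string supplied by the caller.

    import time
    import pyodbc

    def connect_with_retry(connection_string, max_attempts=3, delay=5.0):
        """Open a database connection, retrying a few times on transient failures."""
        for attempt in range(1, max_attempts + 1):
            try:
                return pyodbc.connect(connection_string, timeout=30)
            except pyodbc.Error:
                if attempt == max_attempts:
                    raise
                time.sleep(delay)  # brief pause before the next connection attempt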

Last Update: A few months ago

SQL Database - UK West

Starting at 04:07 UTC on 05 Nov 2017 a subset of customers using SQL Database in UK West may experience difficulties connecting to resources in this region. New connections to existing databases in this region may result in an error or timeout. Customers may also receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers have identified the preliminary root cause and are in the process of mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - UK West

Starting at 04:07 UTC on 05 Nov 2017 a subset of customers using SQL Database in UK West may experience difficulties connecting to resources in this region. Customers may also receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers have identified the preliminary root cause and are in the process of mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - UK West

Starting at 04:07 UTC on 05 Nov 2017 a subset of customers using SQL Database in UK West may experience difficulties connecting to resources in this region. Customers may also receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - UK West

An alert for SQL Database in UK West is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on 02 Nov 2017, a subset of customers may have experienced issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Engineers have applied mitigation and are in the final stages of validating that there is no further customer impact.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on 02 Nov 2017, a subset of customers may experience issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Engineers are observing signs of recovery and the next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on 02 Nov 2017, a subset of customers may experience issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Other services with dependencies on Storage may also experience impact such as Virtual Machines, Cloud Services, Backup, Azure Site Recovery, VSTS Load Testing and Azure Search. Retries may be successful. Impact for this issue is limited to Service Management functions and Service Availability for existing resources should not be affected. Engineers are implementing steps to mitigate the incident. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on 02 Nov 2017, a subset of customers may experience issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Other services with dependencies on Storage may also experience impact. Virtual Machines and Cloud Services customers may experience intermittent failures when attempting to provision resources. Azure Backup and Azure Site Recovery may also experience failures. VSTS Load Testing customers may see load test failures. Impact for this issue is limited to Service Management functions, and existing resources should not be affected. Engineers have determined the underlying issue and are exploring mitigation options. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on 02 Nov 2017, a subset of customers may experience issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Other services with dependencies on Storage may also experience impact. Virtual Machines and Cloud Services customers may experience intermittent failures when attempting to provision resources. Azure Backup and Azure Site Recovery may also experience failures. VSTS Load Testing customers may see load test failures. Impact for this issue is limited to Service Management functions, and existing resources should not be affected. Engineers have determined the underlying issue and are exploring mitigation options. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on 02 Nov 2017, a subset of customers may experience issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Other services with dependencies on Storage may also experience impact. Virtual Machines and Cloud Services customers may experience intermittent failures when attempting to provision resources. Azure Backup and Azure Site Recovery may also experience failures. VSTS Load Testing customers may see load test failures. Impact for this issue is limited to Service Management functions, and existing resources should not be affected. Engineers have determined the underlying issue and are exploring mitigation options. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on 02 Nov 2017, due to an underlying Storage incident, Azure services that leverage Storage may experience impact. Virtual Machines or Cloud Services customers may experience failures when attempting to provision resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Customers leveraging Azure Backup may experience replication failures. Engineers are currently investigating to determine the underlying root cause. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Multiple Regions

Starting at 11:40 UTC on 02 Nov 2017, customers may experience errors when attempting to carry out management operations - such as create, update, delete - on their Storage resources. Engineers are investigating and the next update will be provided in 60 minutes.

Last Update: A few months ago

Azure IoT Hub - Multiple Regions

Starting at 21:15 UTC on 31 Oct 2017 a subset of customers using Azure IoT Hub in multiple regions may experience difficulties connecting to resources hosted in these regions using the MQTTS and AMQPS protocols. HTTPS will not be impacted. Customers using IoT Suite may also be impacted. Engineers are currently deploying a mitigation and customers should start seeing signs of recovery. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure IoT Hub - Multiple Regions

Starting at 21:15 UTC on 31 Oct 2017 a subset of customers using Azure IoT Hub in multiple regions may experience difficulties connecting to resources hosted in these regions. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure IoT Hub - Multiple Regions

Starting at 21:15 UTC on 31 Oct 2017 a subset of customers using Azure IoT Hub in multiple regions may experience difficulties connecting to resources hosted in these regions. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure IoT Hub - Multiple Regions

An alert for Azure IoT Hub in multiple regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

HDInsight - South Central US

Starting at 14:59 UTC on 30 Oct 2017 a subset of customers using HDInsight in South Central US may receive deployment failure notifications when creating new HDInsight clusters in this region. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. As a workaround, customers can deploy HDInsight Clusters in other US regions. The next update will be provided in 2 hours, or as events warrant.
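
Where cluster provisioning is scripted, the workaround above can be expressed as a simple fallback loop over candidate regions. In the Python sketch below, create_cluster is a hypothetical stand-in for whichever provisioning call a pipeline uses, and the region list is illustrative only.

    def create_cluster_with_fallback(create_cluster, cluster_name,
                                     regions=("East US", "West US 2", "Central US")):
        """Try to provision a cluster in each candidate region until one succeeds."""
        last_error = None
        for region in regions:
            try:
                return create_cluster(cluster_name, region)  # hypothetical provisioning call
            except Exception as exc:  # e.g. a deployment failure in the impacted region
                last_error = exc
        raise last_error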

Last Update: A few months ago

Storage - West US

Starting at 16:45 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is partially mitigated. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have enacted mitigation for the primary root cause and are currently assessing and mitigating residual impact. Customers should start seeing signs of mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 16:45 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and customers should start seeing signs of mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 16:45 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and customers should start seeing signs of mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 16:45 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and customers should start seeing signs of mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 16:45 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 10:37 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 10:37 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service - South Central US

Starting at 18:48 UTC on 11 Oct 2017 a subset of customers using App Service in South Central US may receive intermittent HTTP 500-level response codes, experience timeouts or high latency when accessing App Service (Web, Mobile and API Apps) deployments hosted in this region. Engineers have identified a possible fix and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service - South Central US

An alert for App Service in South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Portal Access Issues

SUMMARY OF IMPACT: Between 08:16 and 15:00 UTC on 10 Oct 2017, customers using Visual Studio Team Services may have experienced difficulties connecting to resources hosted by VisualStudio.com. PRELIMINARY ROOT CAUSE: Engineers suspected that a recent deployment increased the load on servers that handle requests to Shared Platform Services. MITIGATION: Engineers scaled out the number of web roles in the Shared Platform Service to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences. More information can be found on https://aka.ms/VSTS49171007

Last Update: A few months ago

Visual Studio Team Services - Portal Access Issues

SUMMARY OF IMPACT: Between 08:16 and 15:00 UTC on 10 Oct 2017, customers using Visual Studio Team Services may have experienced difficulties connecting to resources hosted by VisualStudio.com. PRELIMINARY ROOT CAUSE: Engineers suspected that a recent deployment increased the load on servers that handle requests to Shared Platform Services. MITIGATION: Engineers scaled out the number of web roles in the Shared Platform Service to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences. More information can be found on https://aka.ms/VSTS49171007

Last Update: A few months ago

Engineers investigating issues with Visual Studio Team Services

Engineers are investigating an alert for Visual Studio Team Services (VSTS). At this stage, most users should be seeing signs of mitigation. Engineers are still working to fix the root cause. More information is available here: https://aka.ms/VSTS49171007

Last Update: A few months ago

Engineers investigating issues with Visual Studio Team Services

Engineers are investigating an alert for Visual Studio Team Services (VSTS). More information is available here: https://aka.ms/VSTS49171007

Last Update: A few months ago

Engineers investigating issues with Visual Studio Team Services

Engineers are investigating an alert for Visual Studio Team Services (VSTS). More information is available here: https://aka.ms/VSTS49171007

Last Update: A few months ago

Engineers investigating issues with Visual Studio Team Services

Engineers are investigating an alert for Visual Studio Team Services (VSTS). More information is available here: https://blogs.msdn.microsoft.com/vsoservice/?p=15176

Last Update: A few months ago

Investigating - Backup and Site Recovery - UK South

Starting at 06:10 UTC on 09 Oct 2017, a subset of customers using Backup and Site Recovery in UK South may receive failure notifications when performing service management operations via Powershell or the Azure Management Portal (https://portal.azure.com), for resources hosted in this region. Engineers suspect that a recent deployment task impacted instances of a backend service which became unhealthy, preventing requests from completing. Engineers are currently working on rolling back this recent deployment task as a potential mitigation path. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Investigating - Backup and Site Recovery - UK South

Starting at 06:10 UTC on 09 Oct 2017, a subset of customers using Backup and Site Recovery in UK South may receive failure notifications when performing service management operations via Powershell or the Azure Management Portal (https://portal.azure.com), for resources hosted in this region. Engineers suspect that a recent deployment task impacted instances of a backend service which became unhealthy, preventing requests from completing. Engineers are currently working on rolling back this recent deployment task as a potential mitigation path. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Investigating - Backup and Site Recovery - UK South

Starting at 06:10 UTC on 09 Oct 2017 a subset of customers using Backup or Site Recovery in UK South may receive failure notifications when performing service management operations, via Powershell or the Azure Management Portal (https://portal.azure.com), for resources hosted in this region. This will not impact ongoing backup and replication. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Investigating - Backup and Site Recovery - UK South

Starting at 06:10 UTC on 09 Oct 2017 a subset of customers using Backup in UK South may receive failure notifications when creating new Backup or Site Recovery resources in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Investigating - Backup & Site Recovery - UK South

Starting at 06:10 UTC on 09 Oct 2017 a subset of customers using Backup or Site Recovery in UK South may receive failure notifications when creating new Backup or Site Recovery resources in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Backup - UK South

An alert for Azure Backup in UK South is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Infrastructure issue impacting multiple Azure services - Australia East

Engineers are aware of a recent issue for Infrastructure in Australia East which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - Australia East

An issue with Network Infrastructure in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines and App Service - Australia East

An alert for Virtual Machines and App Service in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Azure Cloud Shell - Authentication Issues

Starting at approximately 04:30 UTC on 04 Oct 2017, a subset of customers using Cloud Shell may experience the following authentication error when running an Azure CLI command: "A Cloud Shell credential problem occurred. When you report the issue with the error below, please mention the hostname 'host-name'. Could not retrieve token from local cache." Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.
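
The error text points at a stale local token cache. A common generic remedy for that class of failure, not confirmed here as the fix for this specific incident, is to clear the CLI's cached credentials and sign in again; the Python sketch below simply drives the two Azure CLI commands involved.

    import subprocess

    def refresh_cli_credentials():
        """Clear cached Azure CLI credentials and start an interactive sign-in."""
        subprocess.run(["az", "account", "clear"], check=True)  # drop cached tokens
        subprocess.run(["az", "login"], check=True)             # re-authenticate interactively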

Last Update: A few months ago

Azure Cloud Shell - Authentication Issues

Starting at approximately 04:30 UTC on 04 Oct 2017, a subset of customers using Cloud Shell may experience the following authentication error when running an Azure CLI command: "A Cloud Shell credential problem occurred. When you report the issue with the error below, please mention the hostname ''. Could not retrieve token from local cache." Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Cloud Shell - Authentication Issues

Starting at approximately 04:30 UTC on 04 Oct 2017, a subset of customers using Cloud Shell may experience the following authentication error when running an Azure CLI command: "A Cloud Shell credential problem occurred. When you report the issue with the error below, please mention the hostname ''. Could not retrieve token from local cache." Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Cloud Shell - Authentication Issues

Starting at approximately 04:30 UTC on 03 Oct 2017, a subset of customers using Cloud Shell may experience the following authentication error when running an Azure CLI command: "A Cloud Shell credential problem occurred. When you report the issue with the error below, please mention the hostname ''. Could not retrieve token from local cache." Engineers are aware of this issue and are actively applying a tentative mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Cloud Shell - Authentication Issues

An alert for Azure Cloud Shell is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage Related Incident - North Europe

SUMMARY OF IMPACT: Between 13:27 and 20:15 UTC on 29 Sep 2017, a subset of customers in North Europe may have experienced difficulties connecting to resources hosted in this region due to availability loss of a Storage scale unit. Services that depend on the impacted Storage resources in this region that may have seen impact are Virtual Machines, Cloud Services, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Azure Functions, Time Series Insights, Stream Analytics, HDInsight, Data Factory and Azure Scheduler. PRELIMINARY ROOT CAUSE: Engineers have determined that this was the result of a facility issue that resulted in physical node reboots as a precautionary measure. The nodes impacted were primarily from a single storage stamp. Recovery took longer than expected, and the full Public RCA will include details on why these nodes did not recover more quickly. MITIGATION: Engineers manually checked the resources in the data center and initiated a restart of the Storage nodes that were impacted. NEXT STEPS: Engineers are still assessing any residual customer impact as well as understanding the cause for the initial event. Any residual impacted customers will be contacted via their management portal (https://portal.azure.com).

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a limited subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which remains under investigation. Virtual Machines and Storage remain impacted with a very limited number of customers experiencing issues. Application Insights, Azure Search, Azure Monitor, Redis Cache, Azure Site Recovery, Data Factory, Azure Scheduler, HDInsight, Azure Backup, App Services\Web Apps, Stream Analytics, Cloud Services and Azure Functions are reporting recovery. Engineers are continuing to work on recovering the remaining two services. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Backup, App Services\Web Apps, and Azure Functions. Application Insights, Azure Search, Azure Monitor, Redis Cache, Azure Site Recovery, Data Factory, Azure Scheduler, HDInsight and Stream Analytics are reporting recovery. Engineers are attempting alternative mitigation steps in an attempt to recover the remaining unhealthy storage machines and services. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Azure Functions, Stream Analytics, HDInsight, Data Factory and Azure Scheduler. Media Services, Application Insights, Azure Search and Azure Site Recovery are reporting recovery. Engineers are seeing signs of recovery and are continuing to recover the remaining unhealthy storage machines and validate the fix. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight, Data Factory and Azure Scheduler. Engineers are seeing signs of recovery and are continuing to recover unhealthy storage machines and validate the fix. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight, Data Factory and Azure Scheduler. Engineers are continuing to recover unhealthy storage machines in order to mitigate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight and Data Factory. Engineers are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight and Data Factory. Engineers are seeing signs of recovery and are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Microsoft Intune, Application Insights, Azure Functions, Stream Analytics and Media Services. Engineers are seeing signs of recovery and are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache and Azure Monitoring. Engineers are seeing signs of recovery and have identified a potential underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services, Azure Cache and Azure Monitoring. Engineers are seeing signs of recovery and have identified a potential underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe

An alert for Virtual Machines in North Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

App Service - South Central US

Starting at 22:52 UTC on 23 Sep 2017 a subset of customers using App Service in South Central US may experience intermittent latency, timeouts, or HTTP 500-level response codes while performing service management operations - such as site create, delete, and move resources - on their App Service applications. This issue has been mitigated and the impacted Azure service has returned to a healthy state. A resolution statement will be provided within 60 minutes.

Last Update: A few months ago

App Service - South Central US

Starting at 22:52 UTC on 23 Sep 2017 a subset of customers using App Service in South Central US may experience intermittent latency, timeouts, or HTTP 500-level response codes while performing service management operations - such as site create, delete, and move resources - on App Service applications. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service - South Central US

An alert for App Service in South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Unable to Access Azure Management Portal

Starting at 13:41 UTC on 21 Sep 2017 a subset of customers using Microsoft Azure portal may intermittently be unable to log into the Azure Management Portal (https://portal.azure.com). Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Microsoft Azure Portal - Investigating Alerts

We are investigating customer reports of issues with https://portal.azure.com when using the Chrome browser. As a workaround, customers can use a different browser to access the Azure Portal.

Last Update: A few months ago

Azure Active Directory Domain Services - Issues enabling and disabling AAD

Our investigation of alerts for Azure Active Directory Domain Services in multiple regions is complete. Due to the extremely limited number of customers impacted by this issue, we are providing direct communication to those experiencing an issue via the Azure Management Portal (https://portal.azure.com and https://manage.windowsazure.com).

Last Update: A few months ago

Azure Active Directory Domain Services - Issues enabling and disabling AAD

Starting at 16:30 UTC on 06 Sep 2017 a very limited subset of customers using Azure Active Directory Domain Services in multiple regions may receive failure notifications when performing service management operations - such as create, update, delete or change configuration - for resources hosted in these regions. Existing deployments should continue to work as expected. Engineers are continuing to investigate the impact and are deploying a fix to mitigate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory Domain Services - Issues enabling and disabling AAD

Starting at 16:30 UTC on 06 Sep 2017 a subset of customers using Azure Active Directory Domain Services in multiple regions may receive failure notifications when performing service management operations - such as create, update, delete or change configuration - for resources hosted in these regions. Existing deployments should continue to work as expected. Engineers made changes to a solution previously deployed to a test region for analysis. The updated fix has been redeployed to a test region and engineers are verifying that the updated fix effectively mitigates this issue in production. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory Domain Services - Issues enabling and disabling AAD

Starting at 16:30 UTC on 06 Sep 2017 a subset of customers using Azure Active Directory Domain Services in multiple regions may receive failure notifications when performing service management operations - such as create, update, delete or change configuration - for resources hosted in these regions. Existing deployments should continue to work as expected. Engineers have deployed a solution to a test region for analysis and are currently verifying that the fix effectively mitigates this issue in production. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory Domain Services - Issues enabling and disabling AAD

Starting at 16:30 UTC on 06 Sep 2017 a subset of customers using Azure Active Directory Domain Services in multiple regions may receive failure notifications when performing service management operations - such as create, update, delete or change configuration - for resources hosted in these regions. Existing deployments should continue to work as expected. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Investigating - ExpressRoute \ ExpressRoute Circuits

Starting at 09:07 UTC on 04 Sep 2017 a subset of customers using ExpressRoute \ ExpressRoute Circuits in the Washington DC region may experience difficulties connecting to their resources over their networks relying on Microsoft peering. Engineers are aware of this issue and are continuing to actively investigate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Investigating - ExpressRoute \ ExpressRoute Circuits

Starting at 09:07 UTC on 04 Sep 2017 a subset of customers using ExpressRoute \ ExpressRoute Circuits in the Washington DC region may experience difficulties connecting to their Azure resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Investigating - ExpressRoute \ ExpressRoute Circuits

An alert for ExpressRoute \ ExpressRoute Circuits is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Azure Active Directory - North Europe

Starting at 07:23 UTC on 04 Sep 2017 a subset of customers using Azure Active Directory may experience difficulties when attempting to authenticate into My Apps resources which are dependent on Azure Active Directory IAM services. Engineers have applied and are currently verifying a fix for this issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory - Multiple Regions

Starting at 07:23 UTC on 04 Sep 2017 a subset of customers using Azure Active Directory may experience difficulties when attempting to authenticate into resources which are dependent on Azure Active Directory IAM services. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Emerging issue under investigation - West Europe

Engineers are investigating alerts for Azure Active Directory in West Europe. Additional information will be provided shortly.

Last Update: A few months ago

Emerging issue under investigation

Engineers are investigating alerts in West Europe. Additional information will be provided shortly.

Last Update: A few months ago

Emerging issue under investigation

Engineers are investigating alerts for Azure CDN. Additional information will be provided shortly.

Last Update: A few months ago

Azure Active Directory - Multi-Region

Engineers are aware of a recent issue for Azure Active Directory which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Azure Active Directory B2C - Europe

Starting at 09:10 UTC on 29 Aug 2017 a subset of customers using Azure Active Directory B2C may experience difficulties when attempting to authenticate into resources which are dependent on Azure Active Directory; this may manifest as 504 errors. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Emerging issue under investigation

Engineers are investigating alerts for AAD in North Europe. Additional information will be provided shortly.

Last Update: A few months ago

Service Fabric, SQL Database, Azure IoT Hub, HDInsight and SQL Data Warehouse - West Central US

An alert for SQL Database, SQL Data Warehouse, Service Fabric, HDInsight and Azure IoT Hub in West Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

SQL Database - West Europe

Starting at 11:13 UTC on 21 Aug 2017 a subset of customers using SQL Database in West Europe may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - West Europe

An alert for SQL Database in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - West US

Starting at 18:50 UTC on 17 Aug 2017 a subset of customers using Storage in West US may experience intermittent difficulties connecting to Storage resources hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this issue. Engineers have deployed a fix and are validating recovery. Customers should begin to experience improvements. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 18:50 UTC on 17 Aug 2017 a subset of customers using Storage in West US may experience intermittent difficulties connecting to Storage resources hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this issue. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

An alert for Storage in West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

HDInsight - Japan East

Starting at 05:30 UTC on 17 Aug 2017 a subset of customers using HDInsight in Japan East may receive failure notifications when performing service management operations - such as create or scale - for resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

HDInsight - Japan East

An alert for HDInsight in Japan East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Microsoft Azure portal - Multi-Region

Starting at 18:08 UTC on 16 Aug 2017 users deploying new management certificates or viewing existing ones in the portal may see management certificates listed as "Expired." The certificates are not actually expired; this is a rendering error in the information displayed back to the user. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Microsoft Azure portal - Multi-Region

An alert for Microsoft Azure portal is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Azure Portal - North Europe

We are currently investigating customer reports of latency with the Azure Portal in North Europe. More information will be provided when it is available.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

SUMMARY OF IMPACT: Between 10:30 and 12:34 UTC on 04 Aug 2017, a subset of customers using Visual Studio Team Services (VSTS) may have experienced '500 Internal Server Error' while accessing their VSTS portal, in addition to build failures. PRELIMINARY ROOT CAUSE: Engineers determined that some recently upgraded network devices experienced a fault and that network traffic was not automatically rerouted. MITIGATION: Engineers took the faulty network devices out of rotation and rerouted network traffic to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences. More information can be found on https://blogs.msdn.microsoft.com/vsoservice/?p=14545

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

An alert for Visual Studio Team Services is being investigated. More information will be provided as it is known.

Last Update: A few months ago

SQL Database - West US

Starting at 10:15 UTC on 03 Aug 2017 a subset of customers using SQL Database and SQL Data Warehouse in West US may experience intermittent issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Retries may be successful for some customers. Engineers have identified the underlying root cause and are in the process of mitigation. The next update will be provided in 60 minutes, or as events warrant.
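
Because the update notes that retries may be successful, application-side handling of this kind of transient connection failure is typically a retry with exponential backoff. The sketch below is a generic, hypothetical illustration of that pattern; `connect` stands in for whatever database connection routine the application already uses, and the exception type, attempt count, and delays are assumptions rather than official guidance.

```python
# Generic retry-with-backoff sketch for transient connection failures.
# `connect` is a placeholder for the application's own connection routine.
import random
import time

def connect_with_retry(connect, attempts=5, base_delay=1.0, max_delay=30.0):
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            # Exponential backoff with jitter to avoid synchronized retries.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, 1))
```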

Last Update: A few months ago

SQL Database - West US

Starting at 10:15 UTC on 03 Aug 2017 a subset of customers using SQL Database and SQL Data Warehouse in West US may experience intermittent issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Retries may be successful for some customers. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - West US

An alert for SQL Database and SQL Data Warehouse in West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - South Central US

Starting at 16:49 UTC on 28 Jul 2017 a subset of customers using Storage in South Central US may experience difficulties connecting to resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - South Central US

An alert for Storage in South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Investigating | Service Management Operation failures in multiple regions.

Engineers are currently investigating alerts for service management operations in multiple regions. A subset of customers may receive failure notifications when performing service management operations - such as create, update, delete - with their resources. Additional information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, Azure Search, Backup, Azure Scheduler, HDInsight, Virtual Machines, Redis Cache, Logic Apps, Azure Analysis Services, and Azure Resource Manager. Engineers have verified that a majority of impacted services are mitigated and are conducting final steps of validation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, Azure Search, Backup, Azure Scheduler, HDInsight, Virtual Machines, and Redis Cache. Mitigation has been applied and our monitoring system has started showing recovery. Engineers continue to validate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, and Azure Search. Multiple engineering teams are engaged in multiple workflows to mitigate the impact. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to App Services, Key Vault, and SQL Database. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - UK South

An alert for Network Infrastructure in UK South is being investigated. More information will be provided as it is known.

Last Update: A few months ago

App Service/Web apps - West Europe

Engineers are aware of a recent issue for App Service \ Web Apps in West Europe which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - Multiple regions

Starting at 09:23 UTC on 20 Jul 2017 a subset of customers using Storage may receive timeouts or failure notifications when performing service management operations - such as create, update, delete - for their storage resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - East US

Starting at 09:23 UTC on 20 Jul 2017, a subset of customers using Storage in multiple regions may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - West Europe

Engineers are aware of a recent issue for SQL Database in West Europe which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

SHD Banner - West Europe

An alert for SQL DB in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Service Map - East US

SUMMARY OF IMPACT: Between 15:41 and 23:40 UTC on 18 Jul 2017, customers using Service Map in East US may not have been able to see a list of virtual machines' data when selecting the Service Map tile. PRELIMINARY ROOT CAUSE: Engineers are still investigating the underlying cause; however, they observed increased loads in backend systems. MITIGATION: Engineers re-routed traffic from the affected cluster to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences.

Last Update: A few months ago

Service Map - East US

Starting at 15:41 UTC on 18 Jul 2017 customers using Service Map in East US may not see a list of virtual machines' data when selecting the Service Map tile. Engineers have determined that the default 30-minute time window may not display data; however, customers can select a time range of over 2 hours, which allows previously reported data to be visible. Engineers are still investigating the underlying cause but are seeing increased loads in backend systems, and are working towards reducing those loads as a possible mitigation. The next update will be provided in 4 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2

Starting at 03:45 UTC on 15 Jul 2017 customers using Virtual Machines in East US 2 may experience connection failures when trying to access a subset of Virtual Machines hosted in the region. These Virtual Machines may have also restarted unexpectedly. Other Azure services that leverage Storage may experience downstream impact. Engineers have found that a power disruption has resulted in an unexpected failure of power feeds to a single storage cluster and have now restored power to the cluster. Some customers should be seeing improvements in availability. The next update will be provided in 4 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2

Starting at 03:45 UTC on 15 Jul 2017 customers using Virtual Machines in East US 2 may experience connection failures when trying to access a subset of Virtual Machines hosted in the region. These Virtual Machines may have also restarted unexpectedly. Other Azure services that leverage Storage may experience downstream impact. Engineers have found that a power disruption has resulted in an unexpected failure of power feeds to a single storage cluster and have now restored power to the cluster. Some customers should be seeing improvements in availability. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Multiple Services - East US 2

Starting at 03:45 UTC on 15 Jul 2017 a subset of customers may experience login failures for the following services: SQL Database, Azure Database for MySQL, PostgreSQL, Azure Data Factory, Data Warehouse and Azure Site Recovery. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2

Starting at 03:45 UTC on 15 Jul 2017 customers using Virtual Machines in East US 2 may experience connection failures when trying to access a subset of Virtual Machines hosted in the region. These Virtual Machines may have also restarted unexpectedly. Other Azure services that leverage Storage may experience downstream impact. Engineers have found that a power disruption has resulted in an unexpected failure of power feeds to a single storage cluster and are working to safely restore power to the cluster. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Multiple Services - East US 2

Starting at 05:04 UTC on 15 Jul 2017 a subset of customers may experience login failures for the following services: SQL Database, Azure Database for MySQL, PostgreSQL, Azure Data Factory, Data Warehouse and Azure Site Recovery. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2

Starting at 03:45 UTC on 15 Jul 2017 customers using Virtual Machines in East US 2 may experience connection failures when trying to access a subset of Virtual Machines hosted in the region. These Virtual Machines may have also restarted unexpectedly. Other Azure services that leverage Storage may experience downstream impact. Engineers have found that a power disruption has resulted in unexpected failure of power feeds to a single storage cluster and have begun safe restoration methods to ensure data integrity as the cluster is brought back online. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Multiple Services - East US 2

Starting at 05:04 UTC on 15 Jul 2017 a subset of customers may receive failure notifications when performing service management operations or experience login failures for the following services: SQL Database, Azure Database for MySQL, PostgreSQL, Azure Data Factory, Data Warehouse and Azure Site Recovery. Engineers have identified a possible underlying cause and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2 | Investigating

Starting at 03:45 UTC on 15 Jul 2017 customers using Virtual Machines in East US 2 may experience connection failures when trying to access a subset of Virtual Machines hosted in the region. These Virtual Machines may have also restarted unexpectedly. Other Azure services that leverage Storage may experience downstream impact. Engineers are investigating a storage cluster which experienced a power supply issue and are taking steps to mitigate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2

Starting at 03:45 UTC on 15 Jul 2017 customers using Virtual Machines in East US 2 may experience connection failures when trying to access a subset of Virtual Machines hosted in the region. These Virtual Machines may have also restarted unexpectedly. Other Azure services that leverage Storage may experience downstream impact. Engineers are investigating a storage cluster which experienced connectivity issues and are taking steps to mitigate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - East US 2 | Investigating

Starting at 05:04 UTC on 15 Jul 2017 a subset of customers using SQL Database in East US 2 may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region, in addition to login failures. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2

Starting at 03:45 UTC on 15 Jul 2017 customers using Virtual Machines in East US 2 may experience connection failures when trying to access a subset of Virtual Machines hosted in the region. These Virtual Machines may have also restarted unexpectedly. Other Azure services that leverage storage may experience downstream impact. Engineers are investigating a backend networking device which is impacting a subset of Storage services. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

SQL Database - East US 2

Starting at 03:45 UTC on 15 Jul 2017 a subset of customers using SQL Database in East US 2 may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region, in addition to intermittent login failures. This is related to an underlying Virtual Machines issue in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2

Starting at 03:45 UTC on 15 Jul 2017 customers using Virtual Machines in East US 2 may experience connection failures when trying to access a subset of Virtual Machines hosted in the region. These Virtual Machines may have also restarted unexpectedly. Other Azure services that leverage storage may experience downstream impact. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2

Starting at 03:45 UTC on 15 Jul 2017 customers using Virtual Machines in East US 2 may experience connection failures when trying to access a subset of Virtual Machines hosted in the region. These Virtual Machines may have also restarted unexpectedly. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure Alert - North Central US

An alert for Network Infrastructure in North Central US and Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure Alert - North Central US

An alert for Network Infrastructure in North Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Intermittent Authentication Issues

Starting at approximately 15:20 UTC, a subset of customers may experience intermittent issues when attempting to authenticate to their Visual Studio Team Services accounts. Engineers have identified a backend service issue that is preventing requests from completing successfully. Engineers have implemented a possible fix and are taking additional extended mitigation steps. The following blog is also being actively updated and can be found here: https://aka.ms/vstsblog. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - Intermittent Authentication Issues

Starting at approximately 15:20 UTC, a subset of customers may experience intermittent issues when attempting to authenticate to their Visual Studio Team Services accounts. Engineers are actively identifying appropriate steps to mitigate the issue. The following blog is also being actively updated and can be found here: https://aka.ms/vstsblog. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - Intermittent Authentication Issues

Starting at approximately 15:20 UTC, a subset of customers may experience intermittent issues when attempting to authenticate to their Visual Studio Team Services accounts. Engineers are actively identifying appropriate steps to mitigate the issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory - Intermittent Authentication Issues

Starting at approximately 15:20 UTC, a subset of customers may experience intermittent difficulties when attempting to authenticate into their resources which are dependent on Azure Active Directory. A subset of customers attempting to authenticate to their Visual Studio Team Services accounts may also experience intermittent issues. Engineers are aware of this issue and are actively investigating appropriate mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory - Intermittent Authentication Issues

An alert for Azure Active Directory is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps; Application Insights; Azure Search (seeing signs of recovery); Backup; Event Hubs; Redis Cache (seeing signs of recovery); Stream Analytics; SQL Database; Virtual Machines.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps; Application Insights; Backup; Event Hubs; Redis Cache; Stream Analytics; SQL Database; Virtual Machines.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact: Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts or high latency when accessing Web Apps deployments. Virtual Machines may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent time out notifications when accessing their resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Backup customers may experience difficulties connecting to resources. Event Hubs may experience difficulties connecting to resources hosted in this region.

Last Update: A few months ago

Southeast Asia - Networking Issue Impacting Multiple Services

Starting at 16:21 UTC on 07 Jul 2017, customers in Southeast Asia may experience degraded performance, network drops or time outs when accessing their Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers identified recent configuration changes to a networking device that impacted network traffic accessing resources in the region. Engineers are currently rolling back the configuration changes for mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact: Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts or high latency when accessing Web Apps deployments. Virtual Machines may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent time out notifications when accessing their resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Backup customers may experience difficulties connecting to resources. Event Hubs may experience difficulties connecting to resources hosted in this region.

Last Update: A few months ago

Southeast Asia - Networking Issue Impacting Multiple Services

Starting at 16:21 UTC on 07 Jul 2017, customers in Southeast Asia may experience degraded performance, network drops or time outs when accessing their Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Southeast Asia - Networking Issue Impacting Multiple Services

Starting at 16:21 UTC on 07 Jul 2017, customers in Southeast Asia may experience degraded performance, network drops or time outs when accessing their Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Additional Impacted Services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact: Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts or high latency when accessing Web Apps deployments. Virtual Machines may experience connection failures when trying to access some Virtual Machines. Redis Cache may receive intermittent time out notifications when accessing Redis Cache resources. SQL Database may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated.

Last Update: A few months ago

Southeast Asia - Networking Issue Impacting Multiple Services

Starting at 16:21 UTC on 07 Jul 2017, customers in Southeast Asia may experience degraded performance, network drops or time outs when accessing their Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Southeast Asia Alert is Being Investigated

An alert for Virtual Machines and App Service in Southeast Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Southeast Asia Alert is Being Investigated

An alert for Southeast Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Azure Active Directory - Germany Central and Germany Northeast

An alert for Azure Active Directory in Germany Central and Germany Northeast is being investigated. More information will be provided as it is known.

Last Update: A few months ago

VPN Gateway - Applying Mitigation

Starting at 10:30 UTC on 01 Jul 2017 a subset of customers using VPN Gateway may experience intermittent failures when connecting to, or via, their VPN Gateways. Retries should be successful. In cases where retries do not succeed, customers can perform a Gateway double-reset as a workaround. This should restore Gateway connectivity in a short period of time. Please review our documentation for instructions regarding resets of VPN Gateways: https://aka.ms/d_40508413. Engineers have developed and are currently verifying a fix for this issue. The next update will be provided as events warrant.
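
Purely as an illustrative sketch (the documentation linked above remains the authoritative procedure), the double-reset workaround could be scripted with the Azure CLI's `az network vnet-gateway reset` command. The resource group, gateway name, and wait interval below are placeholder assumptions, not values from the update.

```python
# Hedged sketch of the "double reset" workaround described above, using the
# Azure CLI. GROUP and GATEWAY are placeholders; review the documentation at
# https://aka.ms/d_40508413 before resetting a production gateway.
import subprocess
import time

GROUP = "my-resource-group"   # placeholder resource group
GATEWAY = "my-vpn-gateway"    # placeholder gateway name

for attempt in range(2):      # the workaround calls for resetting twice
    subprocess.run(
        ["az", "network", "vnet-gateway", "reset",
         "--resource-group", GROUP, "--name", GATEWAY],
        check=True,
    )
    if attempt == 0:
        time.sleep(300)       # allow the first reset to complete (assumed interval)
```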

Last Update: A few months ago

VPN Gateway - Applying Mitigation

Starting at 10:30 UTC on 01 Jul 2017 a subset of customers using VPN Gateway in different regions may experience intermittent failures when connecting to VPN resources hosted in these regions. Retries should be successful. As a workaround, customers may perform a Gateway reset, twice; this will mitigate the issue. Instructions regarding resets of VPN Gateways can be found in this blog: https://aka.ms/d_40508413. Engineers have developed and are currently verifying a fix for this issue. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

VPN Gateway - Under Investigation

Starting at 10:30 UTC on 01 Jul 2017 a subset of customers using VPN Gateway in different regions may experience intermittent failures when connecting to VPN resources hosted in these regions. Retries should be successful. As a workaround, customers may perform a Gateway reset, twice; this will mitigate the issue. Instructions regarding resets of VPN Gateways can be found in this blog: https://aka.ms/d_40508413. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

VPN Gateway - Multiple Regions

Starting at 10:30 UTC on 01 Jul 2017 a subset of customers using VPN Gateway across different regions may experience intermittent difficulties connecting to VPN resources hosted in these regions. Retries should be successful. As a workaround, customers may perform a Gateway reset, twice. Instructions regarding resets of VPN Gateways can be found in this blog: https://aka.ms/d_40508413. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

VPN Gateway - Multiple Regions

An alert for VPN Gateway in multiple regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Log Analytics - East US

Starting at 09:00 UTC on 30 Jun 2017, a subset of customers using Log Analytics in East US may experience log processing delays with OMS workspaces hosted in this region. Engineers identified an unhealthy backend process impacting data ingestion as an underlying cause and have manually reconfigured the backend process. Engineers are currently monitoring the service health and identifying any additional steps needed for a full recovery of the service. Customers may begin to see new log processing without delays. The issue is impacting approximately 25% of customers in the region. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - East US

Starting at 09:00 UTC on 30 Jun 2017, a subset of customers using Log Analytics in East US may experience log processing delays with OMS workspaces hosted in this region. Engineers identified an unhealthy backend process impacting data ingestion as an underlying cause. Engineers manually reconfigured the backend process for mitigation and are currently validating whether the mitigation took effect. Customers may see new log processing without delays due to the mitigation. The issue impacted approximately 25% of customers in the region. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - East US

Starting at 09:00 UTC on 30 Jun 2017 a subset of customers using Log Analytics in East US may experience log processing delays with OMS workspaces hosted in this region. Engineers are aware of this issue and are actively implementing mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - East US

An alert for Log Analytics in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines and HDInsight - West Europe

Starting at 08:00 UTC on 29 Jun 2017 a subset of customers using Virtual Machines in West Europe may receive error notifications when provisioning certain Virtual Machines series in this region. Starting Virtual Machines, which are currently in a “Stopped (Deallocated)” state, may also return errors. Additionally, a subset of customers using HDInsight in West Europe may receive intermittent deployment failure notifications when creating new HDInsight clusters in this region. Engineers have identified a possible underlying cause, and are engaged in multiple workstreams to apply appropriate mitigation operations. The next update will be provided in 3 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at 08:00 UTC on 29 Jun 2017 a subset of customers using Virtual Machines in West Europe may receive error notifications when provisioning certain Virtual Machines series in this region. Starting Virtual Machines, which are currently in a “Stopped (Deallocated)” state, may also return errors. Engineers have identified a possible underlying cause, and are engaged in multiple workstreams to determine appropriate mitigation options. The next update will be provided in 3 hours, or as events warrant.

Last Update: A few months ago

HDInsight - West Europe

Starting at 08:00 UTC on 29 Jun 2017 a subset of customers using HDInsight in West Europe may receive intermittent deployment failure notifications when creating new HDInsight clusters in this region. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

HDInsight - West Europe

An alert for HDInsight in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at 08:00 UTC on 29 Jun 2017 a subset of customers using Virtual Machines in West Europe may receive error notifications when provisioning certain Virtual Machines series in this region. Starting Virtual Machines, which are currently in a “Stopped (Deallocated)” state, may also return errors. Engineers are aware of this issue and are actively investigating. The next update will be provided in 3 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

An alert for Virtual Machines in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

North Europe - Alert review completed

Engineers have reviewed a monitoring alert in North Europe. They have confirmed that all services are healthy and that a Microsoft Azure service incident did not occur. A review of customer reports indicates that a common customer internet service provider (ISP) may have experienced an issue. This message will be closed off in 10 minutes.

Last Update: A few months ago

Emerging issue under investigation

Engineers are investigating alerts in North Europe. Additional information will be provided shortly.

Last Update: A few months ago

Media Services \ Streaming - Possible streaming performance issues

Starting at approximately 14:19 UTC on 27 Jun 2017, a subset of customers using Media Services \ Streaming in West US and East US may experience degraded performance when streaming live and on-demand media content. Channel operations may also experience latency or failures. Engineers have developed and are currently verifying a fix for this issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Media Services \ Streaming - Possible streaming performance issues

Starting at approximately 14:19 UTC on 27 Jun 2017, a subset of customers using Media Services \ Streaming in West US and East US may experience degraded performance when streaming live and on-demand media content. Channel operations may also experience latency or failures. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Media Services \ Streaming - Possible streaming performance issues

An alert for Media Services \ Streaming in East US and West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure - Korea Central

An alert for Network Infrastructure in Korea Central is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure - Korea Central

Engineers are aware of a recent issue for Network Infrastructure in Korea Central which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Networking issue with impact to additional services

Engineers are aware of a recent issue which has now been mitigated. Customers may have experienced intermittent connection issues with their resources. Additional information will be provided shortly.

Last Update: A few months ago

Multiple Services - Non-Regional

Engineers are investigating alerts for multiple services. Customers may experience intermittent connection issues with their resources. Additional information will be provided as it is known.

Last Update: A few months ago

App Service \ Web Apps - West US 2

Starting at 06:12 UTC on 23 Jun 2017 a subset of customers using App Service \ Web Apps in West US 2 may experience difficulties connecting to resources hosted in this region. Some customers may experience 502 and 503 errors. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps - West US 2

An alert for App Service \ Web Apps in West US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - UK South

Starting at 08:02 UTC on 20 Jun 2017, a subset of customers using Virtual Machines in UK South may receive creation failure notifications when provisioning new DSv2-series and F-series Virtual Machines in this region. Starting DSv2-series and F-series Virtual Machines from “Stopped (Deallocated)” status with the same SKUs may result in errors. Existing DSv2-series, F-Series and other series of Virtual Machines are not impacted. Engineers are continuing to work on adding additional resources to the region and the extended mitigation phase is ongoing. The next update will be provided as more information is available.

Last Update: A few months ago

Virtual Machines - UK South

Starting at 08:02 UTC on 20 Jun 2017, a subset of customers using Virtual Machines in UK South may receive creation failure notifications when provisioning new DSv2-series and F-series Virtual Machines in this region. Starting DSv2-series and F-series Virtual Machines from “Stopped (Deallocated)” status with the same SKUs may result in errors. Existing DSv2-series, F-Series and other series of Virtual Machines are not impacted. Engineers are continuing to work on adding additional resources to the region and the extended mitigation phase is ongoing. The next update will be provided as more information is available.

Last Update: A few months ago

Virtual Machines - UK South

Starting at 08:02 UTC on 20 Jun 2017, a subset of customers using Virtual Machines in UK South may receive creation failure notifications when provisioning new DSv2-series and F-series Virtual Machines in this region. Starting DSv2-series and F-series Virtual Machines from “Stopped (Deallocated)” status with the same SKUs may result in errors. Existing DSv2-series, F-Series and other series of Virtual Machines are not impacted. Engineers have identified multiple workstreams for increasing additional resources in the region and have entered an extended mitigation phase. The next update will be provided as more information is available.

Last Update: A few months ago

Virtual Machines - UK South

Starting at 08:02 UTC on 20 Jun 2017, a subset of customers using Virtual Machines in UK South may receive creation failure notifications when provisioning new DSv2-series and F-series Virtual Machines in this region. Starting DSv2-series and F-series Virtual Machines from “Stopped (Deallocated)” status with the same SKUs may also result in errors. Existing DSv2-series, F-Series and other series of Virtual Machines are not impacted. Engineers are currently engaged in executing possible mitigation operations and increasing additional resources in the region. The next update will be provided as more information is available.

Last Update: A few months ago

Virtual Machines - UK South

Starting at 08:02 UTC on 20 Jun 2017, a subset of customers using Virtual Machines in UK South may receive creation failure notifications when provisioning new D-Series V2 and F-Series Virtual Machines in this region. Starting D-Series V2 and F-Series Virtual Machines from “Stopped (Deallocated)” status with the same SKUs may also result in errors. Existing D-Series V2, F-Series and other series of Virtual Machines are not impacted. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided as more information is available.

Last Update: A few months ago

Visual Studio Application Insights - Data Access Issues

Between 22:59 and 23:59 UTC on 16 Jun 2017, customers using Application Insights in East US, North Europe, South Central US and West Europe may have experienced Data Access Issues in the Azure and Application Analytics portal. The following data types were affected: Availability, Customer Event, Dependency, Exception, Metric, Page Load, Page View, Performance Counter, Request, Trace. Please refer to the Application Insights Status Blog for more information: https://aka.ms/aistatusblog. PRELIMINARY ROOT CAUSE: Engineers identified a backend configuration error as the potential root cause. MITIGATION: Engineers manually reconfigured the backend service to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences.

Last Update: A few months ago

Visual Studio Application Insights - Data Access Issues

Starting at 22:59 UTC on 16 Jun 2017, customers using Application Insights may experience Data Access Issues in the Azure and Application Analytics portal. The following data types are affected: Availability, Customer Event, Dependency, Exception, Metric, Page Load, Page View, Performance Counter, Request, Trace. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Visual Studio Application Insights - Data Access Issues

An alert for Visual Studio Application Insights is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure Impacting Multiple Services - Australia East

SUMMARY OF IMPACT: Between 01:50 and 02:38 UTC on 11 Jun 2017, a Network Infrastructure issue occurred in Australia East. Customers may have experienced degraded performance, network drops, or time outs when accessing their Azure resources hosted in this region. Engineers have confirmed that customers using Virtual Machines, App Services, Web Apps, Mobile Apps, API Apps, Backup, Site Recovery, Azure Search, Redis Cache, Stream Analytics, and Media Services in Australia East were impacted. A subset of services may have encountered a delayed mitigation; all services are confirmed to be mitigated at this point. PRELIMINARY ROOT CAUSE: Engineers determined that a deployment resulted in Virtual IP ranges being incorrectly advertised. MITIGATION: Engineers disabled route advertisements on the newly deployed instances that were incorrectly programmed. NEXT STEPS: A comprehensive root cause analysis report will be published in approximately 72 hours.

Last Update: A few months ago

Network Infrastructure Impacting Multiple Services - Australia East

Starting at 01:39 UTC on 11 Jun 2017 monitoring alerts were triggered for Network Infrastructure in Australia East. Customers may experience degraded performance, network drops or time outs when accessing their Azure resources hosted in this region; however, engineers are beginning to see signs of mitigation. Engineers have determined that this is caused by an underlying Network Infrastructure event in this region which is currently under investigation. Engineers have confirmed that customers using Virtual Machines, Web Apps, Mobile Apps, API Apps, Backup, Site Recovery, Azure Search, Redis Cache, Stream Analytics, and Media Services in Australia East may be experiencing impact. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Azure Services - Australia East

An alert for Virtual Machines, App Services, Backup, Site Recovery, Azure Search, and Media Services in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - Australia East

An alert for Virtual Machines, App Services, Backup, Site Recovery, Azure Search, and Media Services in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - Australia East

An alert for Virtual Machines in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Log Analytics - West Europe

Starting at 22:50 UTC on 09 Jun 2017, customers using Log Analytics in West Europe may experience difficulties when trying to log in to the OMS Portal (https://mms.microsoft.com) or when connecting to resources hosted in this region. The OMS Solutions within West Europe workspaces may fail to properly load and display data. This includes OMS Solutions for Service Map, Insight & Analytics, Security & Compliance, Protection & Recovery, and Automation & Control offerings for OMS. Engineers have identified the preliminary root cause and are currently implementing mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - West Europe

Starting at 22:50 UTC on 09 Jun 2017, customers using Log Analytics in West Europe may experience difficulties when trying to log in to the OMS Portal or when connecting to resources hosted in this region. Tiles and blades may also fail to properly load and display data. Engineers are engaged and actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - West Europe

Starting at 22:50 UTC on 09 Jun 2017 a subset of customers using Log Analytics in West Europe may experience difficulties when trying to sign in to the Azure portal or connecting to resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - West Europe

An alert for Log Analytics in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Automation - East US 2

Starting at 11:00 UTC on 07 Jun 2017 a limited subset of customers using Automation in East US 2 may experience intermittent issues viewing accounts, schedules, or assets, or starting jobs in the Azure Management Portal. Submitted start operations will see a delay but will eventually process and should not be resubmitted. This issue may impact up to 3% of customers in the region. Engineers are actively investigating to isolate the underlying cause and are applying mitigation to backend databases. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Automation - East US 2

Starting at 11:00 UTC on 07 Jun 2017 a limited subset of customers using Automation in East US 2 may experience intermittent issues viewing accounts, schedules, or assets, or starting jobs in the Azure Management Portal. Submitted start operations will see a delay but will eventually process and should not be resubmitted. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

ExpressRoute \ ExpressRoute Circuits - Amsterdam

Starting at 12:00 UTC on 01 Jun 2017, customers using ExpressRoute Circuits created in Amsterdam may experience difficulties with public peering when connecting to resources hosted in this region. Customers using the standard configuration as recommended (i.e. ExpressRoute in active-active configuration or even active-passive configuration) should have seen recovery earlier today. We are currently working to mitigate the remainder of those impacted. At this stage engineers have moved customers using premium circuits to a healthy network device. Engineers are still actively investigating a network device as a potential underlying cause and are reloading the device in an effort to mitigate the issue. For some customers, the reload process will cause a few minutes of connection unavailability to resources across all domain routing. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

ExpressRoute \ ExpressRoute Circuits - Amsterdam

Starting at 12:00 UTC on 01 Jun 2017, customers using ExpressRoute Circuits created in Amsterdam may experience difficulties with public peering when connecting to resources hosted in this region. Customers using the standard configuration as recommended (i.e. ExpressRoute in active-active configuration or even active-passive configuration) should have seen recovery earlier today. We are currently working to mitigate the remainder of those impacted. Engineers are still actively investigating a network device as a potential underlying cause. At this stage engineers have moved customers using premium circuits to a healthy network device and are continuing their mitigation steps on the unhealthy device with a network device partner. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

ExpressRoute \ ExpressRoute Circuits - Amsterdam

Starting at 12:00 UTC on 01 Jun 2017, customers using ExpressRoute Circuits created in Amsterdam may experience difficulties with public peering when connecting to resources hosted in this region. Customers using the standard configuration as recommended (i.e. ExpressRoute in active-active configuration or even active-passive configuration) will have seen recovery earlier today. We are currently working to mitigate the remainder of those impacted. Engineers are still actively investigating a network device as a potential underlying cause. At this stage engineers have moved a subset of customers to a healthy network device and are continuing their mitigation steps on the unhealthy device with a network device partner. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

ExpressRoute \ ExpressRoute Circuits - Amsterdam

Starting at 12:00 UTC on 01 Jun 2017, customers using ExpressRoute Circuits created in Amsterdam may experience difficulties with public peering when connecting to resources hosted in this region. Customers using the standard configuration as recommended (i.e. ExpressRoute in active-active configuration or even active-passive configuration) will have seen recovery earlier today. We are currently working to mitigate the remainder of those impacted.  Engineers are still actively investigating a network device as a potential underlying cause. As the investigation continues, engineers are working with a network device partner to apply network configuration changes to circuits for mitigation. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

ExpressRoute \ ExpressRoute Circuits - West Europe and North Europe

Starting at 12:00 UTC on 01 Jun 2017, customers using ExpressRoute Circuits in West Europe and North Europe may experience difficulties with public peering and connecting to resources hosted in these regions. Customers using the standard configuration as recommended (i.e. ExpressRoute in active-active configuration or even active-passive configuration) will have seen recovery earlier today. We are currently working to mitigate the remainder of those impacted. Engineers are still actively investigating a network device as a potential underlying cause. As the investigation continues, engineers are working with a third party networking team to apply network configuration changes to circuits for mitigation. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

ExpressRoute \ ExpressRoute Circuits - Multi-Region

Starting at 12:00 UTC on 01 Jun 2017, customers using ExpressRoute Circuits in West Europe and North Europe may experience difficulties with public peering and connecting to resources hosted in these regions. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

ExpressRoute \ ExpressRoute Circuits - Multi-Region

Starting at 12:00 UTC on 01 Jun 2017, customers using ExpressRoute Circuits in West Europe or North Europe may experience difficulties with public peering and connecting to resources hosted in these regions. Engineers are actively working on mitigating the issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

ExpressRoute \ ExpressRoute Circuits - Multi-Region

Starting at 12:00 UTC on 01 Jun 2017, customers using ExpressRoute Circuits in West Europe or North Europe may experience difficulties with public peering and connecting to resources hosted in these regions. Engineers are actively working on mitigating the issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Issue with accessing Azure services through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox and a few other clients. An outdated OCSP response that was cached on Azure services caused failures for customers using these clients. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have taken steps to refresh server side cache for most Azure services that were affected. The majority of customers will notice improvements when accessing Azure services using Mozilla Firefox, and retries should succeed. This includes Azure management portal (https://portal.azure.com), Web Apps, Azure Data Lake Analytics, Azure Data Lake Store, Visual Studio Team Services, Azure Service Fabric, Service Bus, and a subset of Storage. In many cases customers would not have experienced impact to the aforementioned services. Engineers are continuing to deploy a mitigation for remaining impacted customers. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Issue with accessing Azure services through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox and a few other clients. An outdated OCSP response that was cached on Azure services caused failures for customers using these clients. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have taken steps to refresh server side cache for most Azure services that were affected. The majority of customers may notice improvements when accessing Azure services using Mozilla Firefox, and retries should succeed. This includes Azure management portal (https://portal.azure.com), Web Apps, Azure Data Lake Analytics, Azure Data Lake Store, Visual Studio Team Services, Azure Service Fabric, Service Bus, and a subset of Storage. In many cases customers would not have experienced impact to the aforementioned services. Engineers are continuing to deploy a mitigation for remaining impacted customers. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Service Management Operations failures - East US and West US - Mitigated

Engineers have mitigated the service management operations issue on multiple Azure services in East US and West US. A detailed resolution summary will be provided as we gather more information.

Last Update: A few months ago

Service Management Operations failures - East US and West US

An alert for service management operations failures on multiple Azure services in East US and West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

SHD Banner - East US and West US

An alert for service management operations failures on multiple Azure services in East US and West US is being investigated. Engineers are seeing improvement already, and more information will be provided as it is known.

Last Update: A few months ago

App Service \ Web Apps - West Europe

An alert for App Service \ Web Apps in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Issue with accessing Azure services through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox or other clients which have the same implementation of ‘OCSP stapling’ enabled. An invalid OCSP signing certificate that has been cached is causing failures for a subset of customers. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have taken steps to refresh server side cache for most Azure services that were affected. This includes Azure management portal (https://portal.azure.com), Web Apps, Azure Data Lake Analytics, Azure Data Lake Store, Visual Studio Team Services, Azure Service Fabric, Service Bus, and a subset of Storage. In many cases customers would not have experienced impact to the aforementioned services. Engineers are continuing to deploy a mitigation for remaining impacted customers. The next update will be provided as more information is available, or as events warrant.
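
For reference, the minimal Go sketch below illustrates how a client can check whether a server is returning a stapled OCSP response during the TLS handshake (Go's TLS client requests stapling by default). This is an illustrative sketch only, not an official diagnostic tool: portal.azure.com is used purely as an example hostname, and parsing or validating the stapled bytes would require an additional package such as golang.org/x/crypto/ocsp.

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Complete a TLS handshake; the Go client sends the status_request
	// extension, asking the server to staple an OCSP response.
	conn, err := tls.Dial("tcp", "portal.azure.com:443", &tls.Config{})
	if err != nil {
		log.Fatalf("TLS handshake failed: %v", err)
	}
	defer conn.Close()

	// The stapled response, if the server sent one, is exposed as raw DER bytes.
	staple := conn.ConnectionState().OCSPResponse
	if len(staple) == 0 {
		fmt.Println("no stapled OCSP response was returned by the server")
		return
	}
	fmt.Printf("server stapled an OCSP response (%d bytes)\n", len(staple))
}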

Last Update: A few months ago

Issue with accessing Azure services through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox or other clients which have the same implementation of ‘OCSP stapling’ enabled. An invalid OCSP signing certificate that has been cached is causing failures for a subset of customers. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have taken steps to refresh server side cache for most Azure services that were affected. This includes Azure management portal (https://portal.azure.com), Web Apps, Azure Data Lake Analytics, Azure Data Lake Store, Visual Studio Team Services, Azure Service Fabric and Service Bus, and a subset of Storage. In many cases customers would not have experienced impact to the aforementioned services. Engineers are continuing to deploy a mitigation for remaining impacted customers. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Issue with accessing Azure services through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox or other clients which have the same implementation of ‘OCSP stapling’ enabled. An invalid OCSP signing certificate that has been cached is causing failures for a subset of customers. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have taken steps to refresh server side cache for most Azure services that were affected. This includes Azure management portal (https://portal.azure.com), Web Apps, Azure Data Lake Analytics, Azure Data Lake Store, Visual Studio Team Services, Azure Service Fabric and Service Bus, and a subset of Storage. In many cases customers would not have experienced impact to the aforementioned services. Engineers are continuing to deploy a mitigation for remaining impacted customers. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Issue with accessing Azure services through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox or other clients which have the same implementation of ‘OCSP stapling’ enabled. An invalid OCSP signing certificate that has been cached is causing failures for a subset of customers. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have taken steps to refresh server side cache for most Azure services that were affected. This includes Azure management portal (https://portal.azure.com), Web Apps, Azure Data Lake Analytics, Azure Data Lake Store, Visual Studio Team Services, Azure Service Fabric and Service Bus, and a subset of Storage. In many cases customers would not have experienced impact to the aforementioned services. Engineers are continuing to deploy a mitigation for remaining impacted customers. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Issue with accessing Azure services through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox or other clients which have the same implementation of ‘OCSP stapling’ enabled. An invalid OCSP signing certificate that has been cached is causing failures for a subset of customers. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have taken steps to refresh server side cache for most Azure services that were affected.  This includes Azure management portal (https://portal.azure.com), Web Apps, Azure Data Lake Analytics, Azure Data Lake Store, Visual Studio Team Services, Azure Service Fabric and Service Bus, and a subset of Storage. In many cases customers would not have experienced impact to the aforementioned services. Engineers are continuing to deploy a mitigation for remaining impacted customers. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Issue with accessing Azure services through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox or other clients which have ‘OCSP stapling’ enabled. An invalid OCSP signing certificate that has been cached is causing failures for a subset of customers. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have identified the underlying issue and are currently exploring further options to mitigate the issue for customers using Firefox. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Issue with accessing Microsoft Azure portal through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox or other clients which have ‘OCSP stapling’ enabled. An invalid OCSP signing certificate that has been cached is causing failures for a subset of customers. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have identified the underlying issue and are currently exploring further options to mitigate the issue for customers using Firefox. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Issue with accessing Microsoft Azure portal through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers accessing Azure services using Mozilla Firefox. Customers can now use the Firefox browser to access the Azure management portal (https://portal.azure.com) and engineers are validating that all features within the management portal are performing as expected. As a workaround, customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have identified the underlying issue and are currently implementing an update to mitigate the Firefox issues. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Issue with accessing Microsoft Azure portal through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers leveraging Mozilla Firefox to access and utilize the Microsoft Azure portal as well as other Azure services. As a workaround customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have identified the underlying issue and are currently implementing an update to mitigate the Firefox issues. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

IoT Hub - Emerging issue under investigation

SUMMARY OF IMPACT: Between 00:00 UTC on 26 May 2017 and 16:30 UTC on 29 May 2017, a subset of customers using Azure IoT Hub may have received the error message "Cannot read property 'value' of undefined or null reference" when trying to deploy IoT Hub resources. This issue only affected new subscriptions attempting to deploy their first IoT Hub - existing IoT Hubs were not affected. Customers were able to deploy IoT Hubs using Azure Resource Manager (ARM) templates as a workaround. PRELIMINARY ROOT CAUSE: Engineers determined that a recent deployment introduced a new software task that was not properly validating new subscriptions. MITIGATION: Engineers deployed a platform hotfix to bypass this software task and mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences.

Last Update: A few months ago

Issue with accessing Microsoft Azure portal through Mozilla Firefox

Engineers are continuing to work to resolve issues for customers leveraging Mozilla Firefox to access and utilise the Microsoft Azure portal. As a workaround customers can use an alternative browser - Internet Explorer, Edge, Safari or Chrome. Retries may be successful for some Firefox users. Engineers have identified the underlying issue and are currently implementing an update to mitigate the Firefox issues. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

IoT Hub - Emerging issue under investigation

Starting at 07:19 UTC on 28 May 2017 a subset of customers using Azure IoT Hub may receive the error message "Cannot read property 'value' of undefined or null reference" when trying to deploy IoT Hub resources. Existing IoT Hubs are not affected. As a workaround, customers can deploy via Azure Resource Manager (ARM) templates, as this may result in successful retries. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

IoT Hub - Emerging issue under investigation

An alert for Azure IoT Hub is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Emerging issue under investigation

Engineers are continuing to work to resolve issues for customers leveraging Mozilla Firefox to access the Microsoft Azure portal. As a workaround customers can use an alternative browser - Internet Explorer, Edge or Chrome. Retries may be successful for some Firefox users. Engineers have identified the underlying issue and are currently implementing an update to mitigate the Firefox issues. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Emerging issue under investigation

Engineers are investigating alerts for suspected accessibility issues to the Microsoft Azure portal. Impact is limited to customers utilizing Mozilla Firefox. As a workaround customers can use an alternative browser - Internet Explorer, Edge or Chrome. Engineers are aware of this issue and are actively investigating. The next update will be provided as more information is available, or as events warrant.

Last Update: A few months ago

Multiple Services - Canada Central

SUMMARY OF IMPACT: Between 16:47 and 17:20 UTC on 28 May 2017, a subset of customers in Canada Central may have intermittently experienced degraded performance, network drops or time outs when accessing their Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region. PRELIMINARY ROOT CAUSE: Monitoring alerted engineers of network flapping through one network device. MITIGATION: Engineering teams immediately removed the router from rotation and allowed traffic to failover to healthy routes. Once it was established that the network traffic flapping had stopped, engineers brought the removed device back into rotation. NEXT STEPS: Engineers will continue to monitor the health of traffic in the region and work with partners to understand the cause of the packet drops.

Last Update: A few months ago

App Service \ Web Apps and Functions - Continuous Deployment Issues Using VSTS

Starting at 23:07 UTC on 25 May 2017 customers using App Service \ Web Apps and Functions may receive failure notifications in the Azure Portal when submitting continuous deployments using Visual Studio Team Services. Existing Web Apps and Function Apps are not currently impacted. Customers are able to use MSDeploy as a workaround. Engineers identified a recent deployment containing a software error in a supporting backend service as an underlying cause and are currently issuing a hotfix for mitigation. More information can be found at https://aka.ms/vstsblog. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps and Functions - Continuous Deployment Issues Using VSTS

Starting at 23:07 UTC on 25 May 2017 customers using App Service \ Web Apps and Functions may receive failure notifications in the Azure Portal when submitting continuous deployments using Visual Studio Team Services. Existing Web Apps and Function Apps are not currently impacted. Engineers identified a recent deployment as an underlying cause and are currently issuing a hotfix for mitigation. More information can be found at https://aka.ms/vstsblog. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps and Functions - Continuous Deployment Issues Using VSTS

An alert for App Service \ Web Apps and Functions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Virtual Machines in West Europe may experience high latency or degraded performance when accessing resources hosted in this region. This is related to an ongoing Storage latency issue in the region. Engineers have determined that an underlying network issue is impacting communication with a subset of storage scale units. They are continuing to implement a mitigation plan and monitor service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Latency - West Europe

Starting at 07:00 UTC on 18 May 2017, a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers have determined that an underlying network issue is impacting communication with a subset of storage scale units. They are continuing to implement a mitigation plan and monitor service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Virtual Machines in West Europe may experience high latency or degraded performance when accessing resources hosted in this region. This is related to an ongoing Storage latency issue in the region. Engineers are implementing a mitigation plan and monitoring service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Latency - West Europe

Starting at 07:00 UTC on 18 May 2017, a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers have determined that an underlying network issue is impacting communication with a subset of storage scale units and are implementing a mitigation plan and monitoring service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Virtual Machines or HDInsight in West Europe may experience high latency or degraded performance when accessing Virtual Machines hosted in this region. Customers using HDInsight may receive deployment failure notifications when creating new HDInsight clusters in this region. This is related to an ongoing Storage latency issue in the region. Engineers are implementing a mitigation plan and monitoring service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Latency - West Europe

Starting at 07:00 UTC on 18 May 2017, a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers are implementing a mitigation plan and monitoring service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Azure Search, Virtual Machines or HDInsight in West Europe may experience high latency or degraded performance when accessing the Azure Search services or Virtual Machines hosted in this region. Customers using HDInsight may receive deployment failure notifications when creating new HDInsight clusters in this region. This is related to an ongoing Storage issue in the region. Engineers have identified a possible underlying cause and are working to implement mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

Starting at 07:00 UTC on 18 May 2017, a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers have identified a possible underlying cause and are evaluating mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Azure Search, Virtual Machines or HDInsight in West Europe may experience high latency or degraded performance when accessing the Azure Search services or Virtual Machines hosted in this region. Customers using HDInsight may receive deployment failure notifications when creating new HDInsight clusters in this region. This is related to an ongoing Storage issue in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Azure Search or Virtual Machines in West Europe may experience high latency or degraded performance when accessing the Azure Search services or Virtual Machines hosted in this region. This is related to an ongoing Storage issue in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Azure Search in West Europe may experience high latency when accessing the Azure Search service hosted in this region. This is related to an ongoing Storage issue in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

Starting at 05:00 UTC on 18 May 2017 a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

An alert for Storage in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Recovery - Service Bus

Starting at 21:30 UTC on 16 May 2017, we are aware of an incident with Service Bus. Engineers are currently validating mitigation and some customers have reported recovery at this time. This issue is only impacting new connections to Service Bus; existing connections are not affected. We recommend that customers restart their processes to complete recovery of the service.

Last Update: A few months ago

Service Bus

An alert for Service Bus is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Potential Network Infrastructure issue impacting multiple services

Engineers are aware of a recent Network Infrastructure issue in East US which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Potential Network Infrastructure issue impacting multiple services

Engineers are aware of a recent Network Infrastructure issue in East US which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Potential Network Infrastructure issue impacting multiple services

Engineers are aware of a recent issue for SHD Banner in East US which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Potential Network Infrastructure issue impacting multiple services

Engineers are aware of a recent issue which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Potential Network Infrastructure issue impacting multiple services

Engineers are aware of a recent issue which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Potential Network Infrastructure issue impacting multiple services

Engineers are aware of a recent issue for SHD Banner in Brazil South which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe

Starting at approximately 09:07 UTC on 11 May 2017, a subset of customers in North Europe may receive intermittent allocation failure notifications when attempting to provision new D-series V2 Virtual Machines. Customers with existing D-Series V2 Virtual Machines may also experience failures when attempting to perform scaling, or resizing operations. This issue is not impacting availability of existing Virtual Machines. Engineers have completed some mitigation steps and continue to validate mitigation. Customers should see some signs of recovery. The next update will be provided in 4 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe

Starting at approximately 09:07 UTC on 11 May 2017, a subset of customers in North Europe may receive intermittent allocation failure notifications when attempting to provision new D-series V2 Virtual Machines. Customers with existing D-Series V2 Virtual Machines may also experience failures when attempting to perform scaling, or resizing operations. This issue is not impacting availability of existing Virtual Machines. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 4 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe

Starting at approximately 09:07 UTC on 11 May 2017, a subset of customers in North Europe may receive intermittent allocation failure notifications when attempting to provision new D-series V2 Virtual Machines. Customers with existing D-Series V2 Virtual Machines may also experience failures when attempting to perform scaling, or resizing operations. This issue is not impacting availability of existing Virtual Machines. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe

Starting at approximately 09:07 UTC on 11 May 2017, a subset of customers in North Europe may receive intermittent allocation failure notifications when attempting to provision new D-series V2 Virtual Machines. Customers with existing D-Series V2 Virtual Machines may also experience failures when attempting to perform scaling, or resizing operations. This issue is not impacting availability of existing Virtual Machines. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe

Starting at approximately 09:07 UTC on 11 May 2017, a subset of customers in North Europe may receive intermittent allocation failure notifications when attempting to provision new D-series V2 Virtual Machines. Customers with existing D-Series V2 Virtual Machines may also experience failures when attempting to perform scaling, or resizing operations. This issue is not impacting availability of existing Virtual Machines. Engineers have identified a possible underlying cause and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe

An alert for Virtual Machines in North Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Emerging issue under investigation

Engineers are aware of a recent issue for Azure Active Directory which has now been mitigated. More information will be provided shortly or as events warrant.

Last Update: A few months ago

Emerging issue under investigation

Engineers are investigating alerts for issues logging into the Azure portal. Additional information will be provided shortly.

Last Update: A few months ago

Virtual Machines - Japan East

SUMMARY OF IMPACT: Between 20:55 and 21:35 UTC on 04 May 2017, a subset of customers using Virtual Machines in Japan East may have experienced intermittent connection failures when trying to access Virtual Machines in the region. Some Virtual Machines may have also restarted unexpectedly. PRELIMINARY ROOT CAUSE: At this stage, engineers do not have a definitive root cause. MITIGATION: The issue was self-healed by the Azure platform. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences. A small subset of Virtual Machines may not have recovered successfully and customers will receive targeted messaging for further steps via their portal (https://portal.azure.com).

Last Update: A few months ago

Virtual Machines - Japan East

Engineers are aware of a recent issue for Virtual Machines in Japan East which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Backup - Failure notifications when performing Backup operations

Starting at 10:00 UTC on 02 May 2017, customers using Backup services may experience difficulties when creating new and running scheduled Virtual Machine backup jobs in the affected regions (North Europe, West Europe and South Central US). Customers attempting to access these resources may encounter the following error message: "VM Agent is unable to communicate with the Azure Backup Service." Engineers have identified a potential root cause and continue to implement a fix in order to mitigate the issue. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Backup - Failure notifications when performing Backup operations

Starting at 10:00 UTC on 02 May 2017, customers using Backup services may experience difficulties when creating new and running scheduled Virtual Machine backup jobs in the affected regions (North Europe, West Europe and South Central US). Customers attempting to access these resources may encounter the following error message: "VM Agent is unable to communicate with the Azure Backup Service." Engineers have identified a fix for the potential root cause and are currently exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Backup - Multi-Region

Starting at 10:00 UTC on 02 May 2017, customers using Backup in North Europe, West Europe and South Central US may experience difficulties when creating new and running scheduled Virtual Machine backup jobs in these regions. Customers attempting to access these resources may encounter the following error message: "VM Agent is unable to communicate with the Azure Backup Service." Engineers have identified a fix for the potential root cause and are currently exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Backup - Multi-Region

Starting at 10:00 UTC on 02 May 2017, customers using Backup in North Europe, West Europe and South Central US may experience difficulties when creating new and running scheduled Virtual Machine backup jobs in these regions. Customers attempting to access these resources may encounter the following error message: "VM Agent is unable to communicate with the Azure Backup Service." Engineers have identified a potential root cause and are actively pursuing mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Backup - Multi-Region

Starting at 10:00 UTC on 02 May 2017, customers using Backup in North Europe, West Europe and South Central US may experience difficulties when creating new and running scheduled Virtual Machine backup jobs in these regions. Customers attempting to access resources may encounter the following error message: "VM Agent is unable to communicate with the Azure Backup Service." Engineers have identified a potential root cause and are investigating mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Backup - Multi-Region

Starting at 10:00 UTC on 02 May 2017, customers using Backup in North Europe, West Europe and South Central US may experience difficulties when creating new and running scheduled Virtual Machine backup jobs in these regions. Customers attempting to access resources may encounter the following error message: "VM Agent is unable to communicate with the Azure Backup Service." Engineers are investigating the underlying root cause to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Backup - Multi-Region

Starting at 10:00 UTC on 02 May 2017, customers using Backup in North Europe, West Europe and South Central US may experience difficulties when creating new and running scheduled Virtual Machine backup jobs in these regions. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Backup - North Europe

Starting at 10:00 UTC on 02 May 2017, customers using Backup in North Europe, West Europe and South Central US may experience difficulties creating new Virtual Machine backup requests in these regions. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Backup - North Europe

An alert for Backup in North Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

IoT Suite - Failures Provisioning New Solutions - Germany

Starting at 07:15 UTC on 08 Apr 2017 customers using Azure IoT Suite may not be able to provision solutions. Engineers recommend deploying from an MSBuild prompt using code at https://aka.ms/rms_git. Existing resources are not impacted. Engineers are currently deploying a fix. Customers may begin to see improvements as the fix is deployed. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

IoT Suite - Failures Provisioning New Solutions

SUMMARY OF IMPACT: Between 07:15 UTC on 08 Apr 2017 and 00:00 UTC on 11 Apr 2017, customers using Azure IoT Suite may have been unable to provision solutions. Engineers recommended deploying from an MSBuild prompt using code at https://aka.ms/rms_git. Existing resources were not impacted. PRELIMINARY ROOT CAUSE: Engineers identified a recent change to backend systems as the preliminary root cause. MITIGATION: Engineers deployed a platform hotfix to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences.

Last Update: A few months ago

IoT Suite - Failures Provisioning New Solutions

Starting at 07:15 UTC on 08 Apr 2017 customers using Azure IoT Suite may not be able to provision solutions. Engineers recommend deploying from an MSBuild prompt using code at https://aka.ms/rms_git. Existing resources are not impacted. Engineers have developed and are currently verifying a fix for this issue. Customers may begin to see improvements as the fix is deployed. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

IoT Suite - Failures Provisioning New Solutions - Germany

Starting at 07:15 UTC on 08 Apr 2017 customers using Azure IoT Suite may not be able to provision solutions. Engineers recommend deploying from an MSBuild prompt using code at https://aka.ms/rms_git. Existing resources are not impacted. Engineers have developed and are currently verifying a fix for this issue. Customers may begin to see improvements as the fix is deployed. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

IoT Suite - Failures Provisioning New Solutions

Starting at 07:15 UTC on 08 Apr 2017 customers using Azure IoT Suite may not be able to provision solutions. Engineers recommend deploying from an MSBuild prompt using code at https://aka.ms/rms_git. Existing resources are not impacted. Engineers have identified a recent change to backend systems as the possible root cause and are working to develop steps for mitigation. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

IoT Suite - Failures Provisioning New Solutions

Starting at 07:15 UTC on 08 Apr 2017 customers using Azure IoT Suite may not be able to provision solutions. Engineers recommend deploying from an MSBuild prompt using code at https://aka.ms/rms_git. Existing resources are not impacted. Engineers have identified a recent change to backend systems as the possible root cause and are working to develop steps for mitigation. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

IoT Suite - Failures Provisioning New Solutions

Starting at 07:15 UTC on 08 Apr 2017 customers using Azure IoT Suite may not be able to provision solutions. Engineers recommend deploying from an MSBuild prompt using code at https://aka.ms/rms_git. Existing resources are not impacted. Engineers are working with additional teams to assist with the underlying cause investigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

IoT Suite - Failures Provisioning New Solutions

Starting at 07:15 UTC on 08 Apr 2017 customers using Azure IoT Suite may not be able to provision solutions. Engineers recommend deploying from an MSBuild prompt using code at https://aka.ms/rms_git. Existing resources are not impacted. Engineers are working with additional teams to assist with the underlying cause investigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

IoT Suite - Failures Provisioning New Solutions

Starting at 07:15 UTC on 08 Apr 2017 customers using Azure IoT Suite may not be able to provision solutions. Engineers recommend deploying from an MSBuild prompt using code at https://aka.ms/rms_git. Existing resources are not impacted. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Services - North Europe

Starting at 17:45 UTC on 06 Apr 2017 a subset of customers using App Service in North Europe may receive HTTP 500 errors or experience high latency when accessing App Service deployments hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps - North Europe

Starting at approximately 10:20 UTC on 06 Apr 2017, a limited number of customers using App Service \ Web Apps in North Europe may receive HTTP 5xx errors, timeouts or experience high latency when accessing Web Apps deployments hosted in this region. Engineers are currently deploying a hotfix to mitigate the issue and restore the health of a back end service responsible for servicing incoming requests. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps - North Europe

Starting at approximately 10:20 UTC on 06 Apr 2017, a limited number of customers using App Service \ Web Apps in North Europe may receive HTTP 5xx errors, timeouts or experience high latency when accessing Web Apps deployments hosted in this region. Engineers are exploring mitigation options to restore the health of a back end service responsible for servicing incoming requests. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps - North Europe

Starting at approximately 10:20 UTC on 06 Apr 2017, a subset of customers using App Service \ Web Apps in North Europe may receive HTTP 5xx errors, timeouts or experience high latency when accessing Web Apps deployments hosted in this region. Engineers are aware of this issue and are actively investigating the health of a back-end service. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps - North Europe

Starting at approximately 10:20 UTC on 06 Apr 2017, a subset of customers using App Service \ Web Apps in North Europe may receive HTTP 5xx errors, timeouts or experience high latency when accessing Web Apps deployments hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Cloud Services and Virtual Machines - East US 2

Starting at 14:00 UTC on 31 Mar 2017, a subset of customers using Virtual Machines or Cloud Services in East US 2 may experience higher than expected latency or degraded performance when trying to access their resources. Engineers have identified the underlying root cause and have set a mitigation path. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Cloud Services and Virtual Machines - East US 2

Starting at 14:00 UTC on 30 Mar 2017, a subset of customers using Virtual Machines or Cloud Services in East US 2 may experience higher than expected latency or degraded performance when trying to access their resources. Engineers are actively investigating an issue with storage accounts as the underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Cloud Services and Virtual Machines - East US 2

Starting at 14:00 UTC on 30 Mar 2017, a subset of customers using Virtual Machines or Cloud Services in East US 2 may experience higher than expected latency or degraded performance when trying to access their resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Cooling Event | Japan East | Validating Mitigation

SUMMARY OF IMPACT: Between 13:50 and 21:00 UTC on 31 Mar 2017, a subset of customers in Japan East may have experienced difficulties connecting to their resources hosted in this region. Customers using the following services may have experienced impact: Storage, Virtual Machines, API Management, Web Apps, Automation, Backup, Cloud Services, Azure Container Service, Data Movement, DocumentDB, Event Hubs, HDInsight, IoT Hub, Key Vault, Logic Apps, Media Services, Azure Monitor, Redis Cache, Service Bus, Site Recovery, StorSimple, Stream Analytics, Azure Machine Learning, Azure Notification Hub, and Access Control Service. PRELIMINARY ROOT CAUSE: Engineers have identified the underlying root cause as loss of cooling causing certain Storage and Compute scale units to perform an automated shut down to preserve data integrity & resilience. This affected a number of services with dependencies on these scale units. MITIGATION: Engineers restored cooling, restarted the affected scale units, verified hardware recovery, and verified recovery for data plane and control plane operations for all the affected services. NEXT STEPS: Customers still experiencing impact will receive communications in their management portals. An internal investigation will be conducted and this post will be updated in approximately 48-72 hours.

Last Update: A few months ago

Virtual Machines - East US 2

An alert for Virtual Machines and Cloud Services in East US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Cooling Event | Japan East | Validating Mitigation

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers in Japan East may experience difficulties connecting to their resources hosted in this region. Engineers have identified the underlying cause as loss of cooling which caused some resources to undergo an automated shutdown to avoid overheating and ensure data integrity & resilience. Engineers have recovered the cooling units and the affected resources, and are in the final stages of verifying recovery. Customers who are still experiencing impact will be communicated to separately via their management portals. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

SUMMARY OF IMPACT: Between 13:50 and 19:50 UTC on 31 Mar 2017, a subset of customers in Japan East experienced Virtual Machine reboots, degraded performance, or connection failures when accessing their Azure resources hosted in the Japan East region. Engineers have confirmed the following services are healthy: Redis Cache, Service Bus, Azure SQL Database, Event Hubs, Automation, Stream Analytics, Document DB, Data Factory / Data Movement, Azure Monitor, Media Services, Logic Apps, Azure IoT Hub, API Management, Azure Resource Manager, Azure Machine Learning. NEXT STEPS: A detailed root cause analysis report will be provided in approximately 72 hours.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service/WebApps, Virtual Machines, Cloud Services, Backup, StorSimple, Site Recovery, Key Vault, and HDInsight. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Cooling Event | Japan East | Validating Mitigation

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers in Japan East may experience difficulties connecting to their resources hosted in this region. Engineers have identified the underlying cause as loss of cooling which caused some resources to undergo an automated shutdown to avoid overheating and ensure data integrity & resilience. Engineers have recovered the cooling units and most of the affected resources, and are continuing to recover the rest of the affected resources. Some services are performing final checks before declaring mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, StorSimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, Media Services, API Management, Logic Apps, Redis Cache, Azure IoT Hub, Azure Monitor, and Azure Automation. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Cooling Event | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers in Japan East may experience difficulties connecting to their resources hosted in this region. Engineers have identified the underlying cause as loss of cooling which caused some resources to undergo an automated shutdown to avoid overheating and ensure data integrity & resilience. Engineers have recovered the cooling units and are working on recovering the affected resources. Engineers will then validate control plane and data plane availability for all affected services. Some customers may see signs of recovery. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, StorSimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, Media Services, API Management, Logic Apps, Redis Cache, Azure IoT Hub, and Azure Monitor. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, StorSimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, Media Services, API Management, Logic Apps, Redis Cache, and Azure Monitor. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Infrastructure event | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers in Japan East may experience difficulties connecting to their resources hosted in this region. Engineers have identified the underlying cause as an infrastructure alert which caused some resources to undergo an automated shutdown to ensure data integrity & resilience. Engineers have mitigated the infrastructure alert, and are currently undertaking the structured restart sequence for any impacted resources. Some of the previously impacted resources are now reporting as healthy and impacted customers may see early signs of recovery. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, StorSimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, and Azure Monitor. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, StorSimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, and HDInsight. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, and HDInsight. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Alert - Storage and Virtual Machines - Japan East

Starting at 13:50 UTC on 31 Mar 2017 a subset of customers using storage services in Japan East may experience difficulties connecting to their resources hosted in this region. Other Azure services that leverage storage in this region may also be experiencing impact, and these are detailed in the post below. Engineers have identified the underlying cause as an infrastructure alert which caused some storage resources to undergo an automated shutdown to ensure data integrity & resilience. Engineers have mitigated the infrastructure alert, and are currently undertaking the structured restart sequence for any impacted resources. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Windows Azure Cache, Service Bus, and Cloud Services. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Alert - Storage and Virtual Machines - Japan East

Starting at 13:50 UTC on 31 Mar 2017 a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have identified a potential root cause and this is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Alert - Storage and Virtual Machines - Japan East

Starting at 13:50 UTC on 31 Mar 2017 a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have identified a potential root cause and this is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Alert - Storage and Virtual Machines - Japan East

Engineers are investigating alerts in Japan East for Storage and Virtual Machines. Additional information will be provided shortly.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

SUMMARY OF IMPACT: Between 18:04 and 21:16 UTC on 27 Mar 2017, a subset of customers in Japan West may have experienced degraded performance, network drops or time outs when accessing their Azure resources hosted in this region. PRELIMINARY ROOT CAUSE: A storage scale unit being added to the Japan West region announced routes that blocked some network connectivity between two datacenters in the region. VMs and services dependent on that connectivity would have experienced restarts or failed connections. Unfortunately, automated recovery did not mitigate the issue. The manual health checks that are conducted around all new cluster additions were performed, but did not detect a problem. This led to a delay in correct root cause analysis and mitigation. MITIGATION: Engineers isolated the newly deployed scale unit, which mitigated the issue. NEXT STEPS: Investigations are currently in progress to determine exactly how incorrect routing information was configured into the storage scale unit being added and how that incorrect information escaped the many layers of validations designed to prevent such issues. A full detailed Root Cause Analysis will be published in approximately 72 hours.

Last Update: A few months ago

Multiple Azure Services Impacted by Underlying Network Infrastructure Issue - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017 a subset of customers in Japan West may experience degraded performance, network drops or time outs when accessing their Azure resources hosted in this region. Impacted services are listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are continuing their investigation into an underlying cause while applying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, due to a networking infrastructure issue, the following services are impacted: App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. HDInsight customers are unable to perform service management operations or provision Linux VMs. Azure Virtual Machine customers may experience VM restarts.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. HDInsight customers are unable to perform service management operations or provision Linux VMs. Azure Virtual Machine customers may experience VM restarts. Engineers are continuing their investigation into an underlying cause and applying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. Engineers are continuing their investigation into an underlying cause and have begun applying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West

Starting at 18:04 UTC on 27 Mar 2017 a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience high latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Engineers are investigating the underlying cause and working on mitigation paths. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West

Starting at 18:04 UTC on 27 Mar 2017 customers in Japan West may experience difficulties connecting to resources hosted in this region. Engineers confirmed impact to Redis Cache, Azure Search, Azure Monitor, and App Service / Web Apps. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Japan West

An alert for Storage in Japan West is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Investigating

Starting at 06:08 UTC on 27 Mar 2017 a subset of customers using Visual Studio Team Services in the Ibiza Portal may experience difficulties connecting to the following services: VSTS Team Projects, Team Services, Load Testing, Team Service Accounts and Release Management - Continuous Delivery. Customers accessing these services via other channels will be unaffected. Engineers are aware of this issue and are actively investigating. The VSTS blog is also being actively updated and can be found here: https://aka.ms/vstsblog. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - Investigating

Starting at 06:08 UTC on 27 Mar 2017 a subset of customers using Visual Studio Team Services in the Ibiza Portal may experience difficulties connecting to the following services: VSTS Team Projects, Team Services, Load Testing, Team Service Accounts and Release Management - Continuous Delivery. Customers accessing these services via other channels will be unaffected. Engineers are aware of this issue and are actively investigating. The VSTS blog is also being actively updated and can be found here: https://blogs.msdn.microsoft.com/vsoservice/?p=13725 The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - Investigating

An alert for Visual Studio Team Services is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Intermittent Authentication Failures due to Underlying Azure Active Directory Issue

Starting at 21:23 UTC on 24 Mar 2017 a subset of customers may intermittently experience authentication failures while attempting to access Azure resources using the Management Portal, PowerShell and command line interfaces, and authentication providers. Please note that service availability to resources is not impacted. Confirmed impacted services are: Power BI Embedded, Visual Studio Team Services, Log Analytics, Azure Data Lake Analytics, Azure Data Lake Store, Azure Data Catalog, Application Insights, Stream Analytics, Key Vault, and Azure Automation. Engineers continue to investigate an Azure Active Directory issue as an underlying cause and have applied mitigation steps. Customers should be seeing improvement now. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago
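
Several of the advisories in this archive note that retries may succeed for some customers during transient failures. As a generic illustration only (not taken from these notices or from any Azure SDK; the function and parameter names below are hypothetical), client-side retry with exponential backoff and jitter might be sketched in Python as follows:

    import random
    import time

    def call_with_retries(operation, attempts=5, base_delay=1.0, max_delay=30.0):
        # Retry a callable that can fail transiently (for example, a service
        # management request made during one of the incidents above), waiting
        # exponentially longer between attempts and adding random jitter.
        for attempt in range(1, attempts + 1):
            try:
                return operation()
            except Exception:
                if attempt == attempts:
                    raise  # give up after the final attempt
                delay = min(max_delay, base_delay * 2 ** (attempt - 1))
                time.sleep(delay + random.uniform(0, delay))

    # Hypothetical usage: call_with_retries(lambda: client.get_resource("example"))

In practice the bare except clause would be narrowed to whichever transient error types the client library in use actually raises.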

Intermittent Authentication Failures due to Underlying Azure Active Directory Issue

Starting at 21:23 UTC on 24 Mar 2017 a subset of customers may intermittently experience authentication failures while attempting to access Azure resources using the Management Portal, PowerShell and command line interfaces, and authentication providers. Please note that service availability to resources is not impacted. Confirmed impacted services are: Power BI Embedded, Visual Studio Team Services, Log Analytics, Azure Data Lake Analytics, Azure Data Lake Store, Azure Data Catalog, Application Insights, Stream Analytics, and Key Vault. Engineers are investigating an Azure Active Directory issue as an underlying cause and are currently working on mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Intermittent Authentication Failures due to Underlying Azure Active Directory Issue

Starting at 21:23 UTC on 24 Mar 2017 a subset of customers may intermittently be unable to log into Azure resources using the Management Portal, PowerShell, and/or authorization providers utilizing cmdlet operations. Please note that service availability to resources is not impacted. Confirmed impacted services are: Power BI Embedded, Visual Studio Team Services, Log Analytics, Azure Data Lake Analytics, Azure Data Lake Store, Azure Data Catalog, Application Insights, and Key Vault. Engineers are investigating an Azure Active Directory issue as an underlying cause and currently working on mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Management Portal - Intermittent Login Issue

Starting at 21:23 UTC on 24 Mar 2017 a subset of customers may intermittently be unable to log into the Azure Management Portal (https://portal.azure.com or https://manage.windowsazure.com). Please note that service availability to resources is not impacted. Confirmed impacted services are: Power BI, Visual Studio Online, Log Analytics, Azure Data Lake Analytics, Application Insights, and Key Vault. Engineers are investigating an Azure Active Directory issue as an underlying cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Management Portal - Intermittent Failures Logging into the Management Portal

Starting at 21:23 UTC on 24 Mar 2017 customers using Microsoft Azure portal may intermittently be unable to log into the Azure Management Portal (https://portal.azure.com or https://manage.windowsazure.com). Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Management Portal - Investigating

Starting at 21:23 UTC on 24 Mar 2017 customers using Microsoft Azure portal may intermittently be unable to log into the Azure Management Portal (https://portal.azure.com or https://manage.windowsazure.com). Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Management Portal - Investigating

An alert for Azure Management Portal is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Azure Active Directory

Starting at 22:00 UTC on 22 Mar 2017 a subset of customers using Azure Active Directory may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Downstream impact to other services may include Azure Resource Manager, Logic Apps, Azure Data Lake Analytics, Azure Data Lake Store. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory

An alert for Azure Active Directory is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Data Lake Analytics and Data Lake Store - East US 2

Starting at 15:02 UTC on 22 Mar 2017 customers using Data Lake Analytics and Data Lake Store in East US 2 may experience failures when submitting jobs or when performing data upload, download, or access operations. Previously submitted jobs may fail. In addition, data ingress or egress operations may timeout or fail. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Data Lake Analytics and Data Lake Store - East US 2

Starting at 15:02 UTC on 22 Mar 2017 a subset of customers using Data Lake Analytics and Data Lake Store in East US 2 may experience difficulties in performing job and data plane operations on resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Data Lake Analytics and Data Lake Store - East US 2

An alert for Data Lake Store and Data Lake Analytics in East US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Increased latency accessing Azure resources

Summary of impact: Between 23:23 UTC on 21 Mar 2017 and 01:35 UTC on 22 Mar 2017, a subset of customers may have experienced increased latency or network timeouts while attempting to access Azure resources with traffic passing through the East Coast. Preliminary root cause: Engineers have identified the preliminary root cause as infrastructure impacted by a facilities issue in a 3rd party regional routing site located in South East US. Mitigation: Engineers executed standard procedures to redirect traffic and isolate the impacted facility, restoring expected routing availability. Next steps: Engineers will continue to investigate the full root cause and a report will be published in approximately 48-72 hours.

Last Update: A few months ago

Increased latency accessing Azure resources

Starting at 23:23 UTC on 21 March 2017, a subset of customers may experience increased latency or network timeouts while attempting to access Azure resources with traffic passing through the east coast. Engineers are applying mitigation and customers should see recovery with traffic processed through the east coast. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Increased latency accessing Azure resources

Starting at 23:23 UTC on 21 March 2017, a subset of customers may experience increased latency or network timeouts while attempting to access Azure resources with traffic passing through the east coast. Engineers have identified a potential root cause and are exploring mitigation options. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Increased latency accessing Azure resources

Starting at 23:23 UTC on 21 March 2017, a subset of customers may experience increased latency or network timeouts while attempting to access Azure resources with traffic passing through the east coast. Engineers are aware of the issue and are investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Login failures | Stream Analytics and Log Analytics

Starting at 17:30 UTC on 21 Mar 2017 a subset of customers using Stream Analytics, Azure Log Analytics and other services may experience login failures while authenticating with their Microsoft accounts. Retries may be successful for some customers. Engineers have identified a potential root cause and are applying a mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Login failures | Stream Analytics and Log Analytics

Starting at 17:30 UTC on 21 Mar 2017 a subset of customers using Stream Analytics, Azure Log Analytics and other services may experience login failures while authenticating with their Microsoft accounts. Retries may be successful for some customers. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Login failures | Stream Analytics and Log Analytics

An alert for Log Analytics and Stream Analytics is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - West Europe

Starting at 13:45 UTC on 21 Mar 2017 a subset of customers using Storage in West Europe may experience latency when accessing their Storage resources in this region. Virtual Machine customers may also be experiencing latency as a result of this issue. Retries may be successful for some customers. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

An alert for Storage in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Log Analytics - East US

Starting at 02:47 UTC on 18 Mar 2017, a subset of customers using Log Analytics in East US may receive intermittent failure notifications - such as internal server errors, or tile or blade not loading -  when performing log search operations. Retries may be successful for some customers. Engineers are determining mitigation options, and the next update will be provided in 4 hours, or as events warrant.

Last Update: A few months ago

Log Analytics - East US

Starting at 02:47 UTC on 18 Mar 2017, a subset of customers using Log Analytics in East US may receive intermittent failure notifications - such as internal server errors, or tile or blade not loading -  when performing log search operations. Retries may be successful for some customers. Engineers continue to evaluate a potential fix, and the next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Log Analytics - East US

Starting at 02:47 UTC on 18 Mar 2017, a subset of customers using Log Analytics in East US may receive intermittent failure notifications - such as internal server errors, or tile or blade not loading -  when performing log search operations. Retries may be successful for some customers. Engineers are continuing to evaluate a potential fix, and the next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Log Analytics - East US

Starting at 02:47 UTC on 18 Mar 2017, a subset of customers using Log Analytics in East US may receive intermittent failure notifications - such as internal server errors, or tile or blade not loading - when performing log search operations. Retries may be successful for some customers. Engineers have applied a potential fix for mitigation and are validating it. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - East US

Starting at 02:47 UTC on 18 Mar 2017, a subset of customers using Log Analytics in East US may receive intermittent failure notifications - such as internal server errors, or tile or blade not loading - when performing log search operations. Retries may be successful for some customers. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - East US

An alert for Log Analytics in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage Availability in East US

SUMMARY OF IMPACT: Between 21:50 UTC on 15 Mar 2017 and 06:00 UTC on 16 Mar 2017, due to an incident in East US affecting Storage, customers and services dependent on Storage may have experienced difficulties provisioning new resources or accessing their existing resources in the region. Engineering confirmed that Azure services that experienced downstream impact included Virtual Machines, Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database, API Management and Azure Stream Analytics. PRELIMINARY ROOT CAUSE: Engineering identified one Storage cluster that lost power and became unavailable. NEXT STEPS: A full detailed Root Cause Analysis is currently being conducted and will be published in approximately 72 hours.

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Engineers are now recovering Azure services and customers should begin observing improvements in accessing resources. The next update will be provided in 60 minutes or as any new information is made available.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 60 minutes or as any new information is made available.

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 60 minutes or as any new information is made available.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 2 hours or as any new information is made available

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 2 hours or as any new information is made available.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Engineers are working on a phased recovery per our power-event recovery plan. We are anticipating an extended recovery time for this incident. The next update will be provided in 2 hours or as any new information is made available.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Data center technicians are on site working to restore power to the scale unit. We are anticipating an extended recovery time for this incident. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers using the following services may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers using the following services may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers using the following services may experience failures provisioning or connecting to Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Storage Availability in East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Customers in this region may also experience failures when trying to access a subset of their Virtual Machines. Engineers have identified a potential root cause and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage provisioning impacting multiple services

Starting at 22:42 UTC on 15 Mar 2017, customers using Storage may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Other services that leverage Storage may also be experiencing impact. Retries may be successful. In addition, a subset of customers in East US may be unable to access their Storage accounts. Engineers have identified a possible fix for the underlying cause, and are applying mitigations. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage provisioning impacting multiple services

Starting at 22:42 UTC on 15 Mar 2017, customers using Storage may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Other services that leverage Storage may also be experiencing impact. Retries may be successful. In addition, a subset of customers in East US may be unable to access their Storage accounts. Engineers have identified a possible fix for the underlying cause, and are applying mitigations. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Virtual Machines or Cloud Services customers may experience failures when attempting to provision resources. Azure Search customers may be unable to create, scale, or delete services. Azure Monitor customers may be unable to turn on diagnostic settings for resources. Azure Site Recovery customers may experience replication failures. API Management 'service activation' in South India will experience a failure. Azure Batch customers will be unable to provision new resources. All Existing Azure Batch pools will schedule tasks as normal. EventHub customers using a service called 'Archive' may experience failures. Customers using Visual Studio Team Services Build will experience failures. Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Customers attempting to create new Virtual Machines or Cloud Services may experience failures. Customers using Azure Search may be unable to create, scale, or delete services. Customers using Azure Monitor may be unable to turn on diagnostic settings for resources. Customers using the Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. Customers using Azure Site Recovery may experience replication failures. Customers using API Management service activation in South India will experience a failure. Customers using Azure Batch will be unable to provision new resources. All Existing Azure Batch pools will schedule tasks as normal. Customers using the feature of the EventHub service called 'Archive' may experience failures. Customers using Visual Studio Team Services Build will experience failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Virtual Machines or Cloud Services customers may experience failures when attempting to provision resources. Azure Search customers may be unable to create, scale, or delete services. Azure Monitor customers may be unable to turn on diagnostic settings for resources. Azure Site Recovery customers may experience replication failures. API Management 'service activation' in South India will experience a failure. Azure Batch customers will be unable to provision new resources. All Existing Azure Batch pools will schedule tasks as normal. EventHub customers using a service called 'Archive' may experience failures. Customers using Visual Studio Team Services Build will experience failures. Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Customers attempting to create new Virtual Machines or Cloud Services may experience failures. Customers using Azure Search may be unable to create, scale, or delete services. Customers using Azure Monitor may be unable to turn on diagnostic settings for resources. Customers using the Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. Customers using Azure Site Recovery may experience replication failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage provisioning impacting multiple services

Starting at 22:42 UTC on 15 Mar 2017, customers using Storage may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Other services that leverage Storage may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage

An alert for Storage is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Azure Media Services and Media Services \ Streaming - Multiple Regions

Starting at 00:30 UTC on 15 Mar 2017, a subset of customers using Media Services \ Streaming and Media Services may experience issues authenticating or creating new Media Services accounts and the streaming of encrypted/protected media may be degraded. Channel operations may also experience latency or failures. Engineers are actively investigating potential mitigation paths. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Azure Media Services and Media Services \ Streaming - Multiple Regions

Starting at 00:30 UTC on 15 Mar 2017, a subset of customers using Media Services \ Streaming and Media Services may experience issues authenticating or creating new Media Services accounts and the streaming of encrypted/protected media may be degraded. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Azure Media Services and Media Services \ Streaming - Multiple Regions

An alert for Media Services and Media Services \ Streaming in multiple regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Express Route - Advisory

Our investigation of alerts for ExpressRoute is complete. Due to the extremely limited number of customers impacted by this issue, we are providing direct communication to those experiencing an issue via http://portal.azure.com and http://manage.windowsazure.com. This message will remain active for 30 minutes before being closed.

Last Update: A few months ago

Express Route - Advisory

An alert for ExpressRoute is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at 02:20 UTC on 14 Mar 2017, a subset of customers using Virtual Machines in West Europe may receive creation failure notifications when provisioning new Virtual Machines in this region. Failures may also occur when scaling or resizing Virtual Machines. This will not affect the availability of currently running Virtual Machines. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at 02:20 UTC on 14 Mar 2017, a subset of customers using Virtual Machines in West Europe may receive creation failure notifications when provisioning new Virtual Machines in this region. Failures may also occur when scaling or resizing Virtual Machines. This will not affect the availability of currently running Virtual Machines. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at 02:20 UTC on 14 Mar 2017, a subset of customers using Virtual Machines in West Europe may receive creation failure notifications when provisioning new Virtual Machines in an existing availability set or when creating new availability sets in this region. This issue does not impact the availability of existing Virtual Machines. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

An alert for Virtual Machines in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Customers unable to create resources within the Azure Portal

We are investigating reports of issues for some customers creating resources in the Azure portal (portal.azure.com). We will update as more is known.

Last Update: A few months ago

Customers unable to download invoice from portal.azure.com

Starting at approximately 16:00 UTC on 09 March 2017, customers may experience difficulties downloading their invoices through the Azure Management Portal ( https://portal.azure.com ). Engineers have identified the underlying root cause as a backend configuration error. Engineers are continuing to implement mitigation steps. As a workaround, customers can access and download their invoices through https://account.windowsazure.com . The next update will be in 2 hours, or as events warrant.

Last Update: A few months ago

Customers unable to download invoice from portal.azure.com

Starting at approximately 16:00 UTC on 09 March 2017, customers may experience difficulties downloading their invoices through the Azure Management Portal ( https://portal.azure.com ). Engineers have identified the root cause and are testing the mitigation fix. As a workaround, customers can access and download their invoices through https://account.windowsazure.com . The next update will be in 2 hours, or as events warrant.

Last Update: A few months ago

Customers unable to download invoice from portal.azure.com

Starting at approximately 16:00 UTC on 09 March 2017, customers may experience difficulties downloading their invoices through the Azure Management Portal ( https://portal.azure.com ). Engineers have found a potential root cause and are deploying a possible fix. As a workaround, customers can access and download their invoices through https://account.windowsazure.com . The next update will be in 2 hours, or as events warrant.

Last Update: A few months ago

Customers unable to download invoice from portal.azure.com

Starting at approximately 16:00 UTC on 09 March 2017, customers may experience difficulties downloading their invoices through the Azure Management Portal ( https://portal.azure.com ). Engineers have found a potential root cause and are exploring possible mitigation options. As a workaround, customers can access and download their invoices through https://account.windowsazure.com . Additional information will be provided as it is known.

Last Update: A few months ago

Customers unable to download invoice from portal.azure.com

Starting at approximately 16:00 UTC on 09 March 2017, customers may experience difficulties downloading their invoices through the Azure Management Portal ( https://portal.azure.com ). Engineers are aware of the issue and are actively investigating. As a workaround, customers can access and download their invoices through https://account.windowsazure.com . The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - Japan East

Between 12:42 and 14:38 UTC on 08 Mar 2017, customers leveraging Storage-dependent services in Japan East may have experienced issues accessing some of their services in this region. The Storage incident is mitigated, but some SQL Database customers may continue to experience some connectivity issues during the recovery phase. Engineers are continuing to monitor the recovery progress and the next update will be provided within 120 minutes, or as events warrant.
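Transient connectivity issues during a recovery phase like this are typically handled with client-side retry logic. The following is a minimal illustrative sketch only (not part of the original communication), assuming a Python client using pyodbc; the driver name, server, database, and credentials are hypothetical placeholders.

import time
import pyodbc  # assumes a SQL Server ODBC driver is installed locally

# Hypothetical connection string; substitute real server, database, and credentials.
CONN_STR = (
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=tcp:<server>.database.windows.net,1433;"
    "DATABASE=<database>;UID=<user>;PWD=<password>;"
    "Encrypt=yes;Connection Timeout=30;"
)

def connect_with_retry(max_attempts=5, base_delay=2.0):
    # Retry the connection with exponential backoff on transient failures.
    for attempt in range(1, max_attempts + 1):
        try:
            return pyodbc.connect(CONN_STR)
        except pyodbc.Error:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * (2 ** (attempt - 1)))

conn = connect_with_retry()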

Last Update: A few months ago

SQL Database - Japan East

Between 12:42 and 14:38 UTC on 08 Mar 2017, customers leveraging Storage-dependent services in Japan East may have experienced issues accessing some of their services in this region. The Storage incident is mitigated, but some SQL Database customers may continue to experience some connectivity issues during the recovery phase. Engineers are continuing to monitor the recovery progress and the next update will be provided within 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Japan East

Summary of impact: Between 12:42 and 14:38 UTC on 08 Mar 2017, a subset of customers using App Service \ Web Apps, Redis Cache, StorSimple, Logic Apps, Stream Analytics and IoT Hub in Japan East may have experienced difficulties connecting to resources hosted in this region due to a Storage incident in the region. Full resolution will be provided once the Storage issue is fully mitigated.

Last Update: A few months ago

Multiple Services - Japan East

Between 12:42 and 14:38 UTC on 08 Mar 2017, customers leveraging Storage-dependent services in Japan East may have experienced issues accessing some of their services in this region. The Storage incident is mitigated, but some of the services are still in a recovery phase. These services include: App Service \ Web Apps, Redis Cache, StorSimple, Logic Apps, SQL Database, Stream Analytics and IoT Hub. Engineers are continuing to monitor the recovery progress and the next update will be provided within 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Japan East

Starting at 12:42 UTC on 08 Mar 2017, as a result of a Storage incident in Japan East, a subset of customers using App Service \ Web Apps, Site Recovery, Virtual Machines, Redis Cache, Data Movement, StorSimple, Logic Apps, Media Services, Key Vault, HDInsight, SQL Database, Automation, Stream Analytics, Backup, IoT Hub and Cloud Services in Japan East may experience issues accessing their services in this region. Engineers have applied a mitigation and some services should be seeing improvements in availability. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this, and these are detailed below. Engineers have applied a mitigation, and most services should be seeing a return to healthy state at this time. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this, and these are detailed below. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Japan East

Starting at 12:42 UTC on 08 Mar 2017, as a result of a Storage incident in Japan East, a subset of customers using App Service \ Web Apps, Site Recovery, Virtual Machines, Redis Cache, Data Movement, StorSimple, Logic Apps, Media Services, Key Vault, HDInsight, SQL Database, Automation, Stream Analytics and Cloud Services in Japan East may experience issues accessing their services in this region. Engineers are actively investigating this issue, and the next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Other services that leverage Storage and SQL Database in this region may also be experiencing impact related to this, and these are detailed below. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Virtual Machines, HDInsight, Redis Cache or App Service \ Web Apps in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may be experiencing impact related to this and additional services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Virtual Machines, HDInsight, Redis Cache or App Service \ Web Apps in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may be experiencing impact related to this and additional services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage and App Service \ Web Apps - Japan East

An alert for Storage and App Service \ Web Apps in Japan East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - West Europe | Investigating

Starting as early as approximately 13:15 UTC on 07 Mar 2017, a subset of customers using Virtual Machines in West Europe may receive intermittent creation failure notifications when provisioning new D-Series V2 Virtual Machines in this region. Failures may also occur when scaling or resizing a D-Series V2 Virtual Machine. Engineers are aware of this issue and are actively investigating. D-Series V1 Virtual Machines are not affected by this issue, and customers can provision these as a possible workaround. The next update will be provided in 60 minutes, or as events warrant.
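As a rough illustration of that workaround (not part of the original communication), the sketch below assumes the 2017-era azure-mgmt-compute Python SDK with azure.common service principal credentials; it lists the VM sizes offered in West Europe and crudely filters out v2 names, so a D-Series V1 size such as Standard_D2 can be chosen instead.

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

# Hypothetical credentials and subscription; substitute real values.
credentials = ServicePrincipalCredentials(
    client_id="<client-id>", secret="<secret>", tenant="<tenant-id>")
compute_client = ComputeManagementClient(credentials, "<subscription-id>")

# List sizes available in West Europe and keep D-Series names that are not v2.
for size in compute_client.virtual_machine_sizes.list(location="westeurope"):
    if size.name.startswith("Standard_D") and not size.name.endswith("_v2"):
        print(size.name)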

Last Update: A few months ago

Virtual Machines - West Europe | Investigating

An alert for Virtual Machines in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

App Service \ Web Apps

SUMMARY OF IMPACT: Between 10:00 and 12:04 UTC on 07 Mar 2017, a subset of customers may have experienced high latency when viewing recently deployed App Service resources in the Azure Management Portal (https://portal.azure.com). Customers may also have experienced difficulties viewing newly created App Service plans. These resources were created successfully and are running as expected. PRELIMINARY ROOT CAUSE: At this stage engineers do not have a definitive root cause. MITIGATION: Engineers performed a manual failover of a back end service to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences.

Last Update: A few months ago

Logic Apps

At 06:39 UTC on 07 Mar 2017, engineers received a monitoring alert for Logic Apps in Multiple Regions. We have concluded our investigation of the alert for Logic Apps, and determined that for the vast majority of customers, no impact was experienced. A limited subset of customers may experience residual impact from this issue, and they will receive separate communications on this via their management portal (portal.azure.com). Users who are seeing impact can self-mitigate by updating their SQL action triggers in their logic apps to use a new connection.

Last Update: A few months ago

Logic Apps

Starting at 06:30 UTC on 07 Mar 2017, customers using Logic Apps may experience difficulties connecting to their SQL connectors. Engineers are aware of this issue and are actively investigating. The SQL connections used in Azure Logic Apps have been impacted by a recent deployment update, and some customers will see connections to their SQL connectors fail. Mitigation: Users who are seeing impact can self-mitigate by updating their SQL action triggers in their logic apps to use a new connection. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Logic Apps

An alert for Logic Apps is being investigated. More information will be provided as it is known.

Last Update: A few months ago

SQL Database

An alert for SQL Database is being investigated. More information will be provided as it is known.

Last Update: A few months ago

App Service \ Web Apps | API call failures for VNET and Hybrid Connections - Multiple Regions

Starting as early as 14:25 UTC on 06 Mar 2017, a very small subset of customers using App Service \ Web Apps in Multiple Regions may receive API call failures when setting up a new VNET and Hybrid connection or modifying existing connections. Availability and runtime operations are not impacted at this time. As a workaround, we recommend customers utilize PowerShell or the Command Line Interface (CLI) to make the API calls. We will be sending detailed information to impacted customers in their Management Portal (https://portal.azure.com/) as updates occur.

Last Update: A few months ago

App Service \ Web Apps | API call failures for VNET and Hybrid Connections - Multiple Regions

Starting at 14:25 UTC on 06 Mar 2017, a small subset of customers using App Service \ Web Apps in Multiple Regions may receive API call failures when setting up a new VNET and Hybrid connection or modifying existing connections. Availability and runtime operations are not impacted at this time. As a workaround, we recommend customers utilize PowerShell or the Command Line Interface (CLI) to make the API calls. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

App Service \ Web Apps | API call failures for VNET and Hybrid Connections - Multiple Regions

An alert for App Service \ Web Apps in multiple regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Cognitive Services | Degraded API performance

Starting at 13:20 UTC on 04 Mar 2017, a subset of customers using Cognitive Services \ Bing Search API, Bing Speech API, Bing Autosuggest API, Bing Spell Check API, Translator Text API and Translator Speech API in multiple regions may receive intermittent failure notifications when performing API calls. Errors may include HTTP error 500 or Gateway Time Out. Engineers have identified a possible underlying root cause, and have taken mitigation steps which are being validated. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Cognitive Services | Degraded API performance

Starting at 13:20 UTC on 04 Mar 2017, a subset of customers using Cognitive Services \ Bing Search API, Bing Speech API, Bing Autosuggest API, Bing Spell Check API, Translator Text API and Translator Speech API in multiple regions may receive intermittent failure notifications when performing API calls. Errors may include HTTP error 500 or Gateway Time Out. Engineers have identified a possible underlying root cause, and are working to determine mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Cognitive Services | Degraded API performance

Starting at 13:20 UTC on 04 Mar 2017, a subset of customers using Cognitive Services \ Bing Search APIs in multiple regions may receive intermittent failure notifications when performing API calls against Cognitive Services APIs. Errors may include HTTP 500 / HTTP 401 or 'Invalid key token'. Other Cognitive Services which rely on the Bing API may also exhibit similar symptoms. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Cognitive Services – Investigating

An alert for Cognitive Services is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Log Analytics - West Europe

Starting at 21:25 UTC on 02 Mar 2017, a subset of customers using Log Analytics in West Europe may experience search failures in this region. Retries may be successful for some customers. Engineers have developed and are currently verifying a fix for this issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - West Europe

Starting at 21:25 UTC on 02 Mar 2017, a subset of customers using Log Analytics in West Europe may experience search failures in this region. Retries may be successful for some customers. Engineers have identified that backend systems are not communicating as they should, and are continuing to investigate possible routes to mitigate the issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - West Europe

Starting at 21:25 UTC on 02 Mar 2017, a subset of customers using Log Analytics in West Europe may experience search failures in this region. Retries may be successful for some customers. Engineers have identified a possible underlying cause, and are continuing to investigate mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - West Europe

Starting at 21:25 UTC on 02 Mar 2017, a subset of customers using Log Analytics in West Europe may experience search failures in this region. Retries may be successful for some customers. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Log Analytics - West Europe

An alert for Log Analytics in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Unable to Access Accounts / Recovered

Engineers are aware of a recent issue for Visual Studio Team Services which has now been mitigated. The next update will be provided in 60 minutes or as events warrant. More information at https://aka.ms/vstsblog .

Last Update: A few months ago

Virtual Machines and Cloud Services - East Asia

SUMMARY OF IMPACT: Between 14:48 and 17:40 UTC on 01 Mar 2017, a subset of customers using Virtual Machines and Cloud Services in East Asia may have received failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. PRELIMINARY ROOT CAUSE: The load balancing service hosting the management endpoints in the region became temporarily unavailable, impacting traffic for service management operations.  MITIGATION: Engineers performed a manual failover which mitigated the issue. NEXT STEPS: We will conduct a comprehensive root cause analysis and provide a report within approximately 72 hours.

Last Update: A few months ago

Virtual Machines and Cloud Services - East Asia | Investigating

Starting at 14:48 UTC on 01 Mar 2017, a subset of customers using Virtual Machines and Cloud Services in East Asia may have experienced difficulties performing service management operations. Engineers have determined that this was caused by an underlying Storage incident on a single scale unit. Engineers are validating a recovery fix for this incident and customers should begin to see signs of recovery. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines and Cloud Services - East Asia | Investigating

Starting at 14:48 UTC on 01 Mar 2017, a subset of customers using Virtual Machines and Cloud Services in East Asia may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines & Cloud Services - East Asia | Investigating

Starting at 14:48 UTC on 01 Mar 2017, a subset of customers using Virtual Machines & Cloud Services in East Asia may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - East Asia | Investigating

Starting at 14:48 UTC on 01 Mar 2017, a subset of customers using Virtual Machines in East Asia may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - Advisory

Starting at 14:48 UTC on 01 Mar 2017, a subset of customers using Virtual Machines in East Asia may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - Advisory

An alert for Virtual Machines in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

HockeyApp - Advisory

Starting at approximately 17:40 UTC on 28 Feb 2017, customers using HockeyApp may experience issues with Build and Crash processing. More information will be provided as it is known.

Last Update: A few months ago

Hockey App - Advisory

An alert for HockeyApp is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Hockey App - Advisory

An alert for HockeyApp is being investigated. More information will be provided as it is known.

Last Update: A few months ago

HDInsight and SQL Database - East US

Starting at 17:57 UTC on 23 Feb 2017, a subset of customers using SQL Database and HDInsight in East US may experience failures or timeouts when performing service management operations - such as create, update, delete - in this region. Availability (connecting to and using existing databases or HDInsight clusters) may be impacted if the server was in the process of being upgraded. Some SQL customers may be seeing signs of recovery. Engineers are continuing to pursue multiple paths for mitigation. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

HDInsight and SQL Database - East US

Starting at 17:57 UTC on 23 Feb 2017, a subset of customers using SQL Database and HDInsight in East US may experience failures or timeouts when performing service management operations - such as create, update, delete - in this region. Availability (connecting to and using existing databases or HDInsight clusters) may be impacted if the server was in the process of being upgraded. Engineers are continuing to investigate multiple mitigation paths. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

HDInsight and SQL Database - East US

Starting at 17:57 UTC on 23 Feb 2017, a subset of customers using SQL Database and HDInsight in East US may experience failures or timeouts when creating servers, databases, and HDInsight clusters in this region. Customers will find that requests for provisioning new SQL databases, SQL database servers, or HDInsight clusters may take longer than expected and could potentially timeout as a result. In many cases, a retry initiated by customers will be successful. Availability (connecting to and using existing databases or HDInsight clusters) is not impacted. Engineers are exploring multiple mitigation paths and are continuing to investigate. The next update will be provided in 4 hours, or as events warrant.

Last Update: A few months ago

HDInsight and SQL Database - East US and South Central US

Starting at 17:57 UTC on 23 Feb 2017, customers using SQL Database and HDInsight may experience issues performing service management operations. All East US customers will see service management failures or timeouts when creating servers, databases, and HDInsight clusters. HDInsight and SQL Database customers in South Central US are now considered mitigated. Availability (connecting to and using existing databases or HDInsight clusters) is not impacted. Additional engineering teams have been added to expedite investigation into the underlying cause in order to determine mitigation paths. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

HDInsight and SQL Database - East US and South Central US

Starting at 17:57 UTC on 23 Feb 2017, customers using SQL Database and HDInsight may experience issues performing service management operations. All East US customers will see service management failures or timeouts when creating servers, databases, and HDInsight clusters. HDInsight customers in South Central US are now considered mitigated. Availability (connecting to and using existing databases or HDInsight clusters) is not impacted. Additional engineering teams have been added to expedite investigation into the underlying cause in order to determine mitigation paths. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

HDInsight and SQL Database - East US and South Central US

Starting at 17:57 UTC on 23 Feb 2017, customers using SQL Database and HDInsight may experience issues performing service management operations. All East US customers will see service management failures or timeouts when creating servers, databases, and HDInsight clusters. Customers in South Central US may see intermittent create failures. Availability (connecting to and using existing databases or HDInsight clusters) is not impacted. Engineers are continuing their investigation into the underlying cause in order to determine mitigation paths. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

HDInsight and SQL Database - East US and South Central US

Starting at 17:57 UTC on 23 Feb 2017, a subset of customers using SQL Database and HDInsight in East US and South Central US may experience issues performing service management operations. Creating servers and databases may result in an error or timeout. In addition, HDInsight may not be able to create clusters. Availability (connecting to and using existing databases or HDInsight clusters) is not impacted. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

HDInsight and SQL Database - East US and South Central US

An alert for HDInsight and SQL Database in East US and South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Multiple Services | Resolved - West US 2

Between 09:50 UTC and 19:21 UTC on 19 Feb 2017, a subset of customers in West US 2 using Storage and Azure IoT Hub may have experienced issues accessing their services in this region due to an automated shutdown of a small number of Compute and Storage scale units that occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated; engineers have finished a structured restart of the scale units involved and have confirmed the above services are mitigated. A detailed root cause analysis report will be provided within 72 hours.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Cloud Services, Storage, Azure Monitor, Activity Logs, and Azure DevTest Labs, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated; engineers have finished a structured restart of the scale units involved and are working to restore the remaining impacted services. A majority of customers will see signs of recovery for their services; the following services have confirmed mitigation: Virtual Machines, SQL Database, Backup, Site Recovery, Redis Cache, App Service \ Web Apps, Azure IoT Hub, Service Bus, Event Hub, DocumentDB, Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as soon as new information is available.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Storage and IoT Hub may experience timeouts or errors when accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated; engineers have finished a structured restart of the scale units involved and are working to mitigate the final impact to storage. A majority of customers will see recovery. The next update will be provided in 60 minutes or as soon as new information is available.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Cloud Services, Storage, Azure Monitor, Activity Logs, and Azure DevTest Labs, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated; engineers have finished a structured restart of the scale units involved and are working to restore the remaining impacted services. A majority of customers will see signs of recovery for their services; the following services have confirmed mitigation: Virtual Machines, SQL Database, Backup, Site Recovery, Redis Cache, App Service \ Web Apps, Azure IoT Hub, Service Bus, Event Hub, DocumentDB, Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as soon as new information is available.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, Cloud Services, Storage, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and Azure DevTest Labs, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated; engineers have finished a structured restart of the scale units involved and are working to restore the remaining impacted services. A majority of customers will see signs of recovery for their services; the following services have confirmed mitigation: SQL Database, App Service \ Web Apps, Azure IoT Hub, Service Bus, Event Hub, DocumentDB, Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as soon as new information is available.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, DocumentDB, Azure DevTest Labs, Service Bus, and Event Hub, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services; the following services have confirmed mitigation: Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to a monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to a monitoring alert. The alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to a monitoring alert. The alert has been investigated, and engineers are currently undertaking a structured restart of the scale units involved. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified the underlying cause, and are actively working to mitigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified the underlying cause, and are actively working to mitigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified the underlying cause, and are actively working to mitigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Visual Studio Online, and DocumentDB as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers are continuing to investigate, and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Cloud Services, Storage, SQL Database, and associated services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Visual Studio Online, and DocumentDB as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers are aware of this issue, and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Cloud Services, Storage, SQL Database, and associated services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services and Storage, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers are aware of this issue, and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Cloud Services, Storage, SQL Database, and associated services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers using Virtual Machines, SQL Database, Cloud Services and Storage in West US 2 may experience issues accessing their services in this region. Engineers are aware of this issue, and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Cloud Services, Storage and SQL Database - West US 2

An alert for Virtual Machines, Cloud Services, Storage and SQL Database in West US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines and Cloud Services - West US 2

An alert for Virtual Machines and Cloud Services in West US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure - Central US

Engineers are aware of a recent issue for Network Infrastructure in Central US which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Logic Apps | Investigating

Starting at 24:00 UTC on 16 Feb 2017, a subset of customers using Logic Apps connecting to Azure Functions may receive intermittent timeouts or failures when connecting to resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Logic Apps | Investigating

An alert for Logic Apps is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Cognitive Services | Resolved

Engineers are aware of a recent issue for Cognitive Services which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Microsoft Azure portal - Disk Blades not Loading for Custom VHDs

Starting at 06:21 UTC on 08 Feb 2017, a subset of customers using the Microsoft Azure portal (https://portal.azure.com) may not be able to load disk blades associated with Azure Resource Manager (ARM) Virtual Machines custom images. There is no impact to service availability or to service management operations. At this time, engineers expect PowerShell to be a workaround option. Engineers identified a recent deployment as a possible underlying cause. Engineers are finishing deployment of a hotfix for mitigation and will then verify the mitigation. The next update will be provided in 60 minutes or as events warrant.
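The post names PowerShell as the expected workaround; as an alternative illustration only (not part of the original communication), the disk information behind the blade can also be read with the 2017-era azure-mgmt-compute Python SDK. The sketch assumes an unmanaged-disk (custom VHD) virtual machine, and the credential, resource group, and VM names are hypothetical placeholders.

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

# Hypothetical credentials and subscription; substitute real values.
credentials = ServicePrincipalCredentials(
    client_id="<client-id>", secret="<secret>", tenant="<tenant-id>")
compute_client = ComputeManagementClient(credentials, "<subscription-id>")

# Read the VM model and print the OS disk VHD URI (unmanaged / custom image case).
vm = compute_client.virtual_machines.get("<resource-group>", "<vm-name>")
print(vm.storage_profile.os_disk.vhd.uri)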

Last Update: A few months ago

Microsoft Azure portal - Capture VHD and/or Resizing Disk Blades Not Loading

Starting at 06:21 UTC on 08 Feb 2017, a subset of customers using the Microsoft Azure portal (https://portal.azure.com) may not be able to load disk blades associated with Azure Resource Manager (ARM) Virtual Machines custom images. There is no impact to service availability or to service management operations. At this time, engineers expect PowerShell to be a workaround option. Engineers identified a recent deployment as a possible underlying cause. Engineers are currently deploying a hotfix for mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Microsoft Azure portal - Capture VHD and/or Resizing Disk Blades Not Loading

Starting at 06:00 UTC on 09 Feb 2017, a subset of customers using the Microsoft Azure portal (https://portal.azure.com) may not be able to load Azure Resource Manager (ARM) Virtual Machine blades when capturing VHD URI or when resizing disks. There is no impact to service availability or to service management operations. At this time, engineers expect PowerShell to be a workaround option. Engineers identified a recent deployment as a possible underlying cause and are verifying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Microsoft Azure portal - Capture VHD and/or Resizing Disk Blades Not Loading

An alert for Microsoft Azure portal is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Web Apps and other services connecting to SQL Databases in North Central US

Starting as early as 13:00 UTC on 08 Feb 2017, a subset of customers using Web Apps or other services hosted on Virtual Machines may experience intermittent issues or failure notifications while attempting to connect their Web Apps or other services to SQL Databases in North Central US. Engineers are in the final steps of mitigation. Most customers should be recovered at this time. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Web Apps connecting to SQL Databases in North Central US

Starting as early as 13:00 UTC on 08 Feb 2017, a subset of customers using Web Apps in Multiple Regions may experience intermittent issues or failure notifications while attempting to connect to SQL Databases in North Central US. Engineers have completed one step of mitigation and customers may be starting to see recovery. Engineers have begun the final steps of mitigation. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

SQL Database - North Central US | Investigating

Starting as early as 13:00 UTC on 08 Feb 2017, a subset of customers using Web Apps in Multiple Regions may experience intermittent issues or failure notifications while attempting to connect to SQL Databases in North Central US. Engineers have identified a previous deployment as an underlying root cause. Customers may see recovery due to previous mitigation steps taken. Additionally, engineers are also rolling back the recent deployment task. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

SQL Database - North Central US | Investigating

Starting as early as 13:00 UTC on 08 Feb 2017, a subset of customers using Web Apps in Multiple Regions may experience intermittent issues or failure notifications while attempting to connect to SQL Databases in North Central US. Engineers have applied partial mitigation and some customers may see recovery. Engineers are continuing to apply additional mitigation steps. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

SQL Database - North Central US | Investigating

Starting as early as 13:00 UTC on 08 Feb 2017, a subset of customers using Web Apps in Multiple Regions may experience intermittent issues or failure notifications while attempting to connect to SQL Databases in North Central US. Engineers have identified a possible fix for the underlying cause, and are currently implementing mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - North Central US | Investigating

Starting as early as 13:00 UTC on 08 Feb 2017, a subset of customers using SQL Database in North Central US may experience intermittent issues or failure notifications while attempting to connect databases in this region to web app resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - North Central US | Investigating

An alert for SQL Database in North Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Azure Active Directory B2C | Requests Failing

Engineers are aware of a recent issue for Azure Active Directory B2C which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services issue under investigation: Multiple Regions

Starting at 20:43 UTC on 06 Feb 2017, a small subset of customers trying to deploy a cloud service package via Visual Studio in multiple regions may receive the error "Value cannot be null" after the "Apply Diagnostics Extension" step. Engineers are aware of this issue and are actively investigating. The next update will be provided as events warrant.

Last Update: A few months ago

SQL Database - North Europe | Investigating

Starting at 18:00 UTC on 06 Feb 2017, a subset of customers using SQL Database in North Europe may experience intermittent control plane unavailability. Retrieving information about servers and databases in the region through the portal may fail.  Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout.  Availability (connecting to and using existing databases) is not impacted. Customers will also see deployment failure notifications when creating new HDInsight clusters in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database and HDInsight - North Europe

An alert for SQL Database and HDInsight in North Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Unable to view metrics in Azure portal

Starting at 11:40 UTC on 03 Feb 2017, customers may be unable to view metrics for their resources hosted in multiple regions. Tiles in the portal may show a grey cloud. Impacted services may include Virtual Machines, Cognitive Services, App Service\Web Apps, and Event Hubs. Directly calling the Azure Insights API may also result in failures. This does not impact service availability. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Unable to view metrics in Azure portal

Starting at 11:40 UTC on 03 Feb 2017, customers may be unable to view metrics for their resources hosted in multiple regions. Tiles in the Azure Management Portal may show a grey cloud. Directly calling the Azure Insights API may result in failures. This does not impact service availability. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Unable to view metrics in Azure portal | Multiple regions.

An alert for Alerts & Metrics in multiple regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

App Service - East US

An alert for App Service in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Azure Active Directory - Multi-Region

Starting at 09:00 UTC on 03 Feb 2017, customers using Azure Active Directory, or services with dependencies on Azure Active Directory, may experience intermittent issues connecting to their resources. Customers may be unable to view Active Directory information blades in the Azure Management Portal, or may receive 503/timeout errors. Engineers have now deployed a mitigation and are monitoring service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory - Multi-Region

Starting at 09:00 UTC on 03 Feb 2017, customers using Azure Active Directory, or services with dependencies on Azure Active Directory, may experience intermittent issues connecting to their resources. Customers may be unable to view Active Directory information blades in the Azure Management Portal, or may receive 503/timeout errors. Engineers have now deployed a mitigation and customers should now be seeing signs of recovery. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory - Multi-Region

Starting at 04:46 UTC on 03 Feb 2017, customers using Azure Active Directory in North and West Europe may experience intermittent difficulties connecting to resources dependent on Active Directory hosted in these regions. Customers may be unable to view Active Directory information blades in the Azure Management Portal, or may receive 503/timeout errors. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Active Directory - Multi-Region

An alert for Azure Active Directory in North Europe and West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Investigating - Azure Directory Tiles not displayed

We are currently investigating an issue where Azure Directory Tiles may fail to load in the Azure Management Portal.

Last Update: A few months ago

Cognitive Services \ Translator Speech API and Cognitive Services \ Translator Text API - Multiple Regions

Starting at 16:46 UTC on 30 Jan 2017, a subset of customers using Cognitive Services \ Translator Speech API and Cognitive Services \ Translator Text API in Multiple Regions may receive error notifications when attempting to perform API calls against the Cognitive Service authentication endpoint. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

Cognitive Services \ Translator Speech API and Cognitive Services \ Translator Text API - Multiple Regions

An alert for Cognitive Services \ Translator Speech API and Cognitive Services \ Translator Text API is being investigated. More information will be provided as it is known.

Last Update: A few months ago

azuredatacatalog.com Site Unresponsive

An alert for azuredatacatalog.com is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines (v2) - Australia East

Starting at 21:09 UTC on 25 Jan 2017, a subset of customers using Virtual Machines in Australia East may receive creation failure notifications when provisioning new Virtual Machines using the New Management Portal (https://portal.azure.com) or PowerShell in this region. Engineers are aware of this issue and are actively investigating. As a temporary workaround, we recommend using the Classic Portal (https://manage.windowsazure.com) to deploy new Virtual Machines. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - Australia East

An alert for Virtual Machines in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Unable To Access Accounts | Recovered

Engineers are aware of a recent issue for Visual Studio Team Services, which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Microsoft Azure Portal - Multiple Regions

Engineers are aware of a recent issue for Microsoft Azure portal which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Microsoft Azure Portal - Multiple Regions

An alert for Microsoft Azure portal in Multi-Region is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Power BI Embedded - North Europe | Investigating

Starting at 11:21 UTC on 17 Jan 2017, a subset of customers using Power BI Embedded in North Europe may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Management Portal - Virtual Machine Size Blades not visible

Starting at 01:00 UTC on 11 Jan 2017, a subset of customers may not be able to view their ‘Virtual Machine Size’ blades and are additionally unable to make changes to the size of their Virtual Machines through the management portal (https://portal.azure.com). Engineers have determined that this is only impacting customers who have created ARM Virtual Machines using a custom image. As a workaround, customers may use PowerShell or the Command Line Interface to change their Virtual Machine size. More information can be found here: https://aka.ms/resizevm-arm. Engineers are deploying a potential mitigation and some customers may be observing recovery. The next update will be provided within 1 hour or as events warrant.
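As an illustration of the PowerShell workaround, here is a minimal sketch, assuming the AzureRM module is installed and a session is signed in; the resource group name, VM name, and target size are placeholders:

    # Placeholder names; resizes an ARM VM without using the portal blade.
    $vm = Get-AzureRmVM -ResourceGroupName "myResourceGroup" -Name "myVm"

    # Optionally list the sizes available in the VM's region before choosing one.
    Get-AzureRmVMSize -Location $vm.Location

    $vm.HardwareProfile.VmSize = "Standard_DS3_v2"
    Update-AzureRmVM -ResourceGroupName "myResourceGroup" -VM $vm

Note that moving to a size not available on the VM's current hardware cluster may require deallocating the VM first.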

Last Update: A few months ago

Microsoft Azure Portal - Issues Viewing Resources

Starting at approximately 22:15 UTC on 10 Jan 2017, customers may experience high latency when viewing recently deployed resources, or intermittently receive failure notifications when attempting to deploy new resource groups or resources to new resource groups in the Azure Portal, https://portal.azure.com. Additionally, customers may encounter issues viewing existing resources in their portals; however, all resources continue to exist and are running as expected. Newly provisioned resources were created successfully. Customers may see these resources intermittently in their Portals and may be able to intermittently manage them through PowerShell or the Command Line Interface. Engineers have identified a possible underlying cause as an increase in backlog requests, and are actively applying mitigation steps. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Microsoft Azure Portal - Issues Viewing Resources

As early as approximately 23:00 UTC on 09 Jan 2017, customers may experience high latency when viewing recently deployed resources, or intermittently receive failure notifications when attempting to deploy new resource groups or resources to new resource groups in the Azure Portal, https://portal.azure.com. Newly provisioned resources were created successfully and are running as expected. Customers may see these resources intermittently in their Portals and may be able to intermittently manage them through PowerShell or the Command Line Interface. Engineers have identified a possible underlying cause, and are working towards mitigation. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Microsoft Azure portal - issues viewing resources

As early as approximately 23:00 UTC on 09 Jan 2017, customers may experience high latency when viewing recently deployed resources in the Azure Portal, https://portal.azure.com. These resources were created successfully and are running as expected, and will appear in customer portals after a brief delay. As a workaround, customers can manage newly provisioned resources programmatically through PowerShell or the Command Line Interface until they appear in the Azure Portal. Engineers are aware of the issue and actively working on mitigation. The next update will be provided in 2 hours or as events warrant.
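As a rough sketch of that programmatic workaround, assuming the AzureRM PowerShell module and placeholder resource names, a newly deployed Virtual Machine that is not yet visible in the portal can usually still be retrieved and managed directly through Azure Resource Manager:

    # Placeholder names; confirm the resource exists and manage it without the portal.
    $vm = Get-AzureRmVM -ResourceGroupName "myResourceGroup" -Name "myNewVm"
    $vm.ProvisioningState      # "Succeeded" confirms the deployment completed

    # Standard lifecycle operations work the same way.
    Stop-AzureRmVM -ResourceGroupName "myResourceGroup" -Name "myNewVm" -Force
    Start-AzureRmVM -ResourceGroupName "myResourceGroup" -Name "myNewVm"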

Last Update: A few months ago

Microsoft Azure portal - issues viewing resources

An alert for Microsoft Azure portal is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage and Virtual Machines - West US 2

Starting at 22:12 on 10 Jan 2017, a subset of customers using Virtual Machines in West US 2 may experience restarts and connection failures when trying to access Virtual Machines hosted in this region.  Concurrently, a subset of customers using Storage in West US 2 may experience higher than expected latency, timeouts or HTTP 500 errors when accessing data stored on Storage accounts hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - West US 2

Engineers are aware of a recent issue for Storage and Virtual Machines in West US 2 which our telemetry indicates is mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Azure Machine Learning - South Central US, Southeast Asia, and West Europe

An alert for Machine Learning in South Central US, Southeast Asia and West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Microsoft Azure portal - issues viewing new resources or subscriptions

As early as approximately 08:00 UTC on 10 Jan 2017, customers may experience high latency when viewing recently deployed services in the Azure Portal, https://portal.azure.com. Customers may also experience difficulties viewing newly created subscriptions in the Azure Portal. These resources were created successfully and are running as expected. As a workaround, customers can view and manage newly provisioned services and subscriptions in the Azure Classic Portal, https://manage.windowsazure.com, or programmatically through the Command Line Interface. Engineers are performing a backend configuration change, and taking steps to verify the health of the service. The next update will be provided in 60 minutes or as events warrant.
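For reference, a minimal PowerShell sketch of the programmatic path, assuming the AzureRM module; equivalent checks are possible through the Command Line Interface:

    # Sign in interactively, then list every subscription visible to the account.
    Login-AzureRmAccount
    Get-AzureRmSubscription

    # Newly provisioned services can then be enumerated by resource group.
    Get-AzureRmResourceGroup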

Last Update: A few months ago

Microsoft Azure portal - issues viewing new resources or subscriptions

As early as approximately 08:00 UTC on 10 Jan 2017, customers may experience high latency when viewing recently deployed services in the Azure Portal, https://portal.azure.com. Customers may also experience difficulties viewing newly created subscriptions in the Azure Portal. These resources were created successfully and are running as expected. As a workaround, customers can view and manage newly provisioned services and subscriptions in the Azure Classic Portal, https://manage.windowsazure.com, or programmatically through the Command Line Interface. Engineers are currently developing a hotfix to mitigate this issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Microsoft Azure portal - issues viewing new resources or subscriptions

An alert for Microsoft Azure portal is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Multiple Services - South India

Engineers are aware of a recent issue for multiple services in South India which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Azure Functions - Region Selection Issues in Portal | Investigating

Starting at 00:35 UTC on 22 Dec 2016, customers using Azure Functions may not be able to select regions in the Management Portal and the Functions dashboard when creating applications. Customers provisioning new App Service \ Web Apps in Visual Studio may also be unable to select regions. Engineers have identified a software issue in a recent deployment as an underlying cause and are applying mitigation. The next update will be provided in 4 hours, or as events warrant.

Last Update: A few months ago

Azure Functions - Region Selection Issues in Portal | Investigating

Starting at 18:10 UTC on 22 Dec 2016, customers using Azure Functions may not be able to select regions in the Management Portal when creating Function applications. Customers provisioning new App Service \ Web Apps in Visual Studio may also be unable to select regions. Engineers are investigating a recent deployment as a potential underlying cause and are working on mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Azure Functions - Region Selection Issues in Portal | Investigating

Starting at 18:10 UTC on 22 Dec 2016, customers using Azure Functions may not be able to select regions in the Management Portal when creating Function applications. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Visual Studio Team Services

Starting at 23:55 UTC on 22 Dec 2016, a subset of customers using Visual Studio Team Services, Visual Studio Team Services \ Build & Deployment, and Visual Studio Team Services \ Load Testing may intermittently experience degraded performance and slowness or 500 errors while accessing accounts or navigating through Visual Studio Online workspaces. Retries may be successful for some customers. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant. More information can be found on https://aka.ms/vstsblog

Last Update: A few months ago

Visual Studio Team Services

An alert for Visual Studio Team Services, Visual Studio Team Services \ Load Testing and Visual Studio Team Services \ Build & Deployment/Build (XAML) in North Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure - West Europe

An alert for Network Infrastructure in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

HDInsight - North Central US

Starting at 22:30 UTC on 09 Dec 2016, customers using HDInsight in North Central US may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers have identified a recent change in backend systems as a possible underlying cause, and are working to roll back the change to mitigate the issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

HDInsight - North Central US

Starting at 22:30 UTC on 09 Dec 2016, customers using HDInsight in North Central US may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

HDInsight - North Central US

An alert for HDInsight in North Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

SQL Database - North Central US

Starting at 10:00 UTC on 09 Dec 2016, customers using SQL Database in North Central US may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. Engineers discovered a backend system that had fallen into an unhealthy state and are currently applying mitigation measures. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - North Central US

Starting at approximately 10:00 UTC on 09 Dec 2016, customers using SQL Database in North Central US may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. Engineers are aware of this issue and are actively taking steps to mitigate the issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database - North Central US

An alert for SQL Database in North Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines (v2) - South Central US

Starting as early as approximately 20:00 UTC on 07 Dec 2016, customers using Virtual Machines (v2) in South Central US may receive failure notifications when attempting to perform networking-related update operations (such as network interfaces, NAT rules, or load balancing) to existing Virtual Machine (v2) configurations. This only affects networking-related operations on Virtual Machines (v2) as all other service management operations (such as Start, Stop, Create, Delete) on all Virtual Machines are functional. Engineers have identified a software error in a recent deployment as preliminary root cause and have created a software update to mitigate. This software update is currently being deployed. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines (v2) - South Central US

Starting as early as approximately 20:00 UTC on 07 Dec 2016, customers using Virtual Machines (v2) in South Central US may receive failure notifications when attempting to perform networking-related update operations (such as network interfaces, NAT rules, or load balancing) to existing Virtual Machine (v2) configurations. Engineers are aware of this issue and are in the process of implementing possible mitigation steps. Existing Virtual Machines are not impacted, and customers can still perform standard service management operations (such as Start, Stop, Create, Delete) on them. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Multiple Services - North Europe - Service degradation.

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Virtual Machines, Azure Search and Web Apps in North Europe may experience intermittent connection failures when trying to access resources hosted in this region. Engineers have applied a mitigation, and most services should be seeing a return to healthy state at this time. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - North Europe - Service degradation.

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Virtual Machines, Azure Search and Web Apps in North Europe may experience intermittent connection failures when trying to access resources hosted in this region. Engineers are aware of this issue which is caused by an underlying storage issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - North Europe

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Storage in North Europe may experience difficulties connecting to resources hosted in this region. Retries may succeed for some customers. Other services that leverage Storage in this region may also be experiencing impact related to this, and these are detailed below. Engineers have applied a mitigation, and most services should be seeing a return to healthy state at this time. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - North Europe

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Storage in North Europe may experience difficulties connecting to resources hosted in this region. Retries may succeed for some customers. Other services that leverage Storage in this region may also be experiencing impact related to this, and these are detailed in the post below. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - Web Apps - North Europe - Service degradation.

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Virtual Machines and Web Apps in North Europe may experience intermittent connection failures when trying to access resources hosted in this region. Engineers are aware of this issue which is caused by an underlying storage issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - North Europe

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Storage in North Europe may experience difficulties connecting to resources hosted in this region. Other services that leverage Storage in this region, such as Virtual Machines and Web Apps, may also be experiencing related impact. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - North Europe

An alert for Storage in North Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - Central US

Starting at 05:33 UTC on 26 Nov 2016, a subset of customers using Virtual Machines in Central US may experience connection failures when trying to access Virtual Machines hosted in this region. Engineers are aware of this issue and are actively investigating root cause. Customers should start to see mitigation at this time. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - Central US

An alert for Virtual Machines in Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Microsoft Azure Portal - Errors using Azure Resource Manager to create Virtual Machines

SUMMARY OF IMPACT: Between 16:15 UTC and 22:20 UTC on 21 Nov 2016, customers attempting to use Azure Resource Manager (ARM) to create Virtual Machines from within the Microsoft Azure portal (https://portal.azure.com) may have experienced an error message related to unavailable regions. Customers who wanted to use ARM to provision new Virtual Machine resources could do so by using PowerShell, the Azure Command-Line Interface, or REST APIs. Customers could still use ARM within the management portal to deploy other ARM enabled resources. PRELIMINARY ROOT CAUSE: Engineers discovered an issue with recent changes to the underlying code. MITIGATION: Engineers deployed a hotfix to resolve the issue, and ensured that the Virtual Machine deployment process returned to a healthy state. NEXT STEPS: Engineers will further investigate the deployment process and the software issue's underlying cause to prevent future occurrences. For customers still experiencing this issue, we recommend accessing the portal via a private browsing session, clearing the browser cache, trying a different browser, or using this link: https://portal.azure.com/?nocdn=force.
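As an illustration of the PowerShell path for provisioning an ARM Virtual Machine while the portal blade was failing, here is a hedged sketch; it assumes the AzureRM module, and every name, size, and image reference below is a placeholder. With an AzureRM version that supports managed disks the OS disk is created from the image automatically; older versions additionally require Set-AzureRmVMOSDisk with a storage account VHD URI.

    # All names below are placeholders; an existing resource group and network can be reused.
    $rg   = "myResourceGroup"
    $loc  = "eastus"
    $cred = Get-Credential    # local administrator credentials for the new VM

    New-AzureRmResourceGroup -Name $rg -Location $loc

    # Supporting network resources.
    $subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.0.0/24"
    $vnet = New-AzureRmVirtualNetwork -ResourceGroupName $rg -Location $loc -Name "myVnet" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
    $pip = New-AzureRmPublicIpAddress -ResourceGroupName $rg -Location $loc -Name "myPip" -AllocationMethod Dynamic
    $nic = New-AzureRmNetworkInterface -ResourceGroupName $rg -Location $loc -Name "myNic" -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id

    # Build the VM configuration and submit it directly to Azure Resource Manager.
    $vmConfig = New-AzureRmVMConfig -VMName "myVm" -VMSize "Standard_DS1_v2" |
        Set-AzureRmVMOperatingSystem -Windows -ComputerName "myVm" -Credential $cred |
        Set-AzureRmVMSourceImage -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" -Skus "2016-Datacenter" -Version "latest" |
        Add-AzureRmVMNetworkInterface -Id $nic.Id

    New-AzureRmVM -ResourceGroupName $rg -Location $loc -VM $vmConfig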

Last Update: A few months ago

Microsoft Azure Portal - Errors using Azure Resource Manager to create Virtual Machines

Customers attempting to use Azure Resource Manager (ARM) to create Virtual Machines from within the Microsoft Azure portal (https://portal.azure.com) may experience an error message related to unavailable regions. Customers who want to use ARM to provision new Virtual Machine resources may do so by using PowerShell, Azure Command-Line Interface or REST APIs. Customers can still use ARM within the management portal to deploy other ARM enabled resources. Engineers are seeing partial recovery and are working towards mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Microsoft Azure Portal - Errors using Azure Resource Manager to create Virtual Machines

Customers attempting to use Azure Resource Manager (ARM) to create Virtual Machines from within the Microsoft Azure portal (https://portal.azure.com) may experience an error message related to unavailable regions. Customers who want to use ARM to provision new Virtual Machine resources may do so by using PowerShell, Azure Command-Line Interface or REST APIs. Customers can still use ARM within the management portal to deploy other ARM enabled resources. Engineers are currently investigating this issue and taking steps to deploy a hotfix to mitigate this issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Microsoft Azure Portal - Errors using Azure Resource Manager to create Virtual Machines

Customers attempting to use Azure Resource Manager (ARM) to create Virtual Machines from within the Microsoft Azure portal (https://portal.azure.com) may experience an error message related to unavailable regions. Customers who want to use ARM to provision new Virtual Machine resources may do so by using PowerShell, Azure Command-Line Interface or REST APIs. Customers can still use ARM within the management portal to deploy other ARM enabled resources. Engineers are currently investigating this issue and taking steps to deploy a hotfix to mitigate this issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Microsoft Azure Portal - Errors using Azure Resource Manager to create Virtual Machines

An alert for Microsoft Azure Portal is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - East US

Starting at 03:20 UTC on 16 Nov 2016, a subset of customers using Virtual Machines in East US may experience connection failures when trying to access Virtual Machines hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US

Starting at 03:20 UTC on 16 Nov 2016, a subset of customers using Virtual Machines in East US may experience connection failures when trying to access Virtual Machines hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US

Starting at 03:20 UTC on 16 Nov 2016, a subset of customers using Virtual Machines in East US may experience connection failures when trying to access Virtual Machines hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - East US

An alert for Virtual Machines in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - East US

An alert for Virtual Machines in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Microsoft Azure Portal - Difficulty viewing subscription information

Starting at approximately 19:00 UTC on 10 Nov 2016, customers may experience difficulties viewing newly created subscriptions in the Azure Portal (https://portal.azure.com). Viewing your subscription within the Azure Classic Portal (https://manage.windowsazure.com) or programmatically through command line interfaces is not affected by this issue. Engineers are investigating a back end configuration issue as a preliminary root cause and are currently deploying mitigation steps. Customers should begin to see signs of improvement. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Microsoft Azure Portal - Difficulty viewing subscription information

Starting at approximately 19:00 UTC, customers may experience difficulties viewing newly created subscriptions in the Azure Portal (https://portal.azure.com). Viewing your subscription within the Azure Classic Portal (https://manage.windowsazure.com) or programmatically through command line interfaces is not affected by this issue. Engineers are aware of the issue and are working towards a path to mitigation.

Last Update: A few months ago

Microsoft Azure Portal - Difficulty viewing subscription information

An alert for Microsoft Azure Portal is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services \ Build

Starting at 19:37 UTC on 09 Nov 2016, a subset of customers using Visual Studio Team Services \ Build in North Central US and South Central US may experience longer than usual build times. Customers are advised not to cancel and resubmit builds, as cancelling and resubmitting will push the job to the back of the queue. More information can be found on http://aka.ms/vstsblog. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment/Build (XAML)

An alert for Visual Studio Team Services \ Build & Deployment/Build (XAML) in North Central US and South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Backup, Site Recovery and StorSimple - Multiple regions

An alert for Backup, Site Recovery and StorSimple in Multiple regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - North Central US

Starting at 22:42 UTC on 27 Oct 2016, a very limited number of customers using Virtual Machines in North Central US may experience connectivity failures or timeouts when their Virtual Machines attempt to create outbound connections. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Central US

An alert for Virtual Machines in North Central US is being investigated. A subset of customers may be impacted. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure - East Asia

Starting at 14:10 UTC on 26 Oct 2016, a subset of customers with services hosted in East Asia may experience degraded performance, latency, or time-outs when accessing their resources located in this region. Impacted services include, but are not limited to, Virtual Machines, App Service \ Web Apps, Storage, Azure Search, and Service Bus. New service creation may also fail for customers. Some Virtual Machine customers will have experienced a reboot of their VMs in this region. Engineers are aware of this issue, and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - East Asia

An alert for Network Infrastructure in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure - East Asia

An alert for Virtual Machines, Network Infrastructure, App Service \ Web Apps, Storage, Service Bus and Azure Search in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - East Asia

An alert for Virtual Machines in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - East Asia

An alert for Virtual Machines in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Microsoft Azure Portal - Multiple Regions

Starting at 09:00 UTC on 20 Oct 2016, customers using Microsoft Azure Portal (portal.azure.com) may see the 'Resize', 'Diagnostics', 'Load Balancer' and 'Availability sets' buttons greyed out for Classic Virtual Machines. Deployed resources are not impacted. The options are still available through PowerShell and the Classic Portal (manage.windowsazure.com). Engineers have identified a potential underlying root cause and are taking mitigation steps. The next update will be provided in 60 minutes, or as events warrant.
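As one example of performing a greyed-out operation outside the portal, here is a minimal sketch of resizing a Classic Virtual Machine with the classic (Service Management) Azure PowerShell module; the cloud service name, VM name, and target size are placeholders:

    # Placeholder names; requires the classic Azure (Service Management) PowerShell module.
    Add-AzureAccount
    Get-AzureVM -ServiceName "myCloudService" -Name "myClassicVm" |
        Set-AzureVMSize -InstanceSize "Standard_D2" |
        Update-AzureVM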

Last Update: A few months ago

Microsoft Azure Portal - Multiple Regions

Starting at 09:00 UTC on 20 Oct 2016, customers using Microsoft Azure Portal (portal.azure.com) may see the 'Resize', 'Diagnostics', 'Load Balancer' and 'Availability sets' buttons greyed out for Classic Virtual Machines. Deployed resources are not impacted. The options are still available through PowerShell and the Classic Portal (manage.windowsazure.com). Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Microsoft Azure Portal - Multiple Regions

An alert for Microsoft Azure portal in Multi-Region is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

Engineers are aware of a recent issue for Visual Studio Team Services in Multiple Regions which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Microsoft Azure portal

Engineers have confirmed the intermittent issues accessing the Azure portal have been mitigated.

Last Update: A few months ago

Microsoft Azure portal

Engineers have confirmed the intermittent issues accessing the Azure portal have been mitigated.

Last Update: A few months ago

Microsoft Azure portal

An alert for Microsoft Azure portal is being investigated. Customers may encounter 503 errors when navigating to their portal. Engineers have identified a preliminary root cause and are working towards mitigation. More information will be provided as it is known.

Last Update: A few months ago

Microsoft Azure portal

An alert for Microsoft Azure portal is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment - North Central US and West Europe

An alert for Visual Studio Team Services \ Build & Deployment in North Central US and West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including SQL DB, Virtual Machines, Virtual Network, Media Services, Key Vault and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services are seeing signs of recovery. Engineers are continuing to investigate mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

[Extended Recovery] Storage – East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services with dependencies on the affected Storage scale unit may also experience service degradation. Customers with dependencies on the affected Storage scale unit who are still impacted will receive direct communication via their management portal (https://portal.azure.com). More nodes have been recovered and engineers are working on recovering the few nodes that are still affected. The next update will be provided as more information is made available.

Last Update: A few months ago

[Extended Recovery] Storage – East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services with dependencies on the affected Storage scale unit may also experience service degradation. Customers with dependencies on the affected Storage scale unit who are still impacted will receive direct communication via their management portal (https://portal.azure.com). Engineers have applied mitigation steps and have observed that a majority of the scale unit has recovered. The next update will be provided as more information is made available.

Last Update: A few months ago

[Extended Recovery] Storage – East Asia

Engineers continue to implement mitigation steps, and more customers have reported their services being restored. Service Bus, Site Recovery, Azure Backup, IoT Suite, Managed Cache, Redis Cache, Stream Analytics, HDInsight, Event Hub, and API Management have reported their services are fully mitigated. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services that depend on the affected Storage scale unit may also experience service degradation; please refer to the related message for details on secondary impacted services.

Last Update: A few months ago

[Extended Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Suite (only in East Asia): Fully recovered.
Media Services: Customers may experience time outs.
HDInsight: Fully recovered.
Site Recovery: Fully recovered.
API Management: Fully recovered.
SQL Database: Customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary (see the PowerShell sketch below this update). Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully recovered.
Event Hub: Fully recovered.
Stream Analytics: Fully recovered.
Managed Cache and Redis Cache: Fully recovered.
Azure Backup: Fully recovered.
The next update will be provided as soon as more information is available.
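For customers who already have active geo-replication configured, a minimal PowerShell sketch of the geo-secondary failover mentioned above, assuming the AzureRM.Sql cmdlets and placeholder resource names; the command runs against the secondary database, and -AllowDataLoss can be added for a forced failover when the primary is unreachable:

    # Placeholder names; promotes the geo-secondary database to primary.
    Set-AzureRmSqlDatabaseSecondary `
        -ResourceGroupName "secondaryResourceGroup" `
        -ServerName "secondary-server" `
        -DatabaseName "myDatabase" `
        -PartnerResourceGroupName "primaryResourceGroup" `
        -Failover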

Last Update: A few months ago

[Recovery] Storage – East Asia

Engineers continue to implement mitigation steps, and more customers have reported their services being restored. Service Bus, Site Recovery, Azure Backup, IoT Suite, Managed Cache, Redis Cache, and Stream Analytics have reported their services are fully mitigated. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services that depend on the affected Storage scale unit may also experience service degradation; please refer to the related message for details on secondary impacted services.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Suite (only in East Asia): Fully recovered.
Media Services: Customers may experience time outs.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Fully recovered.
API Management: Customers may experience service management operation errors via API calls or Portal.
SQL Database: Customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully recovered.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Fully recovered.
Managed Cache and Redis Cache: Fully recovered.
Azure Backup: Fully recovered.
The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] Storage – East Asia

Engineers have begun mitigation and continue to see services improving, and some customers have reported their services being restored. Service Bus, Site Recovery, Azure Backup and IoT Suite have reported their services are fully mitigated. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services that depend on the affected Storage scale unit may also experience service degradation; please refer to the related message for details on secondary impacted services.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Suite (only in East Asia): Fully mitigated. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations.
Media Services: Validating recovery. Customers may experience time outs.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Fully recovered at 20:00 UTC. Customers may have experienced operation failures.
API Management: Customers may experience service management operation errors via API calls or Portal.
SQL Database: Customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully mitigated. Customers may have experienced issues accessing their resources in the region.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Customers may experience timeouts when accessing their resources.
Managed Cache and Redis Cache: Customers may be unable to access their services.
Azure Backup: Fully mitigated. A subset of Azure Backup users with vaults in East Asia may encounter operation failures.
The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Hub (only in East Asia): Fully mitigated. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations.
Media Services: Validating recovery. Customers may experience time outs.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Recovered at 20:00 UTC. Customers may have experienced operation failures.
API Management: Customers may experience service management operation errors via API calls or Portal.
SQL Database: Customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully mitigated. Customers may have experienced issues accessing their resources in the region.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Customers may experience timeouts when accessing their resources.
Managed Cache and Redis Cache: Customers may be unable to access their services.
Azure Backup: Fully mitigated. A subset of Azure Backup users with vaults in East Asia may encounter operation failures.
The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state, including Key Vault, Service Bus, Site Recovery, Azure Backup, and IoT Suite (only in East Asia). Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Hub (only in East Asia): Validating mitigation. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations.
Media Services: Validating recovery. Customers may experience time outs.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Recovered at 20:00 UTC. Customers may have experienced operation failures.
API Management: Customers may experience service management operation errors via API calls or Portal.
SQL Database: Customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully mitigated. Customers may have experienced issues accessing their resources in the region.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Customers may experience timeouts when accessing their resources.
Managed Cache and Redis Cache: Customers may be unable to access their services.
Azure Backup: A subset of Azure Backup users with vaults in East Asia may encounter operation failures.
The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit: IoT Suite (only in East Asia), Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, and Key Vault. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Suite customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Site Recovery, Service Bus and Key Vault have been recovered. The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] Storage – East Asia

Engineers have begun mitigation and are seeing services improving, and some customers have reported their services being restored. Service Bus and Site Recovery have reported their services are fully mitigated. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services that depend on the affected Storage scale unit may also experience service degradation; please refer to the related message for details on secondary impacted services.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit: IoT Hub (only in East Asia), Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, and Key Vault. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Site Recovery, Service Bus and Key Vault have been recovered. The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] Storage – East Asia

Engineers have begun mitigation and are seeing services improving, and some customers have reported their services being restored. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services that depend on the affected Storage scale unit may also experience service degradation; please refer to the related message for details on secondary impacted services.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, Key Vault, and IoT Hub (East Asia region only). SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Service Bus and Key Vault have been recovered. The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Mitigation in progress] Storage – East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services that have a dependency on the affected Storage scale unit may also experience service degradation; please refer to the separate message for details on secondary impacted services.

Last Update: A few months ago

[Mitigation in progress] - Multiple services impacted by underlying Storage incident - East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, and IoT Hub. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Mitigation in progress] - Multiple services impacted by underlying Storage incident - East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, and IoT Hub. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided as soon as more information is available.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Some services are seeing improvement and engineers are continuing to work towards full mitigation for the region. The next update will be provided as soon as further information is available.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including Site Recovery, API Management, Stream Analytics, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services have been fully recovered. Engineers have identified a potential root cause and are continuing to work through mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including SQL DB, Virtual Machines, Virtual Network, Media Services, Key Vault and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services are seeing signs of recovery. Engineers have identified a potential root cause and are continuing to work towards mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including SQL DB, Virtual Machines, Virtual Network, Media Services, Key Vault and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services are seeing signs of recovery. Engineers are continuing to investigate mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, you have been identified as a customer who may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including SQL DB, Virtual Machines, Virtual Network, Media Services, Key Vault and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services are seeing signs of recovery. Engineers are continuing to investigate mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

[Validating Mitigation] SQL Database - West US

Starting at approximately 18:31 UTC on 30 Sep 2016, customers using SQL Database in the West US region may experience issues performing service management operations. Server and Database create, drop, rename and change edition or performance tier operations may not complete successfully. Customers may see availability issues (connecting to and using existing databases), and retries may be successful. We recommend customers who are impacted and have geo-replication for their SQL Databases to failover. Please visit https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/ for more information on these capabilities. Engineers are currently validating mitigation steps that were applied. The next update will be provided as soon as more information is made available.
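For illustration only (not part of the original status text above): because retries may succeed during transient availability issues like this one, client code commonly wraps connection attempts in a retry loop with exponential backoff and jitter. The sketch below is generic Python; the connect callable, attempt limits, and delays are assumptions for illustration, not values prescribed by this status update.

```python
# Generic retry-with-backoff sketch for transient connection failures.
# The connect() callable, retry limits, and delays are illustrative placeholders.
import random
import time


def connect_with_retry(connect, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Call connect() until it succeeds or max_attempts is reached.

    Exponential backoff with jitter spreads out retries so many clients do not
    hit a recovering service at the same instant.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except Exception as exc:  # in real code, catch only the driver's transient errors
            if attempt == max_attempts:
                raise
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            delay += random.uniform(0, delay / 2)  # jitter
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)


# Example usage with a hypothetical pyodbc connection string:
# conn = connect_with_retry(lambda: pyodbc.connect(CONNECTION_STRING))
```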

Last Update: A few months ago

[Validating Mitigation] Multiple Services Impacted due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may experience downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Azure Data Catalog customers may experience a delay for newly published data to show up in the portal. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Engineers are currently validating mitigation steps that were applied. The next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services Impacted due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may experience downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Azure Data Catalog customers may experience a delay for newly published data to show up in the portal. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Engineers are currently validating mitigation steps that were applied. The next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services Impacted Due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may experience downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Azure Data Catalog customers may experience a delay for newly published data to show up in the portal. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services Impacted Due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may experience downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Azure Data Catalog customers may experience a delay for newly published data to show up in the portal. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services Impacted Due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may experience downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Azure Data Catalog customers may experience a delay for newly published data to show up in the portal. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Media Services customers may experience failures with REST APIs causing streaming or encoding issues. Engineers have performed a potential mitigation for Media Services and are in the process of validating recovery. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services Impacted Due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may be experiencing downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Azure Data Catalog customers may experience a delay for newly published data to show up in the portal. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Media Services customers may experience failures with REST APIs causing streaming or encoding issues. Engineers have performed a potential mitigation for Media Services and are in the process of validating recovery. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

SQL Database - West US

Starting at approximately 18:31 UTC on 30 Sep 2016, customers using SQL Database in the West US region may experience issues performing service management operations. Server and Database create, drop, rename and change edition or performance tier operations may not complete successfully. Customers may see availability issues (connecting to and using existing databases), and retries may be successful. We recommend customers who are impacted and have geo-replication for their SQL Databases to failover. Please visit https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/ for more information on these capabilities. Engineers are currently investigating networking issues and mitigation options. The next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services Impacted Due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may experience downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Azure Data Catalog customers may experience a delay for newly published data to show up in the portal. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Media Services customers may experience failures with REST APIs causing streaming or encoding issues. Engineers have performed a potential mitigation for Media Services and are in the process of validating recovery. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services Impacted Due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may experience downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Azure Data Catalog customers may experience a delay for newly published data to show up in the portal. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Media Services customers may experience failures with REST APIs causing streaming or encoding issues. Engineers have performed a potential mitigation for Media Services and are in the process of validating recovery. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services Impacted Due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may experience downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Media Services customers may experience failures with REST APIs causing streaming or encoding issues. Azure Data Catalog customers may experience a delay for newly published data to show up in the portal. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services Impacted Due to SQL issue in West US

Starting at approximately 18:31 UTC on 30 Sep 2016, due to an issue with SQL Database in West US, customers may experience downstream impact to services that have a dependency on SQL in the region. WebApp customers may experience a slow response when restarting their WebApps and see API management calls failing. Media Services customers may experience failures with REST APIs causing streaming or encoding issues. Azure Data Catalog customers may experience intermittent failures when attempting to perform search requests. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services - West US

The following services are impacted by a related SQL issue. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Web Apps deployed to the region may also experience latency and API Management failures. Customers using Media Services may experience failures with REST API's causing streaming or encoding issues. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

SQL Database - West US

Starting at approximately 18:31 UTC on 30 Sep 2016, customers using SQL Database in the West US region may experience issues performing service management operations. Server and Database create, drop, rename and change edition or performance tier operations may not complete successfully. Customers may see availability issues (connecting to and using existing databases), and retries may be successful. We recommend customers who are impacted and have geo-replication for their SQL Databases to failover. Please visit https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/ for more information on these capabilities.

Last Update: A few months ago

Multiple Services - West US

Starting at approximately 18:31 UTC on 30 Sep 2016, customers using SQL Database in the West US region may experience issues performing service management operations. Server and Database create, drop, rename and change edition or performance tier operations may not complete successfully. Customers may see availability issues (connecting to and using existing databases), and retries may be successful. We recommend customers who are impacted and have geo-replication for their SQL Databases to failover. Please visit https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/ for more information on these capabilities. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Web Apps deployed to the region may also experience latency and API Management failures. Customers using Media Services may experience failures with REST API's causing streaming or encoding issues. Engineers are currently investigating mitigation steps and the next update will be provided as soon as more information is made available.

Last Update: A few months ago

Multiple Services - West US

Starting at approximately 18:31 UTC on 30 Sep 2016, customers using SQL Database in the West US region may experience issues performing service management operations. Server and Database create, drop, rename and change edition or performance tier operations may not complete successfully. Customers may see availability issues (connecting to and using existing databases), and retries may be successful. We recommend customers who are impacted and have geo-replication for their SQL Databases to failover. Azure Data Movement services deployed in the region may also experience issues accessing monitoring. Web Apps deployed to the region may also experience latency and API Management failures. Engineers are currently investigating mitigation steps and the next update will be provided in 60 minutes.

Last Update: A few months ago

SQL Database - West US

Starting at approximately 18:31 UTC on 30 Sep 2016, customers using SQL Database in the West US region may experience issues performing service management operations. Server and Database create, drop, rename and change edition or performance tier operations may not complete successfully. Customers may see availability issues (connecting to and using existing databases), and retries may be successful. Engineers are currently investigating and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Power BI Embedded - Multiple Regions

Starting at 04:19 UTC on 30 Sep 2016, a subset of customers using Power BI Embedded in multiple regions may receive '401 Unauthorized' errors when loading embedded reports. Some customers may be able to load reports after closing the error message. Engineers have identified a recent deployment as a potential underlying cause and are issuing a hotfix for mitigation. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

SQL Database - Multiple Regions

Starting at approximately 18:31 UTC on 30 Sep 2016, customers using SQL Database in the West US region may experience issues performing service management operations. Retrieving information about servers and databases through the Azure Management Portal may result in an error or timeout. Server and Database create, drop, rename and change edition or performance tier operations may also not complete successfully. Customers may see availability issues (connecting to and using existing databases), and retries may be successful. Engineers also received a monitoring alert for SQL Database in North Central US, North Europe, South Central US, Southeast Asia, West Europe. We have concluded our investigation of the alert and confirmed that all services are healthy. Engineers are currently investigating and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database - Multiple Regions

An alert for SQL Database in Multiple Regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Power BI Embedded - Multiple Regions

Starting at 09:21 UTC on 30 Sep 2016, a subset of customers using Power BI Embedded in multiple regions may receive 401 errors when loading embedded reports. Some customers may be able to load the reports after clicking 'Okay' on the error message. Engineers are aware of the issue and are investigating a potential underlying cause. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Power BI Embedded - Multiple Regions

Starting at 09:21 UTC on 30 Sep 2016, customers using Power BI Embedded in multiple regions may experience the inability to view embedded reports. Engineers are aware of the issue and are investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

HDInsight - East US

Starting at 20:29 UTC on 20 Sep 2016, customers using HDInsight in East US may experience cluster creation failures. Existing clusters in the region are not affected. Engineers have identified an underlying SQL Database issue as the preliminary root cause of the failures and are investigating steps for mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - Australia East

Starting at 19:10 UTC on 20 Sep, 2016 customers may experience issues using the following services in Australia East. STREAM ANALYTICS AND MEDIA SERVICES: Users may experience intermittent failures when performing Service Management operations (Create, Update, Delete, etc.). SQL DATABASES: Users may experience intermittent connection failures. WEB APPS: Users may experience high latency. LOGIC APPS: Users may experience failures creating, updating, running apps. HDINSIGHT: Users may experience failures creating new clusters. VISUAL STUDIO TEAM SERVICES: Users may experience failures when trying to access their services. API MANAGEMENT: Users may experience management and availability issues. AZURE SITE RECOVERY: Users may observe delays with VM replication from on-premises to Azure. Users may also experience delays with Azure Site Recovery operations from the portal. IOT HUB: Users may experience issues provisioning new resources and other service management operations. These issues are related to an ongoing Networking issue. Engineers are applying mitigation options. The next update will be provided as events warrant.

Last Update: A few months ago

Network Infrastructure - Australia East

Starting at 19:10 UTC on 20 Sept 2016, our engineers have identified a network issue affecting connectivity to some Azure Services in the Australia East region. Some customers may experience intermittent connectivity issues to their services, or experience errors when performing Service Management operations. For details on impacted services, please refer to the secondary impact post. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple Services - Australia East

Starting at 19:10 UTC on 20 Sep, 2016 customers may experience issues using the following services in Australia East. STREAM ANALYTICS AND MEDIA SERVICES: Users may experience intermittent failures when performing Service Management operations (Create, Update, Delete, etc.). SQL DATABASES: Users may experience intermittent connection failures. WEB APPS: Users may experience high latency. LOGIC APPS: Users may experience failures creating, updating, running apps. HDINSIGHT: Users may experience failures creating new clusters. VISUAL STUDIO TEAM SERVICES: Users may experience failures when trying to access their services. API MANAGEMENT: Users may experience management and availability issues. AZURE SITE RECOVERY: Users may observe delays with VM replication from on-premises to Azure. Users may also experience delays with Azure Site Recovery operations from the portal. These issues are related to an ongoing Networking issue. Engineers are applying mitigation options. The next update will be provided as events warrant.

Last Update: A few months ago

Network Infrastructure - Australia East

Starting at 19:10 UTC on 20 Sept 2016, our engineers have identified a network issue affecting connectivity to some Azure Services in the Australia East region. Some customers may experience intermittent connectivity issues to their services, or experience errors when performing Service Management operations. For details on impacted services, please refer to the secondary impact post.

Last Update: A few months ago

Multiple Services - Australia East

Starting at 19:10 UTC on 20 Sep, 2016 customers may experience issues using the following services in Australia East. STREAM ANALYTICS AND MEDIA SERVICES: Users may experience intermittent failures when performing Service Management operations (Create, Update, Delete, etc.). SQL DATABASES: Users may experience intermittent connection failures. WEB APPS: Users may experience high latency. LOGIC APPS: Users may experience failures creating, updating, running apps. HDINSIGHT: Users may experience failures creating new clusters. VISUAL STUDIO TEAM SERVICES: Users may experience failures when trying to access their services. These issues are related to an ongoing Networking issue. Engineers are applying mitigation options. The next update will be provided as events warrant.

Last Update: A few months ago

Network Infrastructure - Australia East

Starting at 19:10 UTC on 20 Sept 2016, our engineers have identified a network issue affecting connectivity to some Azure Services in the Australia East region. Some customers may experience intermittent connectivity issues to their services, or experience errors when performing Service Management operations. For details on impacted services and customer experiences, please refer to the secondary impact post. More information will be provided as events warrant.

Last Update: A few months ago

Multiple Services - Australia East

Starting at 19:10 UTC on 20 Sep, 2016 customers may experience issues using the following services in Australia East. STREAM ANALYTICS AND MEDIA SERVICES: Users may experience intermittent failures when performing Service Management operations (Create, Update, Delete, etc.). SQL DATABASES: Users may experience intermittent connection failures. WEB APPS: Users may experience high latency. LOGIC APPS: Users may experience failures creating, updating, running apps. VISUAL STUDIO TEAM SERVICES: Users may experience failures when trying to access their services. These issues are related to an ongoing Networking issue. Engineers are applying mitigation options. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple Services - Australia East

Starting at 19:10 UTC on 20 Sep, 2016 customers may experience issues using the following services in Australia East. STREAM ANALYTICS AND MEDIA SERVICES: Users may experience intermittent failures when performing Service Management operations (Create, Update, Delete, etc.). SQL DATABASES: Users may experience intermittent connection failures. WEB APPS: Users may experience high latency. MANAGEMENT PORTAL: Users may be unable to sign in. LOGIC APPS: Users may experience failures creating, updating, running apps. VISUAL STUDIO TEAM SERVICES: Users may experience failures when trying to access their services. These issues are related to an ongoing Networking issue. Engineers are applying mitigation options. The next update will be provided as events warrant.

Last Update: A few months ago

Network Infrastructure - Australia East

Starting at 19:10 UTC on 20 Sept 2016, our engineers have identified a network issue affecting connectivity to some Azure Services in the Australia East region. Some customers may experience intermittent connectivity issues to their services, be intermittently unable to log in to the Azure Portal, or experience errors when performing Service Management operations. For details on impacted services and customer experiences, please refer to the secondary impact post. More information will be provided as events warrant.

Last Update: A few months ago

Multiple Services - Australia East

Starting at 19:10 UTC on 20 Sep, 2016 customers may experience issues using the following services in Australia East. STREAM ANALYTICS AND MEDIA SERVICES: Users may experience intermittent failures when performing Service Management operations (Create, Update, Delete, etc.). SQL DATABASES: Users may experience intermittent connection failures. WEB APPS: Users may experience high latency. MANAGEMENT PORTAL: Users may be unable to sign in. VISUAL STUDIO TEAM SERVICES: Users may experience failures when trying to access their services. These issues are related to an ongoing Networking issue. Engineers are applying mitigation options. The next update will be provided as events warrant.

Last Update: A few months ago

Network Infrastructure - Australia East

Our engineers are investigating a network issue affecting connectivity to some Azure Services in the Australia East region. Some customers may experience intermittent connectivity issues to their services, or errors when performing Service Management operations. For details on impacted services, please refer to the secondary impact post. More information will be provided as events warrant.

Last Update: A few months ago

Multiple Services - Australia East

Starting at 19:10 UTC on 20 Sep, 2016 customers using Stream Analytics and Media Services in Australia East may see intermittent failures when performing Service Management operations (Create, Update, Delete, etc.) to their services deployed to the region. Customers using Visual Studio Team Services will also see failures when trying to access their services deployed in the region. Engineers are investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - Australia East

Starting at 19:10 UTC on 20 Sep, 2016 customers using Stream Analytics and Media Services in Australia East may see intermittent failures when performing Service Management operations (Create, Update, Delete, etc.) to their services deployed to the region. Customers using Visual Studio Team Services will also see failures when trying to access their services deployed in the region. Engineers are investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Stream Analytics - Australia East

An alert for Stream Analytics in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Australia East

An alert for Visual Studio Team Services in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services -North Central US and South Central US

Summary of impact: Between 05:20 and 06:35 UTC on 19 Sep 2016, you were identified as a Visual Studio Team Services customer in North Central US or South Central US who may have experienced slowness and failures in queue times for your cloud load test. Preliminary root cause: The issue was caused by an underlying storage issue which impacted a Visual Studio backend server. Mitigation: Once the storage issue was mitigated, queue times returned to normal levels and the slowness was no longer present. Next steps: Engineers will continue to investigate the underlying root cause of this issue and develop a solution to prevent reoccurrences.

Last Update: A few months ago

SQL Database connection failures impacting multiple services in East US 2

Starting at 16:50 UTC on 12 Sep 2016, customers using SQL Database in East US 2 may not be able to connect to or make changes to their SQL Databases. Due to a dependency on SQL Database, other services may experience downstream impact. Engineers have applied mitigation and are now validating full recovery; customers should observe their connectivity being restored. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Media Services impact due to previous SQL Database - Central US

Starting at 11:28 UTC on 15 Sep 2016, due to a previous incident with Azure SQL Database in Central US, Media Services customers in the region may be experiencing residual impact. A subset of customers using Media Services in Central US may observe issues when attempting to view purchased content in their Video catalog. Engineers are in the process of validating recovery and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Impacted Services due to SQL Database - Central US

Starting at 11:28 on 15 Sep 2016, due to an ongoing incident with Azure SQL Database in Central US, customers may observe downstream impact to their Azure services that have a dependency on Azure SQL Database in this region. A subset of customers using Media Services may observe impact when attempting to view purchased content in their Video catalog. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database - Central US

Starting at 11:28 on 15 Sep 2016, a subset of customers using Azure SQL Database in the Central US region will experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Engineers have applied mitigation and customers will start to observe new connections succeeding. Engineers will continue to validate recovery. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database - Central US

Starting at 11:28 on 15 Sep 2016, a subset of customers using Azure SQL Database in the Central US region will experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Engineers have applied mitigation and customers will start to observe new connections succeeding. Engineers will continue to validate recovery. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database - Central US

Starting at 11:28 on 15 Sep 2016, a subset of customers using Azure SQL Database in the Central US region will experience connection failures to their SQL Database. Engineers have confirmed that existing connections remain unaffected and that only new connections may experience a timeout or an error. Engineers are investigating and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database - Central US

An alert for SQL Database in Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

SQL Database - Central US

We have validated that the issues experienced by customers using SQL Database in Central US are mitigated. Our Engineering teams are working to gather additional details on the preliminary root cause before this incident is resolved. An update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database - Central US

We have validated that the issues experienced by customers using SQL Database in Central US are mitigated. Our Engineering teams are working to gather additional details on the preliminary root cause before this incident is resolved. An update will be provided within 30 minutes.

Last Update: A few months ago

SQL Database - Central US

Starting at 11:48 UTC 15 Sep, 2016 a subset of customers using SQL Database in Central US may experience issues accessing their services in this region, as the result of an earlier DNS issue. Additional impact may also be experienced by customers using HDInsight and Media Services. Engineers have identified a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

SQL Database, App Service \ Web Apps, Visual Studio Team Services, API Management and Service Bus - Degraded availability- Multiple Regions

Starting at 11:48 on 15 Sep 2016, a subset of customers using SQL Database, App Service \ Web Apps, API Management, Service Bus and Visual Studio Team Services in Multiple Regions may experience degraded service availability due to an ongoing networking incident. Engineers are actively reviewing mitigation options and the next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database, App Service \ Web Apps, Visual Studio Team Services, API Management and Service Bus - Degraded availability- Multiple Regions

Starting at 11:48 on 15 Sep 2016, a subset of customers using SQL Database, App Service \ Web Apps, API Management, Service Bus and Visual Studio Team Services in Multiple Regions may experience degraded service availability due to an ongoing networking incident. Engineers are actively reviewing mitigation options and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database, App Service \ Web Apps, Visual Studio Team Services, API Management and Service Bus - Degraded availability- Multiple Regions

Starting at 11:48 on 15 Sep 2016, a subset of customers using SQL Database, App Service \ Web Apps, API Management and Service Bus in Australia Southeast, Central US and West Europe and Visual Studio Team Services in West Europe may experience degraded service availability due to an ongoing networking incident. Engineers are actively reviewing mitigation options and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

DNS - Multi-Region

Starting at 11:48 UTC 15 Sep, 2016 a subset of customers using DNS in multiple regions may experience difficulties connecting to their resources hosted in these regions. This issue is also having knock-on impact on multiple Azure services, including SQL Database, Virtual Machines, Visual Studio Team Services, Service Bus, API Management and App Service \ Web Apps. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

DNS - Multi-Region

Starting at 11:48 UTC 15 Sep, 2016 a subset of customers using DNS in multiple regions may experience difficulties connecting to their resources hosted in these regions. This issue is also having knock-on impact on multiple Azure services, including SQL Database, Virtual Machines, Visual Studio Team Services, and App Service \ Web Apps. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

DNS - Multi-Region

An alert for DNS in Multi-Region is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Access Control Service - South Central US and West Europe

Starting at 00:23 UTC on 15 Sep, 2016 customers using Access Control Service in South Central US and West Europe may experience increased latency or 500 errors when attempting to use their services. Engineers have identified a root cause and are working towards mitigation. Customers in South Central US may begin to see signs of recovery. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Access Control Service - South Central US and West Europe

An alert for Access Control Service in South Central US and West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment/Build (Hosted) - West Europe

Starting at 20:26 UTC on 14 Sep 2016, customers using Visual Studio Team Services \ Build & Deployment/Build (Hosted) in West Europe may experience longer than usual build queue times. Customers are advised not to cancel and resubmit builds, as cancelling a job and resubmitting it will push the job to the back of the queue. More information at http://aka.ms/VSTSBlog. Engineers are investigating and the next update will be provided in 60 minutes.

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment/Build (Hosted) - West Europe

An alert for Visual Studio Team Services \ Build & Deployment/Build (Hosted) in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Services are operating as expected

Services are operating as expected

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment/Build (XAML) - North Central US

Starting at 20:25 UTC on 13 Sep 2016, customers using Visual Studio Team Services \ Build & Deployment/Build (XAML) in North Central US may experience longer than usual build queue times. Customers are advised not to cancel and resubmit builds, as cancelling a job and resubmitting it will push the job to the back of the queue. More information at http://aka.ms/VSTSBlog. Engineers are investigating and the next update will be provided in 60 minutes.

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment/Build (XAML) - North Central US

An alert for Visual Studio Team Services \ Build & Deployment/Build (XAML) in North Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

SQL Database connection failures impacting multiple services in East US 2

Starting at 16:50 UTC on 12 Sep 2016, customers using SQL Database in East US 2 may not be able to connect to or make changes to their SQL Databases. Due to a dependency on SQL Database, other services may experience downstream impact. Engineers have applied mitigation and are now validating full recovery; customers should observe their connectivity being restored. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Impacted Services with a dependency on SQL Database in East US 2

Starting at 16:50 UTC on 12 Sep 2016, due to an incident relating to SQL Database in East US 2, additional Azure services may be experiencing downstream impact. Customers using HDInsight, Service Bus and Web App in East US 2 may experience intermittent connectivity to their service. Data Factory customers in East US may experience intermittent connection failures. In addition, customers may be unable to perform any service management operations hosted in this region. Engineers have applied mitigation and customers should be seeing recovery. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Impacted Services with a dependency on SQL Database in East US 2

Starting at 16:50 UTC on 12 Sep 2016, due to an incident relating to SQL Database in East US 2, additional Azure services may be experiencing downstream impact. Customers using HDInsight, Service Bus and Web App in East US 2 may experience intermittent connectivity to their service. Data Factory customers in East US may experience intermittent connection failures. In addition, customers may be unable to perform any service management operations hosted in this region. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Impacted Services with a dependency on SQL Database in East US 2

Starting at 16:50 UTC on 12 Sep 2016, due to an incident relating to SQL Database in East US 2, additional Azure services may be experiencing downstream impact. Customers using HDInsight or Service Bus in East US 2 may experience intermittent connectivity to their services; in addition, customers may be unable to perform any service management operations hosted in this region. Web App customers may experience intermittent API failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Impacted Services with a dependency on SQL Database in East US 2

Starting at 16:50 UTC on 12 Sep 2016, due to an incident relating to SQL Database in East US 2, additional Azure services may be experiencing downstream impact. Customers using HDInsight, Data Factory or Service Bus in East US 2 may experience intermittent connectivity to their services; in addition, customers may be unable to perform any service management operations hosted in this region. Web App customers may experience intermittent API failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database connection failures impacting multiple services in East US 2

Starting at 16:50 UTC on 12 Sep 2016, customers using SQL Database East US 2 may not be able to connect to or make changes to their SQL Databases. Due to a dependency on SQL Database other services may experience downstream impact. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Impacted Services with a dependency on SQL Database in East US 2

Starting at 16:50 UTC on 12 Sep 2016, due to an incident relating to SQL Database, other Azure services may be experiencing downstream impact. Customers using HDInsight or Service Bus in East US 2 may be unable to connect to or perform any service management operations on their services hosted in this region. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Impacted Services with a dependency on SQL Database in East US 2

Starting at 16:50 UTC on 12 Sep 2016, customers using SQL Database in East US 2 may not be able to connect to or make changes to their SQL Databases. Due to a dependency on SQL Database, other services may experience downstream impact. Impacted services: HDInsight and Service Bus.

Last Update: A few months ago

SQL Database - East US 2

Starting at 16:50 UTC on 12 Sep 2016, customers using SQL Database or HDInsight in East US 2 may not be able to connect to their SQL Database or HDInsight services. Service management operations such as create will also be affected. Engineers are currently investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

SQL Database - East US 2

An alert for SQL Database in East US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment/Build (XAML) - East US and South Central US

Starting at 15:00 UTC on 09 Sep, 2016, customers using Visual Studio Team Services \ Build & Deployment/Build (XAML) in South Central US and East US will experience longer than usual build queue times. More information at http://aka.ms/VSTSBlog. Engineers are manually rebooting backend processes in order to mitigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment/Build (XAML) - East US and South Central US

Starting at 15:00 UTC on 09 Sep, 2016, customers using Visual Studio Team Services \ Build & Deployment/Build (XAML) in South Central US and East US will experience longer than usual build queue times. More information at http://aka.ms/VSTSBlog. Engineers are investigating for a potential underlying root cause and are implementing mitigation steps. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment/Build (XAML) - East US and South Central US

An alert for Visual Studio Team Services \ Build & Deployment/Build (XAML) in East US and South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - West Europe

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe and North Europe may have experienced connectivity issues when attempting to connect to their resources.  Multiple Services in the region were impacted. At this time: Visual Studio Team Services remain impacted. A subset of Visual Studio Team Services customers will be unable to view some of their Custom Extensions and Tasks once they are logged into their portal. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - West Europe

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe and North Europe may experience connectivity issues when attempting to connect to their resources.  Multiple Services in the region were impacted. At this time: Visual Studio Team Services remain impacted. Visual Studio Team Services customers will be unable to view Custom Extensions and Custom Tasks in their portal. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - West Europe

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe and North Europe may experience connectivity issues when attempting to connect to their resources. Multiple services in the region were impacted. At this time, Visual Studio Team Services remains impacted. Engineers have identified an underlying network issue and have made a configuration change to restore network health. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

We have validated that the issues experienced by customers using App Service \ Web Apps, Visual Studio Team Services, Virtual Machines, SQL Database, Service Bus, Redis Cache, Media Services, HDInsight, DocumentDB, Data Catalog, Cloud Services, Azure Search and Automation in North Europe and West Europe are mitigated. Our engineering teams are working to gather additional details on the preliminary root cause before this incident is resolved. An update will be provided within 30 minutes.

Last Update: A few months ago

Service Bus - West India

Engineers have confirmed that Service Bus in West India was not impacted by the service outage that began at 15:42 UTC on 09 Sep 2016. We have concluded our investigation and confirmed that all services are healthy and that a service incident did not occur.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe and North Europe may experience connectivity issues when attempting to connect to their resources. Multiple services in the region are impacted. The following services are showing as recovered: SQL Database, Azure Automation, Azure Data Factory, Service Bus, and Event Hubs. The following services are still impacted at this time: Visual Studio Team Services, App Service \ Web Apps, Virtual Machines, Cloud Services, HDInsight, Redis Cache, Azure Search, Log Analytics, and DocumentDB. Engineers have identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe and North Europe may experience connectivity issues when attempting to connect to their resources. Multiple services in the region are impacted. The following services are showing as recovered: SQL Database, Azure Automation, Azure Data Factory, Service Bus, and Event Hubs. The following services are still impacted at this time: Visual Studio Team Services, App Service \ Web Apps, Virtual Machines, Cloud Services, HDInsight, Redis Cache, Azure Search, Log Analytics, and DocumentDB. Engineers have identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe may experience connectivity issues when attempting to connect to their resources. Impacted services include Visual Studio Team Services, App Service \ Web Apps, SQL Database, Virtual Machines, Cloud Services, HDInsight, Redis Cache, Service Bus, Azure Search, and DocumentDB. Engineers have identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe may experience connectivity issues when attempting to connect to their resources. Impacted services include Visual Studio Team Services, App Service \ Web Apps, SQL Database, Virtual Machines, Cloud Services, HDInsight, Redis Cache, Service Bus, Azure Search, and DocumentDB. Engineers identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers using Visual Studio Team Services, App Service \ Web Apps, and SQL Database in West Europe, as well as Virtual Machines, Cloud Services, HDInsight, Redis Cache, Service Bus, Azure Search, DocumentDB, and Data Factory, may experience degraded availability when accessing their resources. Engineers identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers using Visual Studio Team Services, App Service \ Web Apps, and SQL Database in West Europe, as well as Virtual Machines, Cloud Services, HDInsight, Redis Cache, Service Bus, Azure Search, and DocumentDB in North Europe and West Europe, will experience degraded availability when accessing their resources. Engineers are currently investigating and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

An alert for Visual Studio Team Services, Virtual Machines, Cloud Services, App Service \ Web Apps and SQL Database in West Europe and North Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Emerging issue under investigation

Engineers are investigating alerts in East US 2, North Europe and West Europe. Additional information will be provided shortly.

Last Update: A few months ago

Site Recovery - North Europe and West Europe

Starting as early as 15:03 UTC on 07 Sep 2016, customers in West Europe and North Europe may experience intermittent issues loading Site Recovery tiles in the Azure Management Portal (portal.azure.com), resulting in an error tile showing a cloud with a raindrop. Engineers have identified a potential underlying root cause and are currently taking steps to mitigate the issue. Some customers are now reporting success with loading the tiles. As a workaround, customers may also access Site Recovery programmatically through command-line or REST API interfaces. The next update will be provided in 3 hours or as events warrant.
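
For the programmatic workaround mentioned above, a minimal sketch of listing Recovery Services vaults through the Azure Resource Manager REST API is shown below. It assumes a valid ARM bearer token is already available (obtaining one is out of scope here); the environment variable names and the api-version value are assumptions for illustration rather than details from this notice.

    # Hedged sketch: list Recovery Services vaults via the ARM REST API as a
    # workaround when the portal tiles fail to load. The environment variables
    # and the api-version value are assumptions for illustration only.
    import os
    import requests

    subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # hypothetical env var
    arm_token = os.environ["AZURE_ARM_TOKEN"]              # bearer token obtained separately

    url = (
        "https://management.azure.com/subscriptions/"
        f"{subscription_id}/providers/Microsoft.RecoveryServices/vaults"
    )
    response = requests.get(
        url,
        params={"api-version": "2016-06-01"},  # assumed api-version; verify against current docs
        headers={"Authorization": f"Bearer {arm_token}"},
    )
    response.raise_for_status()
    for vault in response.json().get("value", []):
        print(vault["name"], vault["location"])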

Last Update: A few months ago

Site Recovery - North Europe and West Europe

Starting as early as 21:30 UTC on 07 Sep 2016, customers in West Europe and North Europe may experience intermittent issues loading Site Recovery tiles in the Azure Management Portal (portal.azure.com), resulting in an error tile showing a cloud with a raindrop. Engineers have identified a potential underlying root cause and are currently investigating a path to mitigation. Some customers are now reporting success with loading the tiles. As a workaround, customers may also access Site Recovery programmatically through command-line or REST API interfaces. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Site Recovery - North Europe and West Europe

Starting as early as 21:30 UTC on 07 Sep 2016, customers in West Europe and North Europe may experience intermittent issues loading Site Recovery tiles in the Azure Resource Management Portal, resulting in an error tile showing a cloud with a raindrop. Engineers are currently investigating mitigation steps. As a workaround, customers may access Site Recovery programmatically through command-line or REST API interfaces. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Site Recovery - North Europe and West Europe

Starting as early as 21:30 UTC on 07 Sep 2016, customers in West Europe and North Europe may see an error tile showing a cloud with a raindrop in the Azure Resource Management Portal when attempting to access their Site Recovery tiles. Engineers are investigating the underlying root cause and a path to mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Site Recovery - North Europe and West Europe

An alert for Site Recovery in North Europe and West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at approximately 09:00 UTC on 07 Sep 2016, customers may experience higher than expected latency affecting Virtual Machine performance in West Europe. Engineers have identified a potential root cause and are evaluating mitigation options. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at approximately 09:00 UTC on 07 Sep 2016, customers may experience higher than expected latency affecting Virtual Machine performance in West Europe. Engineers are investigating the possible root cause and engaging multiple component teams to drive mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

Starting at approximately 09:00 UTC on 07 Sep 2016, customers may experience higher than expected latency affecting Virtual Machine performance in West Europe. Engineers are investigating a potential root cause. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines - West Europe

An alert for Virtual Machines in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

Starting at 19:50 UTC on 6 Sep 2016, customers using Visual Studio Team Services in Multiple Regions may experience HTTP 500 errors when attempting to access their accounts. Engineers are currently engaged and investigating the issue. The next update will be provided in 60 minutes. More information can be found at aka.ms/vstsblog.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

An alert for Visual Studio Team Services in Multiple Regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - West Europe

Starting at 07:48 UTC on 05 Sept 2016, a subset of customers using Visual Studio Team Services in West Europe may experience degraded performance while navigating their Visual Studio Team Services accounts. Engineers are actively investigating this issue and working towards mitigation. More details can be found on http://aka.ms/VSTSBlog. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - West Europe

Starting at 07:48 UTC on 05 Sep 2016, a subset of customers using Visual Studio Team Services in West Europe may experience degraded performance while navigating their Visual Studio Team Services accounts. Engineers are currently investigating. More details can be found on http://aka.ms/VSTSBlog. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

Starting at 11:41 UTC on 1 Sep 2016, customers using Visual Studio Team Services in Multiple Regions may experience timeout errors or failures in their Hosted Build requests. Engineers have identified a recent deployment that caused a backend resource to enter an unhealthy state as a potential root cause. Engineers are still applying mitigation and confirming whether the patches applied have resolved the issue. At this time, newly submitted builds may not see timeout errors or latency. Engineers recommend not resubmitting any current builds, as this will move them to the back of the queue. More information is available at http://aka.ms/VSTSBlog. The next update will be provided in 2 hours or as events warrant.
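
Because resubmitting a build sends it to the back of the queue, a build that is already queued can instead be polled for progress. The sketch below uses what is assumed to be the public VSTS Build REST API; the account name, project, build id, api-version, and personal access token are all placeholders, and the endpoint shape is an assumption rather than something taken from this notice.

    # Hedged sketch: poll the status of a build that is already queued rather
    # than resubmitting it. Account, project, build id, api-version, and the
    # personal access token (PAT) are placeholders/assumptions for illustration.
    import os
    import time
    import requests

    account = "your-account"      # hypothetical VSTS account name
    project = "your-project"      # hypothetical team project
    build_id = 12345              # id of the build already in the queue
    pat = os.environ["VSTS_PAT"]  # personal access token (assumed env var)

    url = (
        f"https://{account}.visualstudio.com/DefaultCollection/{project}"
        f"/_apis/build/builds/{build_id}"
    )

    while True:
        response = requests.get(url, params={"api-version": "2.0"}, auth=("", pat))
        response.raise_for_status()
        build = response.json()
        print(build.get("status"), build.get("result"))
        if build.get("status") == "completed":
            break
        time.sleep(60)  # check once a minute instead of resubmitting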

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

Starting at 11:41 UTC on 1 Sep 2016, customers using Visual Studio Team Services in Multiple Regions may experience timeout errors or failures in their Hosted Build requests. Engineers have identified a recent deployment that caused a backend resource to enter an unhealthy state as a potential root cause. Engineers are applying software patches for mitigation. As mitigation is applied, newly submitted builds may not see timeout errors or latency. Engineers recommend not resubmitting builds, as this will move them to the back of the queue. More information is available at http://aka.ms/VSTSBlog. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

Starting at 11:41 UTC on 1 Sep 2016, customers using Visual Studio Team Services in Multiple Regions may experience timeout errors or failures in their Hosted Build requests. Engineers have discovered a potential root cause and are working towards mitigation. More information is available at http://aka.ms/VSTSBlog. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

Starting at 11:41 UTC on 1 Sep 2016, customers using Visual Studio Team Services in Multiple Regions may experience timeout errors or failures in their Hosted Build requests. Engineers are currently investigating to determine a root cause. More information is available at http://aka.ms/VSTSBlog. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

An alert for Visual Studio Team Services in Multiple Regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

SUMMARY OF IMPACT: Between 20:09 UTC on 31 Aug 2016 and 02:21 UTC on 01 Sep 2016, customers using Visual Studio Team Services in Multiple Regions may have experienced timeout errors or failures in their Hosted Build requests. PRELIMINARY ROOT CAUSE: An updated agent in Hosted Build was failing to start, causing builds to get stuck in the queue. MITIGATION: Engineers implemented a software configuration update to mitigate the issue. NEXT STEPS: Engineers will continue to monitor the situation for stability. Further steps will be taken to prevent future recurrences.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

Starting at 20:09 UTC on 31 Aug 2016, customers using Visual Studio Team Services in Multiple Regions may experience timeout errors or failures in their Hosted Build requests. Engineers are continuing to investigate the preliminary root cause in order to mitigate the issue. More information is available at http://aka.ms/VSTSBlog. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

Starting at 20:09 UTC on 31 Aug 2016, customers using Visual Studio Team Services in Multiple Regions may have experienced timeout errors or failures in their Hosted Build requests. Engineers are investigating further to determine a preliminary root cause and mitigate the issue. More information is available at http://aka.ms/VSTSBlog. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services - North Central US, South Central US and West Europe

An alert for Visual Studio Team Services in Multiple Regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Visual Studio Team Services - Multiple Regions

Engineers are aware of a recent issue for Visual Studio Team Services in Multiple Regions which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services \ Build & Deployment/Build (XAML) - Multiple Regions

An alert for Visual Studio Team Services \ Build & Deployment/Build (XAML) in Multiple Regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Data Lake Analytics and Data Lake Store - East US 2

Starting as early as 17:24 UTC on 31 Aug 2016, customers in East US 2 may experience service management (create, delete, etc.) and connectivity issues while using their Data Lake Analytics and Data Lake Store services. Engineers are currently investigating the preliminary root cause of this issue in order to mitigate it. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Data Lake Analytics and Data Lake Store - East US 2

An alert for Data Lake Analytics and Data Lake Store in East US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

DocumentDB - West US

Engineers are aware of a recent issue for DocumentDB in West US which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Application Insights - Multiple Regions

Starting at 17:50 UTC on 29 Aug 2016, customers using Visual Studio Application Insights in multiple regions may be unable to query analytics search data or may experience failures when retrieving Application Insights data in the classic and new Azure Management Portals. Engineers are aware of this issue and are actively investigating. For more information, navigate to the Application Insights Service Blog at http://aka.ms/AppInsBlog. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Visual Studio Application Insights - Multiple Regions

An alert for Visual Studio Application Insights in multiple regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

StorSimple - Multiple Regions

Starting at 16:49 UTC on 26 Aug 2016, customers using StorSimple in Multiple Regions may experience latency when attempting to load their usage monitoring graphs. Customers registering new devices into StorSimple will also experience difficulty viewing usage monitoring graphs for these devices. Engineers have performed mitigation steps and are validating recovery. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

StorSimple - Multiple Regions

Starting at 16:49 UTC on 26 Aug 2016, customers using StorSimple in Multiple Regions may experience latency when attempting to load their usage monitoring graphs. Customers registering new devices into StorSimple will also experience difficulty viewing usage monitoring graphs for these devices. Engineers are currently performing mitigation steps. The next update will be provided in 2 hours.

Last Update: A few months ago

StorSimple - Multiple Regions

Starting at 16:49 UTC on 26 Aug 2016, customers using StorSimple in Multiple Regions may experience latency when attempting to load their usage monitoring graphs. Customers registering new devices into StorSimple will also experience difficulty viewing usage monitoring graphs for these devices. Engineers are further investigating a backend process and are continuing to work towards a path to mitigation. The next update will be provided in 60 minutes.

Last Update: A few months ago
