Windows Azure Event Hubs Status

Management Operations West Europe/North Europe - Investigating

Starting at 06:20 UTC on 23 Aug 2019, a subset of customers with resources hosted in West Europe and/or North Europe may receive failure notifications when performing service management operations - such as create, update, and delete - for resources. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

UK South and East US - Applying Mitigation

Starting at approximately 16:00 UTC on 29 Mar 2019, a subset of customers may experience difficulties connecting to Azure resources in the East US and UK South regions. Some customers may also experience failures when attempting service management operations. Current status: Engineers are actively implementing multiple mitigation strategies. The next update will be provided in 60 minutes, or as events warrant.

SQL Database - UK South and East US - Investigating

Starting at approximately 16:00 UTC on 29 Mar 2019, a subset of customers may experience difficulties connecting to Azure resources in the East US and UK South regions. Some customers may experience issues when attempting service management operations for App Service deployments in the UK South region. Current status: Engineers have identified an underlying SQL issue as the potential root cause and are pursuing multiple mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Storage - UK South - Investigating

Starting at 13:19 UTC on 10 Jan 2019, a subset of customers leveraging Storage in UK South may experience intermittent service availability issues. In addition, resources with dependencies on Storage may also experience downstream impact in the form of availability issues. Engineers have identified the underlying root cause as an unhealthy storage stamp and are working to determine the mitigation path. The next update will be provided in 60 minutes, or as events warrant.

Multiple Services - South Central US - Investigating

Starting at 09:29 UTC on 04 Sep 2018, customers in South Central US may experience difficulties connecting to resources hosted in this region. Engineers have isolated the preliminary root cause as an issue with cooling in one part of the data center, which caused a localized spike in temperature and has now been mitigated. Automated data center procedures to ensure data and hardware integrity went into effect when temperatures hit a specified threshold, and critical hardware entered a structured power-down process. Engineers are now in the process of restoring power to affected devices as part of the ongoing mitigation. Some services may also be experiencing intermittent authentication issues due to downstream Azure Active Directory impact, and engineers are separately working on mitigation options for this as well. The next update will be provided at 15:00 UTC or as events warrant.

Storage - North Central US

Customer Impact: Starting at 17:52 UTC on 31 Jul 2018, a subset of customers using Storage in North Central US may experience difficulties connecting to resources hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this, and impacted services will be listed on the Azure Status Health Dashboard. Current Status: Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Engineers are investigating a power event as a potential root cause and are seeing signs of recovery. Next Update: The next update will be published at 20:10 UTC on 31 Jul 2018.

Azure Service Management Failures - Resolved

SUMMARY OF IMPACT: Between 22:15 UTC on 26 Jun 2018 and 06:20 UTC on 27 Jun 2018, a subset of customers may have experienced timeouts or failures when attempting to perform service management operations on their API Management, App Service, Microsoft Stream, Media Services, Azure SQL Database, Azure Search, Azure Active Directory B2C, Azure IoT Hub, Azure Batch, Event Hubs, and Service Bus services in Azure. In addition, some customers may have experienced connection failures to the Azure Portal. Some services with a reliance on triggers from service management calls may have seen failures for running instances. PRELIMINARY ROOT CAUSE: Engineers identified a service management API code configuration that impacted background services. This was causing service management requests to fail for a subset of customers. MITIGATION: Engineers performed a rollback of the recent deployment task to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences.

Service availability issue in North Europe

Starting at approximately 17:44 UTC on 19 Jun 2018, a subset of customers using Virtual Machines, Storage, Key Vault, App Service, Site Recovery, Automation, Service Bus, Event Hubs, Data Factory, or Logic Apps in North Europe may experience connection failures when trying to access resources hosted in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Multiple Services - South Central US

Starting at 15:57 UTC on 13 Jun 2018, a subset of customers in South Central US may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage availability issue which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this. These services may include: Virtual Machines, App Service, Visual Studio Team Services, Logic Apps, Azure Backup, Application Insights, Service Bus, Event Hub, and Site Recovery. The next update will be provided in 60 minutes, or as events warrant.

Multiple Azure Services impacted in West Europe

Engineers are investigating an emerging issue impacting Storage and Virtual Machines in the West Europe region. In addition, Azure services that have a dependency on Storage may experience issues connecting to their resources. Confirmed impacted services are: Storage, Virtual Machines, SQL Database, Backup, Azure Site Recovery, Service Bus, Event Hub, App Service, Logic Apps, and Automation.

Multiple Azure Services impacted in West Europe

Engineers are investigating an emerging issue impacting Storage and Virtual Machines in the West Europe region. In addition, Azure services that have a dependency on Storage may experience issues connecting to their resources. Confirmed impacted services are: Storage, Virtual Machines, SQL Database, Backup, Azure Site Recovery, Service Bus, Event Hub, App Service, and Logic Apps.

Service Bus - Australia East

Starting at 04:00 UTC on 13 Dec 2017, a limited subset of customers using Service Bus and Event Hubs in Australia East may experience intermittent issues when connecting to resources from the Azure Management Portal or programmatically. Services offered within Service Bus, including Azure Service Bus Queues and Service Bus Topics, may also be affected. Engineers continue to apply mitigation. The next update will be provided in 60 minutes, or as events warrant.

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017, a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are also aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers continue investigating possible underlying causes, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Network Infrastructure - North Central US and South Central US

Starting at approximately 19:34 UTC on 06 Nov 2017, a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are currently investigating previous updates and deployments to the region along with other possible network-level issues, and are taking additional steps to mitigate impact. The next update will be provided in 60 minutes, or as events warrant.

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017, a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017, a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops, or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to, App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, Azure Search, Backup, Azure Scheduler, HDInsight, Virtual Machines, Redis Cache, Logic Apps, Azure Analysis Services, and Azure Resource Manager. Engineers have verified that a majority of impacted services are mitigated and are conducting final validation steps. The next update will be provided in 60 minutes, or as events warrant.

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops, or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to, App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, Azure Search, Backup, Azure Scheduler, HDInsight, Virtual Machines, and Redis Cache. Mitigation has been applied and our monitoring system has started showing recovery. Engineers continue to validate. The next update will be provided in 60 minutes, or as events warrant.

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops, or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to, App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, and Azure Search. Multiple engineering teams are engaged in parallel workflows to mitigate the impact. The next update will be provided in 60 minutes, or as events warrant.

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps, Application Insights, Azure Search (seeing signs of recovery), Backup, Event Hubs (seeing signs of recovery), Log Analytics, Redis Cache (seeing signs of recovery), Stream Analytics, SQL Database, and Virtual Machines.

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps, Application Insights, Azure Search (seeing signs of recovery), Backup, Event Hubs, Redis Cache (seeing signs of recovery), Stream Analytics, SQL Database, and Virtual Machines.

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps, Application Insights, Backup, Event Hubs, Redis Cache, Stream Analytics, SQL Database, and Virtual Machines.

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact: Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts, or see high latency when accessing Web Apps deployments. Virtual Machines customers may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent timeout notifications when accessing their resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Backup customers may experience difficulties connecting to resources. Event Hubs customers may experience difficulties connecting to resources hosted in this region.

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact: Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts, or see high latency when accessing Web Apps deployments. Virtual Machines customers may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent timeout notifications when accessing their resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Backup customers may experience difficulties connecting to resources. Event Hubs customers may experience difficulties connecting to resources hosted in this region.

Multiple services | Japan East

SUMMARY OF IMPACT: Between 13:50 and 19:50 UTC on 31 Mar 2017, a subset of customers in Japan East experienced Virtual Machine reboots, degraded performance, or connection failures when accessing their Azure resources hosted in the Japan West region. Engineers have confirmed the following services are healthy: Redis Cache, Service Bus, Azure SQL Database, Event Hubs, Automation, Stream Analytics, DocumentDB, Data Factory / Data Movement, Azure Monitor, Media Services, Logic Apps, Azure IoT Hub, API Management, Azure Resource Manager, and Azure Machine Learning. NEXT STEPS: A detailed root cause analysis report will be provided in approximately 72 hours.

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, Storsimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, Media Services, API Management, Logic Apps, Redis Cache, Azure IoT Hub, Azure Monitor, and Azure Automation. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, Storsimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, Media Services, API Management, Logic Apps, Redis Cache, Azure IoT Hub, and Azure Monitor. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, Storsimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, Media Services, API Management, Logic Apps, Redis Cache, and Azure Monitor. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, Storsimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, and Azure Monitor. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, Storsimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, and HDInsight. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\WebApps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, and HDInsight. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Multiple services impacted due to Storage - East US

Between 21:50 UTC on 15 Mar 2017 and 04:54 UTC on 16 Mar 2017, due to an incident in East US affecting Storage, customers using Storage and services depending on Storage may have experienced difficulties accessing their resources in the region. Engineers have confirmed that Azure Logic Apps and Azure SQL Database have now recovered. PRELIMINARY ROOT CAUSE: Engineers identified one Storage cluster that lost power and became unavailable. NEXT STEPS: A full detailed Root Cause Analysis is currently being conducted and will be published in approximately 72 hours.

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 60 minutes or as any new information is made available.

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 2 hours or as any new information is made available.

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database, and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database, and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database, and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers using the following services may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers using the following services may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers using the following services may experience failures provisioning or connecting to Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying Storage incident, other Azure services that leverage Storage may also be experiencing impact. Virtual Machines or Cloud Services customers may experience failures when attempting to provision resources. Azure Search customers may be unable to create, scale, or delete services. Azure Monitor customers may be unable to turn on diagnostic settings for resources. Azure Site Recovery customers may experience replication failures. API Management 'service activation' in South India will experience a failure. Azure Batch customers will be unable to provision new resources. All existing Azure Batch pools will schedule tasks as normal. EventHub customers using a service called 'Archive' may experience failures. Customers using Visual Studio Team Services Build will experience failures. The Azure Portal may be unable to perform storage account management operations and may be unable to deploy new accounts. The next update will be provided in 60 minutes or as events warrant.

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, Cloud Services, Storage, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and Azure DevTest Labs, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, engineers have finished a structured restart of the scale units involved, and they are working to restore the remaining impacted services. A majority of customers will see signs of recovery for their services; the following services have confirmed mitigation: SQL Database, App Service \ Web Apps, Azure IoT Hub, Service Bus, Event Hub, DocumentDB, Azure Scheduler, and Logic Apps. The next update will be provided in 60 minutes or as soon as new information is available.

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, DocumentDB, Azure DevTest Labs, Service Bus, and Event Hub, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services; the following services have confirmed mitigation: Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as events warrant.

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services. The next update will be provided in 60 minutes or as events warrant.

[Extended Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Suite (only in East Asia): Fully recovered.
Media Services: Customers may experience timeouts.
HDInsight: Fully recovered.
Site Recovery: Fully recovered.
API Management: Fully recovered.
SQL Database: Customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully recovered.
Event Hub: Fully recovered.
Stream Analytics: Fully recovered.
Managed Cache and Redis Cache: Fully recovered.
Azure Backup: Fully recovered.
The next update will be provided as soon as more information is available.
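
For readers following the SQL Database guidance in this update (and the older updates below), the geo-secondary failover it refers to is initiated from the secondary server itself. The snippet below is a minimal sketch of that step only, assuming a Python client with the pyodbc package and a SQL Server ODBC driver installed; the server name, database name, and credentials are placeholders, and the T-SQL statements shown are the planned and forced failover commands for active geo-replication.

    # Minimal sketch (all names are placeholders): promote the geo-secondary of an
    # Azure SQL Database by issuing a planned failover from the secondary server.
    # Requires the pyodbc package and a SQL Server ODBC driver.
    import pyodbc

    SECONDARY_SERVER = "example-secondary.database.windows.net"  # placeholder
    DATABASE_NAME = "exampledb"                                  # placeholder
    USERNAME = "sqladmin"                                        # placeholder
    PASSWORD = "<password>"                                      # placeholder

    # The failover statement is executed against the master database on the
    # geo-secondary server.
    conn_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER=tcp:{SECONDARY_SERVER},1433;"
        "DATABASE=master;"
        f"UID={USERNAME};PWD={PASSWORD};"
        "Encrypt=yes;"
    )

    # ALTER DATABASE cannot run inside a user transaction, so enable autocommit.
    conn = pyodbc.connect(conn_str, autocommit=True)
    cursor = conn.cursor()

    # Planned failover: waits for the secondary to synchronize, so no data loss.
    cursor.execute(f"ALTER DATABASE [{DATABASE_NAME}] FAILOVER;")

    # If the primary region is unreachable, the forced variant (which may lose the
    # most recent transactions) would be used instead:
    #   ALTER DATABASE [exampledb] FORCE_FAILOVER_ALLOW_DATA_LOSS;

    conn.close()

After the secondary is promoted, application connection strings generally need to be repointed at the new primary server, since active geo-replication on its own does not redirect connections.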

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Suite (only in East Asia): Fully recovered.
Media Services: Customers may experience timeouts.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Fully recovered.
API Management: Customers may experience service management operation errors via API calls or the Portal.
SQL Database: Customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully recovered.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Fully recovered.
Managed Cache and Redis Cache: Fully recovered.
Azure Backup: Fully recovered.
The next update will be provided as soon as more information is available.

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Suite (only in East Asia): Fully mitigated. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations.
Media Services: Validating recovery. Customers may experience timeouts.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Fully recovered at 20:00 UTC. Customers may have experienced operation failures.
API Management: Customers may experience service management operation errors via API calls or the Portal.
SQL Database: Customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully mitigated. Customers may have experienced issues accessing their resources in the region.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Customers may experience timeouts when accessing their resources.
Managed Cache and Redis Cache: Customers may be unable to access their services.
Azure Backup: Fully mitigated. A subset of Azure Backup users with vaults in East Asia may encounter operation failures.
The next update will be provided as soon as more information is available.

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Hub (only in East Asia): Fully mitigated. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations.
Media Services: Validating recovery. Customers may experience timeouts.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Recovered at 20:00 UTC. Customers may have experienced operation failures.
API Management: Customers may experience service management operation errors via API calls or the Portal.
SQL Database: Customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully mitigated. Customers may have experienced issues accessing their resources in the region.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Customers may experience timeouts when accessing their resources.
Managed Cache and Redis Cache: Customers may be unable to access their services.
Azure Backup: Fully mitigated. A subset of Azure Backup users with vaults in East Asia may encounter operation failures.
The next update will be provided as soon as more information is available.

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state, including Key Vault, Service Bus, Site Recovery, Azure Backup, and IoT Suite (only in East Asia). Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Hub (only in East Asia): Validating mitigation. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations.
Media Services: Validating recovery. Customers may experience timeouts.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Recovered at 20:00 UTC. Customers may have experienced operation failures.
API Management: Customers may experience service management operation errors via API calls or the Portal.
SQL Database: Customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully mitigated. Customers may have experienced issues accessing their resources in the region.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Customers may experience timeouts when accessing their resources.
Managed Cache and Redis Cache: Customers may be unable to access their services.
Azure Backup: A subset of Azure Backup users with vaults in East Asia may encounter operation failures.
The next update will be provided as soon as more information is available.

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit: IoT Suite (only in East Asia), Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, and Key Vault. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Suite customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Site Recovery, Service Bus and Key Vault have been recovered. The next update will be provided as soon as more information is available.

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit: IoT Hub (only in East Asia), Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, and Key Vault. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Site Recovery, Service Bus and Key Vault have been recovered. The next update will be provided as soon as more information is available.

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, Key Vault, and IoT Hub (only in the East Asia region). SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Service Bus and Key Vault have been recovered. The next update will be provided as soon as more information is available.

[Mitigation in progress] - Multiple services impacted by underlying Storage incident - East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, and IoT Hub. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided as soon as more information is available.

[Mitigation in progress] - Multiple services impacted by underlying Storage incident - East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, and IoT Hub. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided as soon as more information is available.

Service Bus & Event Hubs - Multiple Regions - Advisory

Starting at 21:09 UTC on 23 Jun 2016, a very small number of customers using Service Bus and Event Hubs in multiple regions may experience issues when attempting to create new namespaces. We are resolving this incident due to the limited nature of the remaining impact. Customers experiencing continued issues will receive updates via the Azure Management Portal (portal.azure.com).

Service Bus & Event Hubs - Multiple Regions - Advisory

Alerts for Service Bus and Event Hubs in Multiple Regions are being investigated. More information will be provided as it is known.

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on 09 Apr 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The Storage team, as of 23:45 UTC, has recovered, and all services except a limited subset of Web Apps have reported recovery as well. The next update will be provided in 2 hours or as events warrant.

IMPACTED SERVICES: Web Apps customers are showing signs of recovery, and a small subset of customers may still be experiencing 503 errors, connection failures, or long latencies when accessing websites. Engineers are continuing to mitigate the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApp, Service Bus, SQL Database, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on 09 Apr 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The Storage team, as of 23:45 UTC, is seeing recovery, and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Web Apps customers are showing signs of recovery, and a small subset of customers may still be experiencing 503 errors, connection failures, or long latencies when accessing websites. Engineers are continuing to mitigate the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApp, Service Bus, SQL Database, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on 09 Apr 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The Storage team, as of 23:45 UTC, is seeing recovery, and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the affected services. Web Apps customers are showing signs of recovery, and a small subset of customers may still be experiencing 503 errors, connection failures, or long latencies when accessing websites. Engineers are actively mitigating the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApp, Service Bus, SQL Database, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on 09 Apr 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The Storage team, as of 23:45 UTC, is seeing recovery, and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the affected services. Cloud Services and Virtual Machines customers are showing signs of recovery, and some customers may continue to experience errors attempting to connect to resources. Web Apps customers are showing signs of recovery, and a subset of customers may still be experiencing 503 errors, connection failures, or long latencies when accessing websites.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApp, Service Bus, SQL Database, Storage, Stream Analytics, Visual Studio Application Insights.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the Storage team is seeing recovery, and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors when attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApp, Service Bus, SQL Database, Storage, Stream Analytics, Visual Studio Application Insights.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the Storage team is seeing recovery, and other services are starting to see recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors when attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. RemoteApp customers will experience failures when trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Redis Cache, Service Bus, SQL Database, Storage, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the Storage team is seeing recovery, and other services are starting to see recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors when attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Managed Cache, Redis Cache, RemoteApp, Service Bus, SQL Database, Storage, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the Storage team is seeing recovery; once Storage is fully healthy, the impacted services below will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors when attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. RemoteApp customers will experience failures when trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics, SQL Database, Storage, HDInsight.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the Storage team is seeing recovery; once Storage is fully healthy, the impacted services below will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors when attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures when creating clusters, as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Database customers may experience failures when attempting to connect or log in to their databases. RemoteApp customers will experience failures when trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the Storage team is seeing recovery; once Storage is fully healthy, the impacted services below will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors when attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures when creating clusters, as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Database customers may experience failures when attempting to connect or log in to their databases. RemoteApp customers will experience failures when trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the Storage team is seeing recovery; once Storage is fully healthy, the impacted services below will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors when attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures when creating clusters, as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Database customers may experience failures when attempting to connect or log in to their databases. RemoteApp customers will experience failures when trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the Storage team is seeing recovery; once Storage is fully healthy, the impacted services below will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors when attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures when creating clusters, as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Database customers may experience failures when attempting to connect or log in to their databases. RemoteApp customers will experience failures when trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs.

Last Update: A few months ago

Network Infrastructure - Southeast Asia: Service Restoration

Starting at approximately 01:40 UTC on 02 Apr, 2016, customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Investigations indicate this to be due to a fiber cut on a third-party network provider's infrastructure. Engineers from the third-party network provider have repaired the issue. Azure services are starting to show restoration. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this to be due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.
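
When connectivity to a whole region is disrupted like this, applications that keep a standby deployment in another region can fail over at the client by probing per-region endpoints in preference order. The short Python sketch below illustrates that idea under the assumption that such a secondary deployment exists; both hostnames are hypothetical placeholders, not real Azure addresses.

import urllib.request

# Hypothetical per-region endpoints for the same application, primary first.
REGION_ENDPOINTS = [
    "https://myapp-southeastasia.example.net/health",
    "https://myapp-eastasia.example.net/health",
]

def first_healthy_endpoint(endpoints, timeout=5.0):
    """Return the first endpoint that answers its health probe with HTTP 200,
    so traffic can be pointed at a secondary region while the primary is down."""
    errors = []
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError as err:           # DNS failure, refused connection, timeout
            errors.append((url, err))
    raise RuntimeError("no healthy endpoint found: %r" % errors)

if __name__ == "__main__":
    print("routing traffic to", first_healthy_endpoint(REGION_ENDPOINTS))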

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this to be due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, Azure Search, Redis Cache, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this to be due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Our engineering teams have mitigated the underlying networking issue, and the majority of affected customers should now observe recovery. Please refer to the History page for the preliminary report on the networking incident. All other impacted Azure services have also reported their services restored, except App Service and HDInsight. Engineers continue to recover the residual affected VMs. Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hubs, Mobile App, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Our engineering teams have mitigated the underlying networking issue, and the majority of affected customers should now observe recovery. Engineers continue to recover the residual affected VMs. Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hubs, Mobile App, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hubs, Mobile App, and Data Catalog. Our engineering teams have mitigated the underlying networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hubs, and Data Catalog. Our engineering teams have mitigated the underlying networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, and Data Catalog. Our engineering teams have mitigated the underlying networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Event Hubs and Service Bus - East US - Advisory

Starting at approximately 19:45 UTC on 4 Mar 2016, engineers detected alerts in East US indicating that a subset of Event Hubs customers may experience unavailability when attempting to access their Event Hubs resources. In addition, a subset of Service Bus customers may also experience intermittent timeouts when accessing their Queues or Topics. The preliminary investigation shows this incident may impact around 30% of customers in this region. Engineers are actively investigating the issue now. The next update will be provided within 60 minutes or as events warrant.
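
For intermittent timeouts such as those described above, a small, bounded retry around the receive call usually keeps consumers healthy without masking a sustained outage. The Python sketch below shows the shape of that wrapper; receive_fn is a purely illustrative stand-in for whatever Service Bus or Event Hubs SDK call the application normally makes.

import time

def receive_with_retry(receive_fn, attempts=3, delay=2.0):
    """Call a message-receive function, retrying a bounded number of times when
    it times out; receive_fn is a placeholder for the real SDK call."""
    for attempt in range(1, attempts + 1):
        try:
            return receive_fn()
        except TimeoutError:
            if attempt == attempts:
                raise                      # surface the timeout after the last try
            time.sleep(delay * attempt)    # wait a little longer between tries

Keeping the retry budget small means a sustained outage still surfaces quickly instead of silently hanging the consumer.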

Last Update: A few months ago

Event Hubs - East US - Advisory

Starting at approximately 19:45 UTC on 4 Mar 2016, engineers detected alerts in East US indicating that a subset of Event Hubs customers may experience unavailability when attempting to access their Event Hubs resources. In addition, a subset of Service Bus customers may also experience intermittent timeouts when accessing their Queues or Topics. The preliminary investigation shows this incident may impact around 30% of customers in this region. Engineers are actively investigating the issue now, and the next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Event Hubs - East US - Advisory

Starting at approximately 19:45 UTC on 4 Mar 2016, engineers detected alerts in East US indicating that a subset of Event Hubs customers may experience unavailability when attempting to access their Event Hubs resources. The preliminary investigation shows the incident may impact around 30% of customers in this region. Engineers are actively investigating the issue now, and the next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Event Hubs - East US - Advisory

An alert for Event Hubs in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

HDInsight, Event Hubs, App Service \ Web App, Logic App, Media Services, Key Vault and Redis Cache - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC a subset of customers may be experiencing issues with HDInsight, Event Hubs, Media Services, App Service \ Logic App, App Service \ Web App, Key Vault and Redis Cache in Central US due to an on-going Storage issue. Engineers have deployed a mitigation for the Storage issue and customers may begin to see improvements. The next update will be provided in 60 minutes.

Last Update: A few months ago

Event Hubs, App Service \ Web App, Logic App, Media Services, Key Vault and Redis Cache - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC a subset of customers may be experiencing issues with Event Hubs, Media Services, App Service \ Logic App, App Service \ Web App, Key Vault and Redis Cache in Central US due to an on-going Storage issue. Engineers have deployed a mitigation for the Storage issue and customers may begin to see improvements. The next update will be provided in 60 minutes.

Last Update: A few months ago

Event Hubs, App Service \ Web App, Logic App, Media Services, Key Vault and Redis Cache - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC a subset of customers may be experiencing issues with Event Hubs, Media Services, App Service \ Logic App, App Service \ Web App, Key Vault and Redis Cache in Central US due to an on-going Storage issue. Engineers are currently working to mitigate the Storage issue. The next update will be provided in 60 minutes.

Last Update: A few months ago

App Service \ Web App, Logic App, Media Services, Key Vault and Redis Cache - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC a subset of customers may be experiencing issues with Media Services, App Service \ Logic App, App Service \ Web App, Key Vault and Redis Cache in Central US due to an on-going Storage issue. Engineers are currently working to mitigate the Storage issue. The next update will be provided in 60 minutes.

Last Update: A few months ago

Service Bus and Event Hubs - East US and West US - Advisory

An alert for Service Bus and Event Hubs in East US and West US is being investigated. A subset of customers may be impacted. More information will be provided as it is known.

Last Update: A few months ago

Additionally impacted services - West Europe

Multiple services in Europe are impacted by an ongoing Networking issue. Impacted services include: App Service \ Web App, API Management, Stream Analytics, Azure Search, Event Hubs, Service Bus, SQL Database, Operational Insights, Azure Active Directory B2C, Key Vault, Media Services, Data Catalog, Virtual Machines, Automation, Visual Studio Online, Managed Cache, Redis Cache, DocumentDB and RemoteApp. More information will be provided as it is known.

Last Update: A few months ago

Additionally impacted services - West Europe

Multiple services in Europe are impacted by an ongoing Networking issue. Impacted services include: SQL Database, API Management, Media Services, Azure Search, App Service \ Web App, Service Bus, Event Hubs, Azure Active Directory B2C, Operational Insights, Key Vault, Virtual Machines, Data Catalog and Stream Analytics. More information will be provided as it is known.

Last Update: A few months ago

Additionally impacted services - West and North Europe

Multiple services in West and North Europe are impacted by an ongoing Networking issue. Impacted services include: SQL Database, API Management, Media Services, Azure Search, App Service \ Web App, Service Bus, Event Hubs, Azure Active Directory B2C, Operational Insights, Key Vault, Virtual Machines, Data Catalog and Stream Analytics. More information will be provided as it is known.

Last Update: A few months ago

Additionally impacted services - West Europe

Multiple services in West Europe are impacted by an ongoing Networking issue. Impacted services include: SQL Database, API Management, Media Services, Azure Search, App Service \ Web App, Service Bus, Event Hubs, Azure Active Directory B2C, Operational Insights, Key Vault, Virtual Machines, Data Catalog and Stream Analytics. More information will be provided as it is known.

Last Update: A few months ago

Services Experiencing Residual Impact - East US 2 - Advisory

Starting at approximately 16:40 UTC on 11 Nov, 2015, customers using API Management, Event Hubs, Media Services (including Encoding & Live\On-Demand Streaming), Data Lake (Store and Analytics), HDInsight, Stream Analytics, Azure Active Directory, Operational Insights, App Services \ Web Apps, Compute Services (IaaS & PaaS), SQL Database, Service Bus, Visual Studio Online, VSO Build, Load Test and Application Insights in East US 2 may experience issues accessing services due to the related Network and Storage incident. Engineers are working to restore services at this time. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Services Experiencing Residual Impact - East US 2 - Advisory

Starting at approximately 16:40 UTC on 11 Nov, 2015, customers using API Management, Event Hubs, Media Services (including Encoding & Live\On-Demand Streaming), Data Lake (Store and Analytics), HDInsight, Stream Analytics, Azure Active Directory, Operational Insights, App Services \ Web Apps, Compute Services (IaaS & PaaS), SQL Database, Service Bus, Visual Studio Online, VSO Build, Load Test and Application Insights in East US 2 may experience issues accessing services due to the related Network and Storage incident. Engineers are working to restore services at this time. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Stream Analytics and Event Hubs - Multiple Regions - Partial Service Interruption

Starting at 10 Sept, 2015 12:22 UTC customers using Stream Analytics and Event Hubs in multiple regions will experience significant latency when starting jobs and accessing data. There is no available workaround at this time. Engineers are actively engaged and working to restore services. The next update will be provided in 60 minutes.

Last Update: A few months ago

Stream Analytics and Event Hubs - Multiple Regions - Advisory

An alert for Stream Analytics and Event Hubs in Multiple Regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Event Hubs - West US - Advisory

Engineers are engaging on an emerging issue where customers deployed in West US may be experiencing Event Hub creation failures on new Service Bus Namespaces. This is not impacting run time operations on active Event Hubs. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Event Hubs - West US - Advisory

Engineers are engaging on an emerging issue where customers deployed in West US may be experiencing Event Hub namespace creation failures. This is not impacting active Event Hub deployment run times. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Event Hubs - West US - Advisory

Engineers are engaging on an emerging issue where you may be experiencing Event Hub namespace creation failures. This is not impacting active Event Hub deployment run times. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Event Hubs - North Europe - Advisory

Starting approximately 21 July, 2015 18:30 UTC customers using Event Hubs in North Europe will experience an error when trying to create a new event hub. Existing event hubs are operational and accessible at this time. Workarounds are not available at this time. We have identified a potential root cause, and are working to restore service. The next update will be provided in 2 hours or as events warrant.
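
Because only the creation of new event hubs is failing here while existing hubs keep serving traffic, provisioning automation can simply space out and retry the management call. A minimal Python sketch of that pattern follows; create_fn is a hypothetical stand-in for whatever management SDK or REST call your tooling uses, not a specific Azure API.

import time

def create_with_retry(create_fn, attempts=4, delay=30.0):
    """Retry a management-plane create call a few times with generous spacing.
    create_fn is a placeholder; the concrete error type depends on the SDK used."""
    last_err = None
    for _ in range(attempts):
        try:
            return create_fn()
        except Exception as err:           # broad catch only because the SDK is unspecified
            last_err = err
            time.sleep(delay)
    raise RuntimeError("event hub creation still failing after retries") from last_err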

Last Update: A few months ago

Event Hubs - East US and North Europe - Advisory

From <DD MMM, YYYY HH:HH> to <HH:HH> UTC customers/a subset of customers using Event Hubs in North Europe and East US experienced <Customer Impact or Experience>. <Call to action>. This incident has now been mitigated.

Last Update: A few months ago

Event Hubs - East US and North Europe - Advisory

Starting approximately 21 July, 2015 18:30 UTC customers using Event Hubs in East US and North Europe will experience an error when trying to create a new event hub. Existing event hubs are operational and accessible at this time. Workarounds are not available at this time. Engineers are working to restore service at this time. The next update will be provided in 2 hours.

Last Update: A few months ago

Event Hubs - East US and North Europe - Advisory

Starting approximately 21 July, 2015 18:30 UTC customers using Event Hubs in East US and North Europe will experience an error when trying to create a new event hub. Existing event hubs are operational and accessible at this time. Workarounds are not available at this time. We have identified a potential root cause, and are working to restore service. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Event Hubs - East US and North Europe - Advisory

Starting approximately 21 July, 2015 18:30 UTC customers using Event Hubs in East US and North Europe will experience an error when trying to create a new event hub. Existing event hubs are operational and accessible at this time. No workarounds are available at this time. We are currently evaluating options to restore service. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Event Hubs - East US - Advisory

Starting approximately 21 July, 2015 18:30 UTC customers using Event Hubs in East US and North Europe will experience an error when trying to create a new event hub. Existing event hubs are operational and accessible at this time. No workarounds are available at this time. We are currently evaluating options to restore service. The next update will be provided in 2 hours.

Last Update: A few months ago

Event Hubs - East US - Advisory

Starting approximately 21 July, 2015 18:30 UTC customers using Event Hubs in East US will experience an error when trying to create a new event hub. Existing event hubs are operational and accessible at this time. No workarounds are available at this time. We are currently evaluating options to restore service. The next update will be provided in 2 hours.

Last Update: A few months ago

Event Hubs - East US - Advisory

Starting at 21 July, 2015 18:30 UTC customers using Event Hubs in East US will experience an error when trying to create a new event hub. Existing event hubs are operational and accessible at this time. There are currently no workarounds available. Engineers are actively investigating. The next update will be provided in 60 minutes.

Last Update: A few months ago

Event Hubs - East US - Advisory

An alert for Event Hubs in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Event Hubs - West US - Advisory

Starting at 23 Jun, 2015 22:30 UTC a subset of customers may experience timeouts when using Event Hubs in West US. Engineers have identified a potential root cause, and are currently exploring mitigation options. The next update will be provided within 2 hours, or as events warrant.

Last Update: A few months ago

Event Hubs - West US - Advisory

Starting at 23 Jun, 2015 22:30 UTC a subset of customers may experience timeouts when using Event Hubs in West US. Engineers are continuing to investigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Event Hubs - West US - Advisory

Starting at 23 Jun, 2015 22:30 UTC a subset of customers may experience timeouts when using Event Hubs in West US. Engineers are actively investigating the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Event Hubs - Central US - Advisory

Starting at 20 Apr, 2015 21:00 UTC a subset of customers using Event Hubs in Central US may experience intermittent connectivity issues when sending or receiving events. A refresh should reconnect the services. We have taken action to resolve this incident, and are now confirming that the incident is mitigated. The next update will be provided in 60 minutes.
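
The "refresh should reconnect the services" guidance above translates, in code, to discarding the broken connection and building a fresh client before retrying the operation. The Python sketch below illustrates that pattern; make_client and its send()/close() methods are hypothetical placeholders rather than a specific SDK surface.

def send_with_reconnect(make_client, payload, attempts=3):
    """Send one payload, rebuilding the client whenever a send fails.
    make_client is a placeholder factory returning an object with
    send() and close() methods."""
    last_err = None
    for _ in range(attempts):
        client = make_client()             # fresh connection for every attempt
        try:
            client.send(payload)
            return
        except (ConnectionError, TimeoutError) as err:
            last_err = err                 # drop the broken client and rebuild it
        finally:
            client.close()
    raise RuntimeError("send still failing after reconnect attempts") from last_err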

Last Update: A few months ago

Event Hubs - Central US - Advisory

Starting at 20 Apr, 2015 21:00 UTC, a subset of customers using Event Hubs in Central US are experiencing intermittent connectivity issues when sending or receiving events. A refresh should reconnect the services. Engineers are currently evaluating options to restore service. The next update will be provided in 60 minutes.

Last Update: A few months ago

Event Hubs - Central US - Advisory

Starting at 20 Apr, 2015 21:00 UTC a subset of customers using Event Hubs in Central US may experience intermittent connectivity issues when sending or receiving events. A refresh should reconnect the services. Engineers are actively working to resolve the incident. The next update will be provided in 60 minutes.

Last Update: A few months ago

Event Hubs - Central US - Advisory

An alert for Event Hubs in Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Service Bus and Event Hubs - North Europe - Advisory

Starting at 19 Apr, 2015 09:14 UTC a limited subset of customers using Service Bus and Event Hubs in North Europe may experience issues with completing Management Operations (Create, Update, Delete). Existing Service Bus and Event Hubs Queues and Topics are unaffected. Engineers are actively investigating this issue, and an update will be provided within 60 minutes.
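
Since only management operations (Create, Update, Delete) are affected while existing Queues and Topics keep working, one pragmatic client-side approach is to defer those operations and retry them later rather than blocking application traffic. A minimal Python sketch of that idea follows; the callables placed on the queue are hypothetical wrappers around whatever management calls your tooling uses.

import time
from collections import deque

# Deferred management operations, each a zero-argument callable wrapping a
# hypothetical create/update/delete call; data-plane traffic keeps flowing.
pending_ops = deque()

def flush_pending_ops(retry_interval=300.0):
    """Retry deferred management operations until they all succeed, suitable
    when only Create/Update/Delete calls are failing."""
    while pending_ops:
        op = pending_ops.popleft()
        try:
            op()
        except Exception:                  # the real error type depends on the SDK
            pending_ops.append(op)         # put it back and wait before retrying
            time.sleep(retry_interval)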

Last Update: A few months ago

Service Bus - North Europe - Advisory

Starting at 19 Apr, 2015 09:14 UTC a limited subset of customers using Service Bus in North Europe may experience issues with completing Management Operations (Create, Update, Delete). Existing Service Bus Queues & Topics are unaffected. Engineers are actively investigating this issue, and an update will be provided within 60 minutes.

Last Update: A few months ago

© 2019 - Cloudstatus built by jameskenny