CircleCI Status

Delays starting jobs

Aug 3, 08:28 UTC Resolved - This incident has been resolved.
Aug 3, 08:19 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Aug 3, 08:10 UTC Identified - The issue has been identified and we are scaling up to handle the backlog.
Aug 3, 07:59 UTC Investigating - We are currently investigating this issue.

Last Update: About 5 hours ago

Delays starting workflows

Jul 28, 17:02 UTC Resolved - Queues have returned to normal. Delayed workflows will start running automatically and require no intervention. If you have a workflow that continues to be delayed and that you believe is stuck, please contact Support at support@circleci.com.
Jul 28, 16:38 UTC Monitoring - We have completed scaling our systems. Our metrics indicate a return to normal queue times. We will continue to monitor the recovery of our systems.
Jul 28, 16:16 UTC Update - We are continuing to scale up our systems. There is an ongoing delay with starting workflows.
Jul 28, 15:50 UTC Update - We are in the process of scaling up to allow jobs to start running.
Jul 28, 15:36 UTC Identified - We have identified an issue with our queuing system that has caused workflows to be delayed. We are scaling up to compensate.
Jul 28, 15:28 UTC Investigating - We are currently investigating an issue where workflows are delayed in starting.

Last Update: About 5 days ago

We are investigating DNS errors that may prevent workflows and jobs from running.

Jul 28, 11:50 UTC Resolved - This incident is resolved and jobs should be running as expected.
Jul 28, 11:35 UTC Update - Jobs should no longer be impacted, but we are still monitoring to ensure full resolution. If you have any ongoing issues with running jobs, please submit a support ticket: https://support.circleci.com/hc/en-us/requests/new?ticket_form_id=855288
Jul 28, 11:05 UTC Monitoring - The underlying DNS issues appear to be recovering and we're monitoring the results. As a precaution, we've routed traffic away from the impacted zone to prevent any possible ongoing issues.
Jul 28, 10:38 UTC Identified - We've identified that the DNS issues are related to an incident happening in a specific AWS Availability Zone. More details can be found on their status page: https://status.aws.amazon.com/ We're working to route traffic around this zone.
Jul 28, 09:28 UTC Investigating - We're currently investigating DNS issues that may prevent jobs from running. We'll share a status update in 20 minutes.

Last Update: About 6 days ago

Workflows hanging between 16:15 and 16:22 UTC

Jul 27, 18:36 UTC Resolved - We have canceled all stuck workflows. All workflows running after 16:22 UTC should be processing as normal.
Jul 27, 18:28 UTC Update - We are currently monitoring and canceling any remaining hanging workflows. All workflows running after 16:22 UTC should be processing as normal.
Jul 27, 17:58 UTC Update - We are continuing to monitor workflows. For workflows with manual job approval, customers will need to re-approve the job. For workflows with jobs that failed, customers can re-run the workflow with the "re-run from failed" button in the UI.
Jul 27, 17:20 UTC Monitoring - Workflows created between 16:15 and 16:22 UTC experienced hanging; we are currently working on canceling these workflows. All workflows running after this time should be processing as normal.

Last Update: About 6 days ago
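
For reference, the "re-run from failed" button mentioned above also has an API counterpart. The sketch below is a hypothetical illustration using CircleCI's v2 workflow rerun endpoint; the workflow ID is a placeholder, and CIRCLE_TOKEN is assumed to hold a valid personal API token.

    # Hypothetical sketch: re-run only the failed jobs of a workflow,
    # mirroring the "re-run from failed" button in the UI.
    import os
    import requests

    workflow_id = "00000000-0000-0000-0000-000000000000"  # placeholder workflow UUID
    resp = requests.post(
        f"https://circleci.com/api/v2/workflow/{workflow_id}/rerun",
        headers={"Circle-Token": os.environ["CIRCLE_TOKEN"]},  # personal API token (assumed)
        json={"from_failed": True},  # re-run starting from the failed jobs
    )
    resp.raise_for_status()
    print(resp.json())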

No workflow notifications being sent

Jul 27, 15:00 UTC Resolved - From 15:23 until 16:15 UTC, no workflow notifications were sent. A change was reverted and notifications are back to normal as of 16:26 UTC.

Last Update: About 6 days ago

Delays or Failures in Jobs Starting

Jul 22, 13:30 UTC Resolved - From 13:35 to 13:39 UTC, jobs may have been delayed or may have failed to start. This was due to a database failover. As of 13:39 UTC, our systems have fully recovered and all jobs are starting successfully.

Last Update: About 12 days ago

Machine and Remote Docker jobs may have encountered longer loading times or timeouts

Jul 20, 12:30 UTC Resolved - Between 13:34 and 14:08 UTC, machine executor jobs and jobs that use remote Docker may have encountered longer load times or timed out. The issue is resolved.

Last Update: About 13 days ago

Third-Party Network Resolver Issues

Jul 17, 23:31 UTC Resolved - All CircleCI systems are fully operational and builds are successfully running. We did not observe a significant impact to our environments due to this third-party incident. We are resolving our incident and will move to our standard monitoring protocol.
Jul 17, 22:53 UTC Update - Many of the dependency and image libraries are successfully resolving, and they are beginning to resolve their incidents on their respective status pages. Builds are triggering and running successfully at this time. We will continue to monitor our systems and respond accordingly if necessary.
Jul 17, 21:56 UTC Monitoring - We are currently monitoring an incident impacting Cloudflare's network and determining the impact it may have on our systems. Potential impact would be slow or failing image pulls or build dependency downloads.

Last Update: About 16 days ago

Unable to trigger pipelines from Bitbucket

Jul 16, 21:48 UTC Resolved - We have continued to monitor this issue and we have seen no new occurrences. Pipelines will now be triggered from Bitbucket pushes as expected. We are marking this incident as resolved.
Jul 16, 21:21 UTC Monitoring - We have rolled back to a previous version, and Bitbucket users can now trigger pipelines by pushing fresh commits. We will continue monitoring the recovery of pipelines.
Jul 16, 20:49 UTC Identified - We have identified an issue where Bitbucket project followers were not being correctly identified. We have deployed a solution at this time. Bitbucket users who were affected should push a fresh commit to trigger new pipelines.
Jul 16, 20:46 UTC Update - We are continuing to investigate this issue.
Jul 16, 20:30 UTC Investigating - We are currently investigating an issue where user commits on Bitbucket are not triggering pipelines on CircleCI, starting from 2020-07-16 19:01 UTC and ongoing.

Last Update: About 17 days ago
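
Where the updates above ask affected Bitbucket users to "push a fresh commit", an empty commit is enough when there is nothing to change. A minimal sketch, assuming a local clone of the affected repository and push access:

    # Push an empty commit so Bitbucket sends a fresh webhook and
    # CircleCI triggers a new pipeline; no file changes are required.
    import subprocess

    subprocess.run(
        ["git", "commit", "--allow-empty", "-m", "Retrigger CircleCI pipeline"],
        check=True,
    )
    subprocess.run(["git", "push"], check=True)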

Delay in Builds Triggering

Jul 15, 19:01 UTC Resolved - All systems have recovered and builds are triggering successfully. This incident is now resolved.
Jul 15, 18:38 UTC Monitoring - GitHub experienced an incident earlier today that caused delays in builds triggering. We have scaled up appropriately and are processing the influx of webhooks. We will continue monitoring the recovery of our systems.
Jul 15, 18:06 UTC Identified - Due to the GitHub incident earlier today, we have identified that several of our systems are slow to create builds. We are currently scaling up accordingly.

Last Update: About 18 days ago

Job links/pages may return information for the wrong job

Jul 9, 19:12 UTC Resolved - We have continued to monitor this issue and we have seen no new occurrences. We are marking this incident as resolved; however, if you have existing jobs with this problem that occurred between 2020-07-07 17:30 UTC and 2020-07-09 14:25 UTC that you need corrected, please submit a support request (https://support.circleci.com/hc/en-us/requests/new) with a link to the affected job. We will then be able to fix it so it directs properly.
Jul 9, 16:43 UTC Update - As an update, we are still investigating what options are available for addressing the existing jobs that are redirecting within a project. We will post a further update once we have additional details.
Jul 9, 15:19 UTC Update - Our fix for future jobs is deployed and stable. We are currently working on a solution for existing, historical jobs that are still redirecting to previously run jobs in the same project.
Jul 9, 14:25 UTC Monitoring - We have implemented a fix for this issue and future jobs will no longer encounter the problem. However, any existing jobs will continue to direct to previously run jobs in the project. We are investigating options to correct the existing jobs.
Jul 9, 13:58 UTC Update - To clarify the scope, the job link will only point to another job within the same project. Specifically, a previously run job in another pipeline within the project.
Jul 9, 13:43 UTC Identified - We have identified an issue where some job links may point to the wrong job. For instance, when clicking a job on the workflow page, the wrong job may open up. This may include jobs which were created within the past 24 hours. We are working on a resolution at this time.

Last Update: About 24 days ago

Delays starting jobs

Jul 6, 14:29 UTC Resolved - All jobs that were delayed have processed and we are no longer experiencing delays starting new jobs.
Jul 6, 14:25 UTC Identified - We have identified a delay in starting jobs and have taken the steps needed to help clear the backlog.

Last Update: About 27 days ago

Cannot access page due to invalid redirects

Jul 3, 07:14 UTC Resolved - This incident has been resolved. Thank you for your patience.
Jul 3, 07:04 UTC Update - We are continuing to monitor for any further issues.
Jul 3, 07:03 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Jul 3, 06:51 UTC Update - Users cannot access the CircleCI UI at the moment due to invalid redirects. We have traced the issue to a recent change, which we are now reverting.
Jul 3, 06:49 UTC Identified - The issue has been identified and a fix is being implemented.

Last Update: About 1 month ago

Slow Docker downloads

Jul 1, 12:07 UTC Resolved - Docker pulls should be back to normal speeds. If you have ongoing performance issues related to Docker pulls, please submit a support ticket: https://support.circleci.com/hc/en-us/requests/new
Jul 1, 10:06 UTC Investigating - Some users may be experiencing longer than normal image downloads on the machine executor and in the remote Docker environment. We are currently investigating the issue.

Last Update: About 1 month ago

Slow Docker operations to various registries (Docker Hub, ECR, and quay.io)

Jun 30, 14:49 UTC Resolved - We are no longer seeing occurrences of this issue.
Jun 30, 13:29 UTC Update - We are continuing to monitor for any further occurrences. Please contact CircleCI Support if you are experiencing slow Docker operations.
Jun 30, 13:29 UTC Monitoring - We are no longer seeing occurrences of this issue. We will continue monitoring.
Jun 30, 12:58 UTC Update - Our investigation is still ongoing. Please contact CircleCI Support if your builds are affected.
Jun 30, 12:17 UTC Update - We are liaising with GCP and continuing to investigate this issue.
Jun 30, 11:18 UTC Update - We are still investigating this issue, which is causing slow network traffic and an increase in connection failures.
Jun 30, 10:45 UTC Investigating - We are seeing longer durations for Docker image downloads, which is causing an increase in build duration. The issue is intermittent and is affecting Remote Docker and Machine jobs. We are currently investigating this issue.

Last Update: About 1 month ago

Starting jobs delayed

Jun 29, 12:32 UTC Resolved - This incident has been resolved.
Jun 29, 12:25 UTC Monitoring - A fix has been implemented and we are monitoring the results. The latency is nearly back to normal.
Jun 29, 12:17 UTC Identified - The issue has been identified and a fix is being implemented.
Jun 29, 12:17 UTC Investigating - We’ve hit a scaling issue due to the unusual traffic pattern caused by the GitHub outage.

Last Update: About 1 month ago

No webhooks received from GitHub

Jun 29, 12:09 UTC Resolved - GitHub's incident has been resolved (https://www.githubstatus.com). We are now receiving all GitHub webhooks again.
Jun 29, 11:27 UTC Monitoring - GitHub services are recovering (https://www.githubstatus.com). We are monitoring the situation on our side.
Jun 29, 11:23 UTC Update - We are seeing an improvement in the number of webhooks we receive from GitHub; however, this is still significantly below normal levels. You might still not be able to trigger builds on commit/tag push or pull request, or to set up CircleCI projects.
Jun 29, 10:29 UTC Update - GitHub's outage is still ongoing. Please see GitHub's status page (https://www.githubstatus.com).
Jun 29, 10:09 UTC Update - GitHub has identified the source of this outage and is working on recovery. Please see GitHub's status page (https://www.githubstatus.com).
Jun 29, 09:45 UTC Identified - GitHub is experiencing a major outage. The GitHub service is currently not available. This is preventing CircleCI builds from being triggered on commit/tag push or pull request. It will also prevent users from setting up projects. Please see GitHub's status page (https://www.githubstatus.com).
Jun 29, 09:20 UTC Investigating - Since 09:07 UTC, we have not been receiving any webhooks from GitHub (https://www.githubstatus.com/). This is potentially affecting all builds.

Last Update: About 1 month ago

Workflows held in queue

Jun 26, 09:27 UTC Resolved - Workflows may have been held in a queued state between 09:07 and 09:22 UTC. The issue is resolved for all orgs now.

Last Update: About 1 month ago

Database Maintenance

Jun 23, 00:22 UTC Completed - The scheduled maintenance has been completed.
Jun 23, 00:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 22, 21:05 UTC Scheduled - GitHub Checks status updates will be paused for a brief period during routine database maintenance starting at 2020-06-23 00:00:00 UTC. Status updates will be queued and then processed after the database maintenance is complete.

Last Update: About 1 month ago

Jobs held in queued status

Jun 22, 09:30 UTC Resolved - Between 10:35 and 10:50 UTC, jobs may have been held in a queued state. The issue causing this has been identified and resolved. Job processing should now be back to normal.

Last Update: About 1 month ago

Database Maintenance

Jun 22, 01:00 UTC Completed - The scheduled maintenance has been completed.
Jun 22, 00:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 18, 19:04 UTC Scheduled - GitHub Checks status updates will be paused for a brief period during routine database maintenance starting at 2020-06-22 00:00:00 UTC. Status updates will be queued and then processed after the database maintenance is complete.

Last Update: About 1 month ago

Issues accessing the CircleCI Web UI

Jun 9, 21:21 UTC Resolved - We are no longer seeing errors in the CircleCI Web UI. The incident is now resolved.
Jun 9, 21:18 UTC Monitoring - We have identified the issue and are monitoring a fix. We are currently not seeing additional errors in the CircleCI Web UI.
Jun 9, 20:52 UTC Update - We are seeing errors for users attempting to access the CircleCI Web UI. Customers using the CircleCI CLI tool may also see errors.
Jun 9, 20:50 UTC Investigating - We are seeing errors for users attempting to access the CircleCI Web UI.

Last Update: About 1 month ago

Builds failing with "Config Processing Error" (Orbs)

Jun 2, 12:25 UTC Resolved - This incident has been resolved.
Jun 2, 12:16 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Jun 2, 12:12 UTC Identified - The issue has been identified and a fix is being implemented.
Jun 2, 11:57 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Builds failing with "Config Processing Error"

Jun 2, 09:51 UTC Resolved - This incident has been resolved.
Jun 2, 09:38 UTC Monitoring - The rollback has now completed and we are monitoring the situation.
Jun 2, 09:22 UTC Identified - We've identified an issue in a recent deployment. We are now rolling back the change.
Jun 2, 09:20 UTC Update - We are continuing to investigate this issue.
Jun 2, 09:20 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

New jobs don't show up on the old UI

Jun 1, 08:31 UTC Resolved - All builds show up as expected in the old UI. This issue is resolved.
Jun 1, 08:20 UTC Monitoring - A fix has been implemented; we still expect some delay in loading the UI.
Jun 1, 07:43 UTC Update - We are continuing to work on a fix for this issue.
Jun 1, 06:38 UTC Update - We are continuing to work on a fix for this issue.
Jun 1, 05:12 UTC Update - We are continuing to work on a fix for this issue.
Jun 1, 04:33 UTC Update - We are continuing to work on a fix for this issue.
Jun 1, 04:05 UTC Update - We are continuing to work on a fix for this issue.
Jun 1, 03:28 UTC Identified - We have identified the issue as MongoDB replication lag.
Jun 1, 02:21 UTC Investigating - We are currently working on an issue where new jobs don't show up on the old UI. This incident is related to a problem getting recent builds from the 1.1 API. Note: the new UI shows the new pipelines.

Last Update: A few months ago

Shows Bad Gateway error

Jun 1, 06:38 UTC Resolved - This incident has been resolved.
Jun 1, 06:33 UTC Monitoring - We have recovered a database service and continue to monitor the situation.
Jun 1, 06:26 UTC Update - We are continuing to investigate this issue.
Jun 1, 06:24 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

macOS builds are failing to run

May 29, 18:17 UTC Resolved - New macOS builds are now running. This issue is resolved.May 29, 17:53 UTC Monitoring - We have rolled back the change and VMs are booting again.May 29, 17:47 UTC Identified - We identified the issue with a commit causing builds not to run.May 29, 17:44 UTC Investigating - The macOS build system is currently experiencing an outage and new builds are failing to run.

Last Update: A few months ago

MacOS Network Update

May 23, 23:39 UTC Completed - The scheduled maintenance has been completed.May 23, 23:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.May 18, 16:38 UTC Scheduled - There will be a brief maintenance on the network serving a small portion of our macOS build machines on Saturday, May 23, 2020, between 23:00 UTC and 00:00 UTC (between 16:00 PST and 17:00 PST). This will be a network configuration update which will allow us to further increase macOS capacity.During this time there may be a brief drop in network connectivity to some of the macOS machines. If you initiate jobs during this period that do not run or do not finish, please retry them when the maintenance period is complete.While this is generally a low-volume build time, we realize that this may be an inconvenience for customers and apologize in advance for any disruption to your work.Thank you for your patience and support.

Last Update: A few months ago

Builds are failing with "Unauthorized"

May 22, 01:48 UTC Resolved - All builds are now running as expected. This issue is resolved.May 22, 01:38 UTC Monitoring - We have implemented a fix and affected builds appear to be running again. We are currently monitoring the situation.May 22, 01:26 UTC Update - Some customer builds are failing with "Unauthorized". We are currently investigating the issue.May 22, 01:21 UTC Update - We are continuing to investigate this issue.May 22, 01:20 UTC Investigating - Customer builds are failing with "Unauthorized". We are currently investigating the issue.

Last Update: A few months ago

Docker pulls are failing with "image not found"

May 19, 20:09 UTC Resolved - Customers experienced issues pulling Docker images with the error "image not found". Our metrics indicate the Docker pull times have returned to normal, and we are continuing to monitor the situation.

Last Update: A few months ago

Delayed Machine and Remote Docker Jobs

May 14, 17:04 UTC Resolved - This incident has been resolved.May 14, 16:50 UTC Monitoring - A fix has been implemented and we are monitoring the results.May 14, 16:29 UTC Update - The issue is caused by slow responses from our cloud provider, and we’re working around the issue.May 14, 16:16 UTC Update - We are continuing to work on a fix for this issue.May 14, 15:51 UTC Update - We are continuing to experience delayed machine and remote Docker jobs, and we are working to resolve this.May 14, 15:28 UTC Identified - Machine and remote Docker jobs are delayed.

Last Update: A few months ago

Delayed Machine and Remote Docker Jobs

May 14, 13:41 UTC Resolved - Machine and remote Docker job wait times have returned to normal.May 14, 13:32 UTC Monitoring - We are monitoring delayed machine and remote Docker jobs.May 14, 13:31 UTC Investigating - We are currently investigating an issue causing delayed machine jobs, which includes remote Docker instances.

Last Update: A few months ago

Delayed Machine and Remote Docker Jobs

May 14, 11:55 UTC Resolved - Machine and remote Docker job wait times have returned to normal.May 14, 11:51 UTC Update - We are continuing to monitor machine jobs, with some images seeing a residual scaling backlog.May 14, 11:24 UTC Monitoring - Machine and remote Docker jobs are continuing to recover, and wait time is returning to normal.May 14, 11:10 UTC Identified - We have identified the issue, which is related to our cloud provider’s volume management. This is in recovery.May 14, 10:53 UTC Update - We are continuing to investigate delays while creating machine and remote Docker jobs.May 14, 10:32 UTC Investigating - We are currently investigating an issue causing delayed machine jobs, which includes remote Docker instances.

Last Update: A few months ago

Some customers experiencing out of memory errors for unchanged builds

May 13, 23:27 UTC Resolved - We have implemented a fix for out of memory errors and customers should no longer be experiencing issues with failed builds.May 13, 23:14 UTC Update - We are continuing to investigate this issue.May 13, 22:58 UTC Update - We are continuing to investigate this issue regarding Docker jobs.May 13, 22:56 UTC Update - We are continuing to investigate this issue.May 13, 22:36 UTC Update - We are continuing to investigate this issue.May 13, 22:10 UTC Update - We are continuing to investigate this issue regarding out of memory errors.May 13, 21:45 UTC Update - We are continuing to investigate this issue.May 13, 21:44 UTC Investigating - We are currently investigating why some customers on Container and Performance plans are experiencing out of memory errors

Last Update: A few months ago

Intermittent UI issues due to a Bitbucket service disruption

May 13, 23:20 UTC Resolved - The Bitbucket service disruption is resolved and we are no longer seeing any intermittent CircleCI UI issues. The issue is resolved.May 13, 22:49 UTC Update - We are seeing improvement in the Bitbucket service disruption and are no longer seeing intermittent CircleCI UI issues. We are continuing to monitor the situation.May 13, 22:17 UTC Update - We are continuing to monitor the Bitbucket service disruption and intermittent CircleCI UI issues.May 13, 21:55 UTC Update - We are continuing to monitor the Bitbucket service disruption and intermittent CircleCI UI issues.May 13, 21:28 UTC Monitoring - We are seeing intermittent UI issues due to a Bitbucket service disruption. We are monitoring the situation.

Last Update: A few months ago

macOS Builds Delayed

Apr 28, 12:32 UTC Resolved - We are processing jobs for all XCode images normally.Apr 28, 12:21 UTC Monitoring - Jobs for all XCode images are returning to normal.Apr 28, 12:06 UTC Update - Jobs running XCode images 9.4.1, 10.3.0 and 10.1.0 are continuing to suffer long delays. We are working on a resolution - changing versions can help avoid this delay.Apr 28, 11:45 UTC Update - We have identified that only jobs running XCode images 9.4.1, 10.3.0 and 10.1.0 are suffering long delays. We are continuing to work on a resolution, but in the meantime changing versions can help avoid this delay.Apr 28, 11:21 UTC Update - We are continuing to work on a fix for delayed macOS jobs.Apr 28, 11:01 UTC Update - We are continuing to experience delayed macOS jobs, and are working to resolve this.Apr 28, 10:39 UTC Update - We are continuing to work towards resolving delayed macOS jobs.Apr 28, 10:16 UTC Update - We are continuing to work on resolving delayed macOS jobs.Apr 28, 09:55 UTC Update - We are continuing to work on a fix for delayed macOS jobs.Apr 28, 09:29 UTC Identified - We have identified issues with our macOS network, and we are working to resolve this.Apr 28, 09:12 UTC Monitoring - We are once again processing macOS jobs. We will continue to monitor for stability and performance.Apr 28, 09:07 UTC Identified - We are experiencing network issues with our macOS fleet, and are working to resolve this.Apr 28, 09:03 UTC Investigating - macOS builds are delayed; we are investigating this issue.

Last Update: A few months ago
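
The workaround mentioned in this incident ("changing versions can help avoid this delay") refers to the Xcode image a macOS job selects in its `.circleci/config.yml`: the version is pinned in the `macos` executor stanza, so moving off an affected image is a one-line change. A minimal sketch, assuming CircleCI 2.1 config syntax; the job name and the `11.4.1` version here are illustrative, not taken from the incident:

```yaml
version: 2.1
jobs:
  build:
    macos:
      xcode: "11.4.1"  # pin an image other than the delayed 9.4.1 / 10.1.0 / 10.3.0
    steps:
      - checkout
      - run: xcodebuild -version  # confirm which Xcode the image actually provides
```

Note that the new version only takes effect for pipelines triggered after the config change is pushed, since CircleCI reads the config from the repository at each revision.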

MacOS jobs not running

Apr 26, 13:24 UTC Resolved - We are processing jobs normally.Apr 26, 13:03 UTC Monitoring - Connectivity to our colocation provider was impacted, and has now been restored. We are monitoring our platform for performance and stability.Apr 26, 12:48 UTC Investigating - We are investigating a network issue that impacts our macOS environment. Users may be unable to start macOS jobs.

Last Update: A few months ago

macOS jobs not starting

Apr 26, 12:44 UTC Resolved - We are currently investigating this issue.

Last Update: A few months ago

MacOS jobs not running

Apr 26, 06:41 UTC Resolved - The macOS environment is stable and jobs are executing normally.Apr 26, 06:19 UTC Monitoring - We are once again processing macOS jobs. We will continue to monitor our platform for stability and performance.Apr 26, 05:40 UTC Identified - Our colocation provider has exceeded their scheduled network maintenance window without restoring connectivity. We are monitoring the situation and will provide updates as new information becomes available.

Last Update: A few months ago

macOS Maintenance

Apr 26, 05:00 UTC Completed - The scheduled maintenance has been completed.Apr 26, 04:34 UTC Update - We are experiencing longer-than-expected downtime during planned network maintenance. Users will experience delays starting macOS jobs, and running jobs may fail.Apr 26, 01:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Apr 20, 16:40 UTC Update - We will be undergoing scheduled maintenance during this time.Apr 18, 16:07 UTC Scheduled - There will be maintenance for our macOS machines on Sunday, April 26, 2020, between the hours of 01:00 UTC and 04:00 UTC (Saturday, April 25, 2020 between 18:00 PST and 22:00 PST).During this time, we expect that there will be periods when macOS jobs will be delayed or will fail as network interruptions occur. If you initiate jobs during this period that do not run or do not finish, please retry them when the maintenance period is complete.While this is generally a low-volume build time, we realize that this may be an inconvenience for customers and apologize in advance for any disruption to your work.Thank you for your patience and support.

Last Update: A few months ago

Pipelines page is degraded, and notifications are delayed

Apr 22, 15:45 UTC Resolved - We had an incident affecting the CircleCI UI (Pipelines page) and notifications today (April 22nd) from around 14:45 to around 15:20 UTC. This incident has now been resolved.Apr 22, 15:23 UTC Update - We are continuing to monitor for any further issues.Apr 22, 15:23 UTC Monitoring - A fix has been implemented and we are monitoring the results.Apr 22, 15:18 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Git Checkout Errors

Apr 16, 11:50 UTC Resolved - A change to how SSH git checkouts were handled, which affected around 5% of checkout attempts, has been reverted. This occurred between 10:50 UTC and 11:46 UTC.

Last Update: A few months ago

Pipelines Page Inaccessible (500 Error)

Apr 16, 07:30 UTC Resolved - The pipelines page was temporarily inaccessible, and we have rolled back a change to resolve the issue. This occurred between 07:50 UTC and 08:42 UTC.

Last Update: A few months ago

The Pipelines UI Was Inaccessible

Apr 15, 19:00 UTC Resolved - Between 19:12 and 19:22 UTC, the Pipelines UI was inaccessible. Customers would have received error messages when attempting to view the pipelines, workflows, and jobs views on the new UI. We have rolled back the change and customers should no longer be experiencing issues navigating or viewing the UI.

Last Update: A few months ago

Cron Jobs and Scheduled Workflows failed with "Unauthorized"

Apr 14, 14:00 UTC Resolved - Between 14:10 and 18:48 UTC, some Cron Jobs and Scheduled Workflows may have failed with "Unauthorized". This was due to a breaking change that was recently deployed. As of 18:48 UTC, we have rolled back that change and all Cron Jobs and Scheduled Workflows are processing normally.

Last Update: A few months ago

Delay in sending Slack and email notifications

Apr 9, 16:36 UTC Resolved - This is now resolved.Apr 9, 16:29 UTC Monitoring - We are no longer seeing delays in notification and are monitoring the situation. All jobs are continuing to run normally.Apr 9, 16:14 UTC Investigating - We are investigating a delay in sending Slack and email notifications. All jobs are running normally.

Last Update: A few months ago

Machine and Remote Docker jobs failing

Apr 8, 15:16 UTC Resolved - This incident has been resolved.Apr 8, 14:48 UTC Monitoring - A fix has been implemented and we are monitoring the results.Apr 8, 14:30 UTC Identified - The issue has been identified and a fix is being implemented.Apr 8, 14:19 UTC Update - We are continuing to investigate this issue.Apr 8, 14:10 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Database Failover

Apr 8, 00:00 UTC Resolved - A database failover occurred between 13:00 UTC and 13:04 UTC, which may have caused some workflows to become stuck. We have manually cancelled these jobs and they can be retried. Cancelled jobs will appear as being cancelled by an "unregistered user".

Last Update: A few months ago

Scheduled Database Maintenance

THIS IS A SCHEDULED EVENT Apr 5, 01:00 - 05:00 UTC Mar 31, 18:25 UTC Scheduled - The CircleCI team will be performing scheduled database maintenance on Sunday, April 5, 2020, between the hours of 01:00 UTC and 05:00 UTC (Saturday, April 4, 2020 between 18:00 PDT and 22:00 PDT).During this time, we expect that there will be periods when jobs do not run, running jobs are canceled, parts of the user interface are inaccessible, and webhooks may be dropped. If you initiate jobs during this period that do not run or do not finish, please retry them when the maintenance period is complete.

Last Update: A few months ago

GitHub Webhooks delayed

Mar 31, 09:37 UTC Resolved - Incoming webhooks from GitHub have stabilised and builds/statuses should no longer be delayed.Mar 31, 08:00 UTC Monitoring - Builds and statuses are currently delayed due to delayed GitHub webhooks.

Last Update: A few months ago

Increased Queue Times for All Jobs

Mar 30, 21:29 UTC Resolved - This incident has been resolved.Mar 30, 21:29 UTC Update - All executors are functional and jobs are running at their normal rate.Mar 30, 21:05 UTC Update - We are seeing jobs now starting at their normal rate.Mar 30, 20:32 UTC Update - We are still experiencing a 2-minute delay across all executors and are working to reduce it further.Mar 30, 20:10 UTC Update - We are still experiencing a 4-minute delay across all executors and are working to reduce it further.Mar 30, 19:46 UTC Update - We are still experiencing a 6-minute delay across all executors and are working to reduce it further.Mar 30, 19:22 UTC Update - We are still experiencing a 6-minute delay across all executors and are working to reduce it further.Mar 30, 18:55 UTC Update - We are still experiencing a 6-minute delay across all executors and are working to reduce it further.Mar 30, 18:30 UTC Update - We are still experiencing a 6-minute delay across all executors and are working to reduce it further.Mar 30, 18:15 UTC Update - We are continuing to work on a fix for this issue.Mar 30, 17:50 UTC Update - Queue time is improving. We are continuing to work on a fix for this issue.Mar 30, 17:27 UTC Update - Queue time is improving. We are continuing to work on a fix for this issue.Mar 30, 16:59 UTC Update - We are continuing to work on a fix for this issue.Mar 30, 16:52 UTC Update - We are continuing to work on a fix for this issue.Mar 30, 16:29 UTC Identified - The issue has been identified and a fix is being implemented.Mar 30, 16:02 UTC Update - We are continuing to investigate this issue.Mar 30, 15:33 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Increased Queue times for all jobs

Mar 30, 15:33 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Jobs Queueing

Mar 27, 21:48 UTC Resolved - Build start times for all executor types have returned to normal. We are continuing to monitor the situation for recurrence.
Mar 27, 21:43 UTC Monitoring - We are continuing to monitor the queue for macOS builds.
Mar 27, 21:38 UTC Update - The incident has been resolved for the Docker and Machine executors. We are monitoring the queue for macOS builds.
Mar 27, 21:02 UTC Update - macOS builds have begun running; however, we are seeing degraded performance as the queue is processed. We are also seeing degraded performance with Google Cloud Platform's Compute Engine.
Mar 27, 20:48 UTC Update - We are continuing to work on a fix for this issue.
Mar 27, 20:03 UTC Update - We are continuing to work on a fix for this issue.
Mar 27, 19:34 UTC Update - We are continuing to work towards a fix for the inability to start macOS builds.
Mar 27, 19:07 UTC Update - We are continuing to work on a fix for this issue.
Mar 27, 18:38 UTC Update - We are continuing to work on a fix for this issue.
Mar 27, 18:12 UTC Update - We are continuing to work on a fix for this issue.
Mar 27, 17:40 UTC Update - We are continuing to work on a fix for this issue.
Mar 27, 17:13 UTC Update - We have disabled a replica sync process that appeared to be placing extra load on Mongo.
Mar 27, 16:52 UTC Update - macOS builds are running; however, builds may queue over the next few hours as we process the existing backlog.
Mar 27, 16:47 UTC Update - We are continuing to work on a fix for this issue.
Mar 27, 16:40 UTC Update - We are continuing to work on a fix for this issue.
Mar 27, 16:24 UTC Identified - The issue has been identified and a fix is being implemented.
Mar 27, 16:21 UTC Investigating - We are investigating an issue that is impacting some jobs across all build types.

Last Update: A few months ago

UI inaccessible

Mar 17, 00:54 UTC Resolved - This incident has been resolved.
Mar 17, 00:43 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Mar 17, 00:35 UTC Update - We are seeing intermittent speed and load issues and are investigating.
Mar 17, 00:27 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Builds are failing for some customers on Performance Plans

Mar 11, 20:49 UTC Resolved - This incident has been resolved; all builds should be back to normal.
Mar 11, 20:46 UTC Monitoring - A fix has been implemented and we are now seeing previously blocked builds on Custom Performance and Performance Plans beginning to recover.
Mar 11, 20:45 UTC Identified - We have identified why some customers on Custom Performance Plans and Performance Plans are experiencing issues: a recent change.
Mar 11, 20:41 UTC Investigating - We are currently investigating why some customers on the Performance Plan and Custom Performance Plans are blocked from building.

Last Update: A few months ago

Long Build Queue Times

Mar 10, 20:38 UTC Resolved - This incident is resolved. macOS builds are now processing normally. All previously impacted components are fully operational.
Mar 10, 19:44 UTC Update - Queue times have recovered for Docker, Machine, and Workflows. For macOS, jobs using resource_class large have recovered; macOS jobs using resource_class medium will continue to experience long queue times while we work through a backlog.
Mar 10, 18:46 UTC Monitoring - We have deployed a fix and are currently monitoring the results. We have scaled up our systems to work through the current backlog. Recovery will take longer for the macOS fleet.
Mar 10, 18:34 UTC Identified - We have identified the cause of long build queue times and are currently implementing a fix.
Mar 10, 18:32 UTC Update - We are continuing to investigate the cause of slow builds.
Mar 10, 18:08 UTC Update - Our teams are continuing to investigate the issue causing builds to run slowly. We will update our status page every 20 minutes until the incident has been resolved.
Mar 10, 17:41 UTC Investigating - We are currently investigating an issue causing all jobs to queue, leading to long build queue times.

Last Update: A few months ago

Failed to Run Scheduled Workflows Due to a Bad Deploy

Mar 10, 16:36 UTC Resolved - Between 12:36 UTC and 13:04 UTC, we failed to run a portion of scheduled workflows due to a bad deploy. Please push a new commit to schedule it again.

Last Update: A few months ago
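
For anyone affected: pushing an empty commit is enough to fire the push webhook and get the workflow scheduled again. A minimal sketch of that workaround, assuming a local clone with push access (the branch name and commit message are placeholders):

import subprocess

def retrigger(branch: str = "main") -> None:
    """Push an empty commit so CI re-receives a push webhook for `branch`."""
    subprocess.run(["git", "checkout", branch], check=True)
    # --allow-empty records a commit with no file changes; it still triggers CI.
    subprocess.run(
        ["git", "commit", "--allow-empty", "-m", "ci: retrigger workflow"],
        check=True,
    )
    subprocess.run(["git", "push", "origin", branch], check=True)

if __name__ == "__main__":
    retrigger()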

UI inaccessible and GitHub Checks Degraded

Mar 10, 03:32 UTC Resolved - This incident has been resolved.
Mar 10, 03:28 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Mar 10, 03:21 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

UI inaccessible

Mar 8, 22:40 UTC Resolved - This incident has been resolved.
Mar 8, 22:31 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Mar 8, 22:21 UTC Identified - The issue has been identified and a fix is being implemented.
Mar 8, 22:14 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Issue with some Docker jobs failing to start

Mar 4, 23:42 UTC Resolved - We have resolved this incident and all systems are fully operational.
Mar 4, 22:59 UTC Monitoring - We have deployed a fix and are currently monitoring the results.
Mar 4, 22:29 UTC Update - We are continuing to investigate to identify specific causes.
Mar 4, 22:07 UTC Update - We are continuing to investigate and attempting to identify specific causes.
Mar 4, 21:41 UTC Update - We are currently evaluating the impact of the added 10-minute Docker image pull timeout and investigating other potential contributors to the issues we're experiencing.
Mar 4, 21:20 UTC Update - We have applied a maximum time of 10 minutes to individual pulls of Docker images to reduce the impact of the issues we're experiencing.
Mar 4, 20:56 UTC Update - We are continuing to work on a fix for this issue.
Mar 4, 20:19 UTC Identified - The issue has been identified and a fix is being implemented.
Mar 4, 19:59 UTC Update - We are continuing to investigate this issue.
Mar 4, 19:32 UTC Investigating - We are investigating an issue causing some Docker jobs to fail to start.

Last Update: A few months ago
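
The 10-minute cap on individual image pulls described above was applied on CircleCI's side. The same idea can be sketched client-side with a bounded subprocess call; a rough illustration only, assuming a local Docker CLI (the image name is a placeholder):

import subprocess

def pull_with_timeout(image: str, timeout_s: int = 600) -> bool:
    """Pull `image`, giving up after `timeout_s` seconds (10 minutes by default)."""
    try:
        # subprocess.run kills the child process once the timeout expires.
        subprocess.run(["docker", "pull", image], check=True, timeout=timeout_s)
        return True
    except subprocess.TimeoutExpired:
        print(f"pull of {image} exceeded {timeout_s}s; treating it as failed")
        return False

if __name__ == "__main__":
    pull_with_timeout("cimg/base:stable")  # placeholder image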

GitHub Webhooks and API degraded

Feb 27, 16:25 UTC Resolved - Delays in job triggering and GitHub Checks have been resolved. We are continuing to monitor, but customer impact has ended.
Feb 27, 16:10 UTC Update - GitHub services are returning to normal, and times to trigger jobs and workflows are returning to normal. Delays for GitHub Checks should also be returning to normal.
Feb 27, 15:35 UTC Update - We are monitoring the GitHub API and webhook deliveries. Workflow and job triggering is delayed, along with other actions that rely on the GitHub API.
Feb 27, 15:14 UTC Update - Workflows and jobs continue to be delayed, along with GitHub Checks and other interactions with the GitHub API.
Feb 27, 14:46 UTC Update - We are continuing to monitor the status of the GitHub API and webhook deliveries.
Feb 27, 14:23 UTC Monitoring - Workflow and job triggering is delayed, and GitHub Checks are queued.

Last Update: A few months ago

UI inaccessible and GitHub Checks Degraded

Feb 26, 03:04 UTC Resolved - The UI is now operating at normal levels.
Feb 26, 01:27 UTC Update - We are continuing to work on a fix for this issue impacting the UI.
Feb 26, 01:00 UTC Update - We are continuing to implement a fix for this issue impacting the UI.
Feb 26, 00:35 UTC Update - We are continuing to work on a complete fix for the issue impacting the UI and are implementing a solution.
Feb 26, 00:21 UTC Update - We are continuing to work on a complete fix for the issue impacting the UI. The GitHub Checks backlog has cleared; checks are currently processing normally and are fully operational.
Feb 25, 23:55 UTC Identified - Our teams have identified the issue and are deploying a fix. Our systems are working through a backlog.
Feb 25, 23:28 UTC Investigating - We are currently investigating an issue that is making our UI inaccessible.

Last Update: A few months ago

GitHub Webhooks and API degraded

Feb 25, 19:56 UTC Resolved - GitHub webhook and API processing has returned to normal.
Feb 25, 19:28 UTC Update - GitHub Checks posting is no longer delayed. Some checks have been lost as a result of the GitHub outage.
Feb 25, 19:08 UTC Update - Checks are still delayed; GitHub is rate-limiting us, so we cannot work through the queue as fast as we would like.
Feb 25, 18:40 UTC Update - We are monitoring GitHub's recovery; checks are still delayed.
Feb 25, 18:15 UTC Update - GitHub-related actions, such as adding projects, may be impacted due to GitHub API delays.
Feb 25, 17:50 UTC Update - GitHub-related actions, such as fetching commits during jobs and adding projects, may be impacted due to GitHub API delays.
Feb 25, 17:19 UTC Update - We are continuing to monitor. Repository-related API calls are delayed; workflows and webhooks are returning to normal.
Feb 25, 16:50 UTC Monitoring - Workflows and GitHub Checks are delayed.

Last Update: A few months ago

Database Maintenance

Feb 23, 03:55 UTC Completed - The scheduled maintenance has been completed. All systems should be functioning normally. Thank you for your patience.
Feb 23, 00:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 8, 01:17 UTC Scheduled - CircleCI will be performing database maintenance on Sunday, Feb 23, 2020, between the hours of 00:00 UTC and 04:00 UTC (Saturday, Feb 22, 2020 between 16:00 and 20:00 PST). During this time, we expect that there will be periods when jobs do not run, parts of the user interface are inaccessible, and webhooks may be dropped. If you initiate jobs during this period that do not run or do not finish, please retry them when the maintenance period is complete. While this is generally a low-volume build time, we realize that this may be an inconvenience for customers and apologize in advance for any disruption to your work. Thank you for your patience and support.

Last Update: A few months ago
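
Beyond re-pushing, interrupted work can also be retried programmatically through CircleCI's v2 API. A minimal sketch, assuming a personal API token in the CIRCLE_TOKEN environment variable; the workflow ID below is a placeholder for a workflow that did not finish:

import os
import requests

def rerun_workflow(workflow_id: str) -> dict:
    """Request a rerun of an entire workflow via the v2 API."""
    resp = requests.post(
        f"https://circleci.com/api/v2/workflow/{workflow_id}/rerun",
        headers={"Circle-Token": os.environ["CIRCLE_TOKEN"]},
        json={"from_failed": False},  # False reruns the whole workflow
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Placeholder ID: substitute the workflow that was interrupted.
    print(rerun_workflow("00000000-0000-0000-0000-000000000000"))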

Job Processing Delayed

Feb 12, 17:06 UTC Resolved - All executor job queues have returned to normal.
Feb 12, 17:00 UTC Update - macOS customers may still experience some delays due to contention.
Feb 12, 16:48 UTC Monitoring - Job queues have returned to their usual levels; customers with limited concurrency may still experience some delay.
Feb 12, 16:34 UTC Update - Queue times are close to returning to normal levels and we are continuing to work on reducing them.
Feb 12, 16:14 UTC Update - We are continuing efforts to reduce job queuing.
Feb 12, 15:51 UTC Identified - We have resolved the underlying issue with our datastore, and job processing should be returning to normal.
Feb 12, 15:48 UTC Update - We are continuing to investigate an issue with our underlying datastore. Delays may be up to 9 minutes per job.
Feb 12, 15:35 UTC Investigating - We are currently investigating the cause of delayed jobs.
Feb 12, 15:15 UTC Identified - Jobs are taking longer than expected to process.

Last Update: A few months ago

Delayed Workflows

Feb 12, 14:22 UTC Resolved - Workflow processing has returned to normal.Feb 12, 14:12 UTC Monitoring - Workflow processing is returning to normal.Feb 12, 14:08 UTC Identified - Workflows may be delayed - we have identified the cause and a fix is being implemented.

Last Update: A few months ago

Delayed Workflows

Feb 12, 14:12 UTC Monitoring - Workflow processing is returning to normal.Feb 12, 14:08 UTC Identified - Workflows may be delayed - we have identified the cause and a fix is being implemented.

Last Update: A few months ago

Delayed Workflows

Feb 12, 14:08 UTC Identified - Workflows may be delayed - we have identified the cause and a fix is being implemented.

Last Update: A few months ago

Database Maintenance

THIS IS A SCHEDULED EVENT Feb 23, 00:00 - 04:00 UTC Feb 8, 01:17 UTC Scheduled - CircleCI will be performing database maintenance on Sunday, Feb 23, 2020, between the hours of 00:00 UTC and 04:00 UTC (Saturday, Feb 22, 2020 between 16:00 PST and 20:00 PST).During this time, we expect that there will be periods when jobs do not run, parts of the user interface are inaccessible, and webhooks may be dropped. If you initiate jobs during this period that do not run or do not finish, please retry them when the maintenance period is complete.While this is generally a low-volume build time, we realize that this may be an inconvenience for customers and apologize in advance for any disruption to your work.Thank you for your patience and support.

Last Update: A few months ago

Database Maintenance

Feb 1, 21:38 UTC Completed - The scheduled maintenance has been completed.Feb 1, 21:38 UTC Verifying - Verification is currently underway for the maintenance items.Feb 1, 21:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jan 29, 21:05 UTC Scheduled - CircleCI will be performing database maintenance on Saturday, Feb 1, 2020, between the hours of 21:00 UTC and 23:00 UTC (4 PM ET/1 PM PT to 6 PM ET/3 PM PT). During this time, we expect that there will be periods when jobs do not run, parts of the user interface are inaccessible, and webhooks may be dropped. If you initiate jobs during this period that do not run or do not finish, please retry them when the maintenance period is complete. While this is generally a low-volume build time, we realize that this may be an inconvenience for customers and apologize in advance for any disruption to your work. For more information about this event, please see https://discuss.circleci.com/t/upcoming-scheduled-maintenance-saturday-february-1-2020-from-21-00-utc-to-23-00-utc/34277

Last Update: A few months ago

Database Maintenance

Feb 1, 21:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jan 29, 21:05 UTC Scheduled - CircleCI will be performing database maintenance on Saturday, Feb 1, 2020, between the hours of 21:00 UTC and 23:00 UTC (4 PM ET/1 PM PT to 6 PM ET/3 PM PT). During this time, we expect that there will be periods when jobs do not run, parts of the user interface are inaccessible, and webhooks may be dropped. If you initiate jobs during this period that do not run or do not finish, please retry them when the maintenance period is complete. While this is generally a low-volume build time, we realize that this may be an inconvenience for customers and apologize in advance for any disruption to your work. For more information about this event, please see https://discuss.circleci.com/t/upcoming-scheduled-maintenance-saturday-february-1-2020-from-21-00-utc-to-23-00-utc/34277

Last Update: A few months ago

Inconsistencies with build status and output between 12:50 and 15:28 UTC

Jan 30, 12:50 UTC Resolved - An issue with deployment and monitoring failures resulted in inconsistencies with build status and output between 12:50 and 15:28 UTC.

Last Update: A few months ago

Inconsistencies with build status and output between 19:40 and 21:50 UTC

Jan 28, 19:40 UTC Resolved - An issue with deployment and monitoring failures resulted in inconsistencies with build status and output between 19:40 and 21:50 UTC.

Last Update: A few months ago

Database Maintenance

THIS IS A SCHEDULED EVENT Feb 1, 21:00 - 23:00 UTC Jan 29, 21:05 UTC Scheduled - CircleCI will be performing database maintenance on Saturday, Feb 1, 2020, between the hours of 21:00 UTC and 23:00 UTC (4 PM ET/1 PM PT to 6 PM ET/3 PM PT). During this time, we expect that there will be periods when jobs do not run, parts of the user interface are inaccessible, and webhooks may be dropped. If you initiate jobs during this period that do not run or do not finish, please retry them when the maintenance period is complete. While this is generally a low-volume build time, we realize that this may be an inconvenience for customers and apologize in advance for any disruption to your work. For more information about this event, please see https://discuss.circleci.com/t/upcoming-scheduled-maintenance-saturday-february-1-2020-from-21-00-utc-to-23-00-utc/34277

Last Update: A few months ago

Networking outage between 17:40 and 17:50 UTC

Jan 22, 17:40 UTC Resolved - A networking outage between 17:40 and 17:50 UTC caused some builds to be lost or to fail; all systems should now be operational.

Last Update: A few months ago

Networking outage between 17:40 and 17:50 UTC

Jan 22, 17:30 UTC Resolved - A networking outage between 17:40 and 17:50 UTC caused some builds to be lost or to fail; all systems should now be operational.

Last Update: A few months ago

Slow checkout, cache, and artifact downloads on macOS

Jan 8, 16:27 UTC Resolved - This incident is now resolved and the macOS network is fully operational.Jan 8, 05:07 UTC Monitoring - Our datacenter provider has completed their maintenance on the macOS network. We will continue to monitor network performance.Jan 8, 00:41 UTC Update - Due to the network issues related to this incident, our datacenter provider will be conducting maintenance on the macOS network between 04:00 and 05:00 UTC on Wednesday, January 8, 2020. While some impacted users have seen improvement to their macOS builds, during the maintenance there may be slower than normal checkout, caching, and artifact downloads on macOS. A separate notification for the scheduled maintenance will be sent.Jan 7, 21:46 UTC Identified - We are aware of a network speed degradation on our macOS executors particularly impacting clone times from GitHub. We are working with our datacenter provider to resolve this issue as soon as possible. You can subscribe to this incident on our status page for future updates until this is resolved.

Last Update: A few months ago

Slow checkout, cache, and artifact downloads on macOS

Jan 8, 05:07 UTC Monitoring - Our datacenter provider has completed their maintenance on the macOS network. We will continue to monitor network performance.Jan 8, 00:41 UTC Update - Due to the network issues related to this incident, our datacenter provider will be conducting maintenance on the macOS network between 04:00 and 05:00 UTC on Wednesday, January 8, 2020. While some impacted users have seen improvement to their macOS builds, during the maintenance there may be slower than normal checkout, caching, and artifact downloads on macOS. A separate notification for the scheduled maintenance will be sent.Jan 7, 21:46 UTC Identified - We are aware of a network speed degradation on our macOS executors particularly impacting clone times from GitHub. We are working with our datacenter provider to resolve this issue as soon as possible. You can subscribe to this incident on our status page for future updates until this is resolved.

Last Update: A few months ago

macOS Network Maintenance

Jan 8, 05:00 UTC Completed - The scheduled maintenance has been completed.Jan 8, 04:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jan 8, 00:30 UTC Scheduled - Due to the network issues related to this incident https://status.circleci.com/incidents/7wzsmx0zwt3m, our datacenter provider will be conducting maintenance on the macOS network between 04:00 and 05:00 UTC. Slower than normal checkout times may be experienced during the maintenance. We will update our Status page once the maintenance is completed.

Last Update: A few months ago

macOS Network Maintenance

Jan 8, 04:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jan 8, 00:30 UTC Scheduled - Due to the network issues related to this incident https://status.circleci.com/incidents/7wzsmx0zwt3m, our datacenter provider will be conducting maintenance on the macOS network between 04:00 and 05:00 UTC. Slower than normal checkout times may be experienced during the maintenance. We will update our Status page once the maintenance is completed.

Last Update: A few months ago

Slow checkout, cache, and artifact downloads on macOS

Jan 8, 00:41 UTC Update - Due to the network issues related to this incident, our datacenter provider will be conducting maintenance on the macOS network between 04:00 and 05:00 UTC on Wednesday, January 8, 2020. While some impacted users have seen improvement to their macOS builds, during the maintenance there may be slower than normal checkout, caching, and artifact downloads on macOS. A separate notification for the scheduled maintenance will be sent.Jan 7, 21:46 UTC Identified - We are aware of a network speed degradation on our macOS executors particularly impacting clone times from GitHub. We are working with our datacenter provider to resolve this issue as soon as possible. You can subscribe to this incident on our status page for future updates until this is resolved.

Last Update: A few months ago

macOS Network Maintenance

THIS IS A SCHEDULED EVENT Jan 8, 04:00 - 05:00 UTC Jan 8, 00:30 UTC Scheduled - Due to the network issues related to this incident https://status.circleci.com/incidents/7wzsmx0zwt3m, our datacenter provider will be conducting maintenance on the macOS network between 04:00 and 05:00 UTC. Slower than normal checkout times may be experienced during the maintenance. We will update our Status page once the maintenance is completed.

Last Update: A few months ago

Slow checkout, cache, and artifact downloads on macOS

Jan 7, 21:46 UTC Identified - We are aware of a network speed degradation on our macOS executors particularly impacting clone times from GitHub. We are working with our datacenter provider to resolve this issue as soon as possible. You can subscribe to this incident on our status page for future updates until this is resolved.

Last Update: A few months ago

Slow checkout, cache, and artifact downloads on MacOS

Jan 7, 21:46 UTC Identified - We are aware of a network speed degradation on our macOS executors particularly impacting clone times from GitHub. We are working with our datacenter provider to resolve this issue as soon as possible. You can subscribe to this incident on our status page for future updates until this is resolved.

Last Update: A few months ago

Slow clone times from GitHub for MacOS jobs

Jan 7, 21:46 UTC Identified - We are aware of a network speed degradation on our macOS executors particularly impacting clone times from GitHub. We are working with our datacenter provider to resolve this issue as soon as possible. You can subscribe to this incident on our status page for future updates until this is resolved.

Last Update: A few months ago

Workflows delayed

Jan 7, 11:00 UTC Resolved - Workflows which attempted to run between 10:57 UTC and 11:20 UTC were delayed from running. This is resolved and all workflows should now be running as expected.

Last Update: A few months ago

500 errors encountered during Artifacts upload/retrieval. API also impacted.

Dec 31, 13:30 UTC Resolved - From 13:47 UTC to 14:38 UTC, some users encountered 5xx errors in their builds during artifact upload or while attempting to download artifacts. Also during this time, some users encountered issues while attempting to access our API. A fix was deployed at 14:38 UTC on December 31, 2019, and all systems are fully operational.

Last Update: A few months ago

Configuration parsing failure

Dec 18, 11:35 UTC Resolved - Due to a version incompatibility between internal services, parsing of CircleCI configuration failed for all pipelines between 11:35 and 11:48. Any pipelines that failed will need to be re-run via a fresh commit or the pipeline trigger API (see the sketch below).

Last Update: A few months ago
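
As noted above, a failed pipeline can be re-run either by pushing a fresh commit or through the pipeline trigger API. A minimal sketch of the API route follows, assuming Python with the requests library; the project slug, branch, and token variable are placeholders.

    import os
    import requests  # third-party HTTP client, assumed installed

    # Placeholder values -- substitute your own VCS provider, org, repo, and token.
    PROJECT_SLUG = "gh/example-org/example-repo"
    TOKEN = os.environ["CIRCLE_TOKEN"]

    # POST /api/v2/project/{project-slug}/pipeline triggers a new pipeline
    # on the given branch, much like pushing a fresh commit would.
    resp = requests.post(
        f"https://circleci.com/api/v2/project/{PROJECT_SLUG}/pipeline",
        headers={"Circle-Token": TOKEN},
        json={"branch": "main"},
    )
    resp.raise_for_status()
    print(resp.json()["id"])  # ID of the newly created pipeline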

Configuration parsing failure

Dec 18, 11:30 UTC Resolved - Due to a version incompatibility between internal services, parsing of CircleCI configuration failed for all builds between 11:35 and 11:48.

Last Update: A few months ago

GitHub Webhooks and API outage

Dec 13, 17:58 UTC Resolved - GitHub interactions have returned to normal.Dec 13, 17:37 UTC Monitoring - We are continuing to monitor GitHub services, which are returning to normal.Dec 13, 17:18 UTC Update - GitHub is continuing their efforts, which can be tracked on their status page: https://www.githubstatus.com/incidents/4g79klqzy99s. Git services are affected, as well as API calls, which may cause builds to fail.Dec 13, 16:52 UTC Identified - We are seeing a significant drop in webhooks being sent to us by GitHub. We are also seeing failures when attempting to set commit statuses or GitHub Checks via their API.

Last Update: A few months ago

GitHub Webhooks and API outage

Dec 13, 17:37 UTC Monitoring - We are continuing to monitor GitHub services, which are returning to normal.Dec 13, 17:18 UTC Update - GitHub is continuing their efforts, which can be tracked on their status page: https://www.githubstatus.com/incidents/4g79klqzy99s. Git services are affected, as well as API calls, which may cause builds to fail.Dec 13, 16:52 UTC Identified - We are seeing a significant drop in webhooks being sent to us by GitHub. We are also seeing failures when attempting to set commit statuses or GitHub Checks via their API.

Last Update: A few months ago

GitHub Webhooks and API outage

Dec 13, 17:18 UTC Update - GitHub is continuing their efforts, which can be tracked on their status page: https://www.githubstatus.com/incidents/4g79klqzy99s. Git services are affected, as well as API calls, which may cause builds to fail.Dec 13, 16:52 UTC Identified - We are seeing a significant drop in webhooks being sent to us by GitHub. We are also seeing failures when attempting to set commit statuses or GitHub Checks via their API.

Last Update: A few months ago

GitHub Webhooks and API outage

Dec 13, 16:52 UTC Identified - We are seeing a significant drop in webhooks being sent to us by GitHub. We are also seeing failures when attempting to set commit statuses or GitHub Checks via their API.

Last Update: A few months ago

GitHub Webhooks and API outage

Dec 13, 16:52 UTC Identified - We are seeing a significant drop in webhooks being sent to us by GitHub. We are also seeing failures when attempting to set commit statuses or GitHub Checks via their API.

Last Update: A few months ago

MacOS Network Maintenance

Dec 6, 01:33 UTC Completed - The scheduled maintenance has been completed.Dec 6, 01:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Dec 5, 21:19 UTC Scheduled - We will conduct network maintenance to expand the capacity of our MacOS environment. No user impact is anticipated.

Last Update: A few months ago

MacOS Network Maintenance

Dec 6, 01:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Dec 5, 21:19 UTC Scheduled - We will conduct network maintenance to expand the capacity of our MacOS environment. No user impact is anticipated.

Last Update: A few months ago

MacOS Network Maintenance

THIS IS A SCHEDULED EVENT Dec 6, 01:00 - 02:00 UTC Dec 5, 21:19 UTC Scheduled - We will conduct network maintenance to expand the capacity of our MacOS environment. No user impact is anticipated.

Last Update: A few months ago

An error with cache restores may have caused jobs to fail

Dec 5, 15:00 UTC Resolved - Between 14:49 and 15:16 UTC, jobs may have failed due to an error with cache restores. We have identified the issue and implemented a fix.

Last Update: A few months ago

Workflows are not being started

Dec 4, 20:35 UTC Resolved - This incident has been resolved.Dec 4, 20:16 UTC Monitoring - A fix has been implemented and we are monitoring the results.Dec 4, 19:46 UTC Update - We are continuing to investigate this issue.Dec 4, 19:10 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Workflows are not being started

Dec 4, 20:16 UTC Monitoring - A fix has been implemented and we are monitoring the results.Dec 4, 19:46 UTC Update - We are continuing to investigate this issue.Dec 4, 19:10 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Workflows are not being started

Dec 4, 19:46 UTC Update - We are continuing to investigate this issue.Dec 4, 19:10 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Workflows are not being started

Dec 4, 19:10 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Workflows are currently not running and Workflows UI is unavailable

Nov 25, 16:12 UTC Resolved - This incident has been resolved, workflows and UI have returned to normal.Nov 25, 16:01 UTC Update - We are continuing to monitor for any further issues.Nov 25, 15:56 UTC Monitoring - A fix has been implemented and we are monitoring the results.Nov 25, 15:53 UTC Identified - We have identified the issue and are currently working on a fix.Nov 25, 15:40 UTC Update - We're still investigating several issues. Workflows are not running. The workflows UI is unavailable. v2 API endpoints that interact with Workflows are not working.Nov 25, 15:06 UTC Investigating - We're investigating an issue where workflows are unable to start, and the workflows UI is unavailable.

Last Update: A few months ago

Workflows are currently not running and Workflows UI is unavailable

Nov 25, 16:01 UTC Update - We are continuing to monitor for any further issues.Nov 25, 15:56 UTC Monitoring - A fix has been implemented and we are monitoring the results.Nov 25, 15:53 UTC Identified - We have identified the issue and are currently working on a fix.Nov 25, 15:40 UTC Update - We're still investigating several issues. Workflows are not running. The workflows UI is unavailable. v2 API endpoints that interact with Workflows are not working.Nov 25, 15:06 UTC Investigating - We're investigating an issue where workflows are unable to start, and the workflows UI is unavailable.

Last Update: A few months ago

Workflows are currently not running and Workflows UI is unavailable

Nov 25, 15:56 UTC Monitoring - A fix has been implemented and we are monitoring the results.Nov 25, 15:53 UTC Identified - We have identified the issue and are currently working on a fix.Nov 25, 15:40 UTC Update - We're still investigating several issues. Workflows are not running. The workflows UI is unavailable. v2 API endpoints that interact with Workflows are not working.Nov 25, 15:06 UTC Investigating - We're investigating an issue where workflows are unable to start, and the workflows UI is unavailable.

Last Update: A few months ago

Workflows are currently not running and Workflows UI is unavailable

Nov 25, 15:53 UTC Identified - We have identified the issue and are currently working on a fix.Nov 25, 15:40 UTC Update - We're still investigating several issues. Workflows are not running. The workflows UI is unavailable. v2 API endpoints that interact with Workflows are not working.Nov 25, 15:06 UTC Investigating - We're investigating an issue where workflows are unable to start, and the workflows UI is unavailable.

Last Update: A few months ago

Workflows are currently not running and Workflows UI is unavailable

Nov 25, 15:40 UTC Update - We're still investigating several issues. Workflows are not running. The workflows UI is unavailable. v2 API endpoints that interact with Workflows are not working.Nov 25, 15:06 UTC Investigating - We're investigating an issue where workflows are unable to start, and the workflows UI is unavailable.

Last Update: A few months ago

Workflows are currently not running and Workflows UI is unavailable

Nov 25, 15:06 UTC Investigating - We're investigating an issue where workflows are unable to start, and the workflows UI is unavailable.

Last Update: A few months ago

MacOS Builds Queueing

Nov 22, 00:55 UTC Resolved - This incident has been resolved, all queued macOS jobs are back at regular levels.Nov 22, 00:44 UTC Update - A fix has been implemented regarding some queued macOS jobs and we are monitoring the results.Nov 22, 00:09 UTC Monitoring - A fix has been implemented regarding some queued macOS jobs and we are monitoring the results.Nov 21, 23:30 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 22:46 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 22:25 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:57 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:38 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:16 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 22, 00:44 UTC Update - A fix has been implemented regarding some queued macOS jobs and we are monitoring the results.Nov 22, 00:09 UTC Monitoring - A fix has been implemented regarding some queued macOS jobs and we are monitoring the results.Nov 21, 23:30 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 22:46 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 22:25 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:57 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:38 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:16 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 22, 00:09 UTC Monitoring - A fix has been implemented regarding some queued macOS jobs and we are monitoring the results.Nov 21, 23:30 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 22:46 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 22:25 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:57 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:38 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:16 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 23:30 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 22:46 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 22:25 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:57 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:38 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:16 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 22:46 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 22:25 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:57 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:38 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:16 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 22:25 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:57 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:38 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:16 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 21:57 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:38 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:16 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 21:38 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 21:16 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 21:16 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 20:55 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 20:32 UTC Update - Our engineers are continuing to investigate the issue of MacOS builds queueing due to "waiting for VM assignment".Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 19:48 UTC Update - We are continuing to investigate this issue.Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

MacOS Builds Queueing

Nov 21, 19:16 UTC Investigating - We are currently investigating an issue where MacOS builds are queueing for some customers.

Last Update: A few months ago

Builds Queueing

Nov 19, 01:32 UTC Resolved - This incident has been resolved. All our systems are fully operational.Nov 19, 01:12 UTC Monitoring - A fix has been implemented and we are currently monitoring the results.Nov 19, 01:04 UTC Identified - We've identified an issue with memory usage in our RabbitMQ cluster. We've disabled a plugin causing excessive memory usage and all systems are currently operational.Nov 19, 00:53 UTC Update - We are continuing to investigate this issue.Nov 19, 00:30 UTC Investigating - We are continuing our investigation into the issue of Builds Queueing.Nov 19, 00:09 UTC Identified - The issue has been identified and a fix has been implemented.Nov 18, 23:49 UTC Investigating - We are currently investigating an issue with builds queueing.

Last Update: A few months ago

Builds Queueing

Nov 19, 01:12 UTC Monitoring - A fix has been implemented and we are currently monitoring the results.Nov 19, 01:04 UTC Identified - We've identified an issue with memory usage in our RabbitMQ cluster. We've disabled a plugin causing excessive memory usage and all systems are currently operational.Nov 19, 00:53 UTC Update - We are continuing to investigate this issue.Nov 19, 00:30 UTC Investigating - We are continuing our investigation into the issue of Builds Queueing.Nov 19, 00:09 UTC Identified - The issue has been identified and a fix has been implemented.Nov 18, 23:49 UTC Investigating - We are currently investigating an issue with builds queueing.

Last Update: A few months ago

Builds Queueing

Nov 19, 01:04 UTC Identified - We've identified an issue with memory usage in our RabbitMQ cluster. We've disabled a plugin causing excessive memory usage and all systems are currently operational.Nov 19, 00:53 UTC Update - We are continuing to investigate this issue.Nov 19, 00:30 UTC Investigating - We are continuing our investigation into the issue of Builds Queueing.Nov 19, 00:09 UTC Identified - The issue has been identified and a fix has been implemented.Nov 18, 23:49 UTC Investigating - We are currently investigating an issue with builds queueing.

Last Update: A few months ago

Builds Queueing

Nov 19, 00:53 UTC Update - We are continuing to investigate this issue.Nov 19, 00:30 UTC Investigating - We are continuing our investigation into the issue of Builds Queueing.Nov 19, 00:09 UTC Identified - The issue has been identified and a fix has been implemented.Nov 18, 23:49 UTC Investigating - We are currently investigating an issue with builds queueing.

Last Update: A few months ago

Builds Queueing

Nov 19, 00:30 UTC Investigating - We are continuing our investigation into the issue of Builds Queueing.Nov 19, 00:09 UTC Identified - The issue has been identified and a fix has been implemented.Nov 18, 23:49 UTC Investigating - We are currently investigating an issue with builds queueing.

Last Update: A few months ago

Builds Queueing

Nov 19, 00:09 UTC Identified - The issue has been identified and a fix has been implemented.Nov 18, 23:49 UTC Investigating - We are currently investigating an issue with builds queueing.

Last Update: A few months ago

Builds Queueing

Nov 18, 23:49 UTC Investigating - We are currently investigating an issue with builds queueing.

Last Update: A few months ago

502 Error when Downloading Artifacts

Nov 12, 14:16 UTC Resolved - We have resolved this incident and artifact downloads have returned to normal.Nov 12, 14:05 UTC Monitoring - A fix has been implemented and we are monitoring the results.Nov 12, 14:00 UTC Identified - We have scaled up to better handle the high request volume for artifact downloads.Nov 12, 13:42 UTC Update - Our investigation into the root of this issue is ongoing.Nov 12, 13:19 UTC Update - Our engineers are continuing to investigate this issue.Nov 12, 12:55 UTC Investigating - We have become aware of errors when downloading job artifacts and are investigating.

Last Update: A few months ago
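
Transient 5xx responses like the ones described above can often be absorbed client-side by retrying with exponential backoff. Here is a minimal sketch against the v2 artifacts endpoint, assuming Python with the requests library; the project slug, job number, and token variable are placeholders.

    import os
    import time
    import requests  # third-party HTTP client, assumed installed

    # Placeholder values -- substitute a real project slug, job number, and token.
    PROJECT_SLUG = "gh/example-org/example-repo"
    JOB_NUMBER = 123
    TOKEN = os.environ["CIRCLE_TOKEN"]

    def get_with_retry(url, attempts=5):
        """GET a URL, backing off and retrying on 5xx responses."""
        for attempt in range(attempts):
            resp = requests.get(url, headers={"Circle-Token": TOKEN})
            if resp.status_code < 500:
                resp.raise_for_status()  # surface 4xx errors immediately
                return resp
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... between attempts
        resp.raise_for_status()  # every attempt returned 5xx; raise the last one

    # GET /api/v2/project/{project-slug}/{job-number}/artifacts lists a job's artifacts.
    listing = get_with_retry(
        f"https://circleci.com/api/v2/project/{PROJECT_SLUG}/{JOB_NUMBER}/artifacts"
    ).json()
    for item in listing["items"]:
        data = get_with_retry(item["url"]).content  # download each artifact
        print(item["path"], len(data), "bytes")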

502 Error when Downloading Artifacts

Nov 12, 14:05 UTC Monitoring - A fix has been implemented and we are monitoring the results.Nov 12, 14:00 UTC Identified - We have scaled up to better handle the high request volume for artifact downloads.Nov 12, 13:42 UTC Update - Our investigation into the root of this issue is ongoing.Nov 12, 13:19 UTC Update - Our engineers are continuing to investigate this issue.Nov 12, 12:55 UTC Investigating - We have become aware of errors when downloading job artifacts and are investigating.

Last Update: A few months ago

502 Error when Downloading Artifacts

Nov 12, 14:00 UTC Identified - We have scaled up to better handle the high request volume for artifact downloads.Nov 12, 13:42 UTC Update - Our investigation into the root of this issue is ongoing.Nov 12, 13:19 UTC Update - Our engineers are continuing to investigate this issue.Nov 12, 12:55 UTC Investigating - We have become aware of errors when downloading job artifacts and are investigating.

Last Update: A few months ago

502 Error when Downloading Artifacts

Nov 12, 13:42 UTC Update - Our investigation into the root of this issue is ongoing.Nov 12, 13:19 UTC Update - Our engineers are continuing to investigate this issue.Nov 12, 12:55 UTC Investigating - We have become aware of errors when downloading job artifacts and are investigating.

Last Update: A few months ago

502 Error when Downloading Artifacts

Nov 12, 13:19 UTC Update - Our engineers are continuing to investigate this issue.Nov 12, 12:55 UTC Investigating - We have become aware of errors when downloading job artifacts and are investigating.

Last Update: A few months ago

502 Error when Downloading Artifacts

Nov 12, 12:55 UTC Investigating - We have become aware of errors when downloading job artifacts and are investigating.

Last Update: A few months ago

Delays in starting Machine Executor jobs

Nov 11, 11:00 UTC Resolved - Machine executor jobs may have been delayed or aborted between 10:46 and 11:04 UTC.

Last Update: A few months ago

MacOS VM requests are failing to process

Oct 31, 21:08 UTC Resolved - This incident has been resolved and all macOS jobs have returned to normal.Oct 31, 18:49 UTC Monitoring - A fix has been implemented and we are monitoring the results.Oct 31, 18:30 UTC Identified - The issue has been identified and a fix is being implemented.Oct 31, 17:59 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

MacOS VM requests are failing to process

Oct 31, 18:49 UTC Monitoring - A fix has been implemented and we are monitoring the results.Oct 31, 18:30 UTC Identified - The issue has been identified and a fix is being implemented.Oct 31, 17:59 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

MacOS VM requests are failing to process

Oct 31, 18:30 UTC Identified - The issue has been identified and a fix is being implemented.Oct 31, 17:59 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

MacOS VM requests are failing to process

Oct 31, 17:59 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

504 Gateway Timeout Errors

Oct 29, 18:44 UTC Resolved - This incident has been resolved.Oct 29, 18:30 UTC Update - We are continuing to monitor for any further issues.Oct 29, 18:09 UTC Update - We are continuing to monitor the results of our fix.Oct 29, 17:35 UTC Monitoring - A fix has been implemented and we are monitoring the results.Oct 29, 17:14 UTC Update - Just an update here - We are continuing our investigation on identifying the cause of 504 Gateway Timeout errors.Oct 29, 16:41 UTC Update - We are continuing to investigate the 504 Gateway Timeout errors.Oct 29, 16:07 UTC Update - Our engineers are continuing to investigate the cause of the 504 Gateway Timeout errors. Some users may experience slow loading of the UI when viewing Workflows or Build history.Oct 29, 15:40 UTC Investigating - We are currently investigating 504 Gateway Timeout Errors while attempting to view Build and Workflow history.

Last Update: A few months ago

504 Gateway Timeout Errors

Oct 29, 18:30 UTC Update - We are continuing to monitor for any further issues.Oct 29, 18:09 UTC Update - We are continuing to monitor the results of our fix.Oct 29, 17:35 UTC Monitoring - A fix has been implemented and we are monitoring the results.Oct 29, 17:14 UTC Update - Just an update here - We are continuing our investigation on identifying the cause of 504 Gateway Timeout errors.Oct 29, 16:41 UTC Update - We are continuing to investigate the 504 Gateway Timeout errors.Oct 29, 16:07 UTC Update - Our engineers are continuing to investigate the cause of the 504 Gateway Timeout errors. Some users may experience slow loading of the UI when viewing Workflows or Build history.Oct 29, 15:40 UTC Investigating - We are currently investigating 504 Gateway Timeout Errors while attempting to view Build and Workflow history.

Last Update: A few months ago

504 Gateway Timeout Errors

Oct 29, 18:09 UTC Update - We are continuing to monitor the results of our fix.Oct 29, 17:35 UTC Monitoring - A fix has been implemented and we are monitoring the results.Oct 29, 17:14 UTC Update - Just an update here - We are continuing our investigation on identifying the cause of 504 Gateway Timeout errors.Oct 29, 16:41 UTC Update - We are continuing to investigate the 504 Gateway Timeout errors.Oct 29, 16:07 UTC Update - Our engineers are continuing to investigate the cause of the 504 Gateway Timeout errors. Some users may experience slow loading of the UI when viewing Workflows or Build history.Oct 29, 15:40 UTC Investigating - We are currently investigating 504 Gateway Timeout Errors while attempting to view Build and Workflow history.

Last Update: A few months ago

504 Gateway Timeout Errors

Oct 29, 18:09 UTC Update - We will continue monitoring this.Oct 29, 17:35 UTC Monitoring - A fix has been implemented and we are monitoring the results.Oct 29, 17:14 UTC Update - Just an update here - We are continuing our investigation on identifying the cause of 504 Gateway Timeout errors.Oct 29, 16:41 UTC Update - We are continuing to investigate the 504 Gateway Timeout errors.Oct 29, 16:07 UTC Update - Our engineers are continuing to investigate the cause of the 504 Gateway Timeout errors. Some users may experience slow loading of the UI when viewing Workflows or Build history.Oct 29, 15:40 UTC Investigating - We are currently investigating 504 Gateway Timeout Errors while attempting to view Build and Workflow history.

Last Update: A few months ago

504 Gateway Timeout Errors

Oct 29, 17:35 UTC Monitoring - A fix has been implemented and we are monitoring the results.Oct 29, 17:14 UTC Update - Just an update here - We are continuing our investigation on identifying the cause of 504 Gateway Timeout errors.Oct 29, 16:41 UTC Update - We are continuing to investigate the 504 Gateway Timeout errors.Oct 29, 16:07 UTC Update - Our engineers are continuing to investigate the cause of the 504 Gateway Timeout errors. Some users may experience slow loading of the UI when viewing Workflows or Build history.Oct 29, 15:40 UTC Investigating - We are currently investigating 504 Gateway Timeout Errors while attempting to view Build and Workflow history.

Last Update: A few months ago

504 Gateway Timeout Errors

Oct 29, 17:14 UTC Update - Just an update here - We are continuing our investigation on identifying the cause of 504 Gateway Timeout errors.Oct 29, 16:41 UTC Update - We are continuing to investigate the 504 Gateway Timeout errors.Oct 29, 16:07 UTC Update - Our engineers are continuing to investigate the cause of the 504 Gateway Timeout errors. Some users may experience slow loading of the UI when viewing Workflows or Build history.Oct 29, 15:40 UTC Investigating - We are currently investigating 504 Gateway Timeout Errors while attempting to view Build and Workflow history.

Last Update: A few months ago

504 Gateway Timeout Errors

Oct 29, 16:41 UTC Update - We are continuing to investigate the 504 Gateway Timeout errors.Oct 29, 16:07 UTC Update - Our engineers are continuing to investigate the cause of the 504 Gateway Timeout errors. Some users may experience slow loading of the UI when viewing Workflows or Build history.Oct 29, 15:40 UTC Investigating - We are currently investigating 504 Gateway Timeout Errors while attempting to view Build and Workflow history.

Last Update: A few months ago

504 Gateway Timeout Errors

Oct 29, 16:07 UTC Update - Our engineers are continuing to investigate the cause of the 504 Gateway Timeout errors. Some users may experience slow loading of the UI when viewing Workflows or Build history.Oct 29, 15:40 UTC Investigating - We are currently investigating 504 Gateway Timeout Errors while attempting to view Build and Workflow history.

Last Update: A few months ago

504 Gateway Timeout Errors

Oct 29, 15:40 UTC Investigating - We are currently investigating 504 Gateway Timeout Errors while attempting to view Build and Workflow history.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 23:33 UTC Resolved - Our job queue has returned to normal levels, our engineers are working to clear our backlog for macOS jobs.Oct 28, 23:09 UTC Monitoring - Our backlog is now at normal levels. Due to limited capacity, our macOS jobs will experience delays due to backlogged jobs.Oct 28, 23:05 UTC Update - Our engineers have almost cleared our backlog, but due to limited capacity, our macOS jobs will still experience issues due to backlogged jobs.Oct 28, 22:17 UTC Update - Our engineers are currently working through the backlog and are actively working on improvements in builds queueing.Oct 28, 21:54 UTC Update - Our engineers have improved the throughput of our systems and are now working through the backlog more effectively.Oct 28, 21:16 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:40 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:18 UTC Identified - We have identified the issue and our engineers are currently deploying a fix. The backlog of builds queued is beginning to clear.Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 23:09 UTC Monitoring - Our backlog is now at normal levels. Due to limited capacity, our macOS jobs will experience delays due to backlogged jobs.Oct 28, 23:05 UTC Update - Our engineers have almost cleared our backlog, but due to limited capacity, our macOS jobs will still experience issues due to backlogged jobs.Oct 28, 22:17 UTC Update - Our engineers are currently working through the backlog and are actively working on improvements in builds queueing.Oct 28, 21:54 UTC Update - Our engineers have improved the throughput of our systems and are now working through the backlog more effectively.Oct 28, 21:16 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:40 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:18 UTC Identified - We have identified the issue and our engineers are currently deploying a fix. The backlog of builds queued is beginning to clear.Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 23:05 UTC Update - Our engineers have almost cleared our backlog, but due to limited capacity, our macOS jobs will still experience issues due to backlogged jobs.Oct 28, 22:17 UTC Update - Our engineers are currently working through the backlog and are actively working on improvements in builds queueing.Oct 28, 21:54 UTC Update - Our engineers have improved the throughput of our systems and are now working through the backlog more effectively.Oct 28, 21:16 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:40 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:18 UTC Identified - We have identified the issue and our engineers are currently deploying a fix. The backlog of builds queued is beginning to clear.Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 23:05 UTC Update - Our engineers have almost cleared our backlog, but due to limited capacity, our macOS jobs will still experience issues with backlogged jobs.Oct 28, 22:17 UTC Update - Our engineers are currently working through the backlog and are actively working on improvements in builds queueing.Oct 28, 21:54 UTC Update - Our engineers have improved the throughput of our systems and are now working through the backlog more effectively.Oct 28, 21:16 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:40 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:18 UTC Identified - We have identified the issue and our engineers are currently deploying a fix. The backlog of builds queued is beginning to clear.Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 22:17 UTC Update - Our engineers are currently working through the backlog and are actively working on improvements in builds queueing.Oct 28, 21:54 UTC Update - Our engineers have improved the throughput of our systems and are now working through the backlog more effectively.Oct 28, 21:16 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:40 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:18 UTC Identified - We have identified the issue and our engineers are currently deploying a fix. The backlog of builds queued is beginning to clear.Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 21:54 UTC Update - Our engineers have improved the throughput of our systems and are now working through the backlog more effectively.Oct 28, 21:16 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:40 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:18 UTC Identified - We have identified the issue and our engineers are currently deploying a fix. The backlog of builds queued is beginning to clear.Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 21:16 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:40 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:18 UTC Identified - We have identified the issue and our engineers are currently deploying a fix. The backlog of builds queued is beginning to clear.Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 20:40 UTC Update - Our engineers are still working to resolve this incident. Our systems are working through the backlog of queued builds.Oct 28, 20:18 UTC Identified - We have identified the issue and our engineers are currently deploying a fix. The backlog of builds queued is beginning to clear.Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 20:18 UTC Identified - We have identified the issue and our engineers are currently deploying a fix. The backlog of builds queued is beginning to clear.Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 19:33 UTC Update - We are continuing to investigate. We have marked all potentially impacted components accordingly on our status page.Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 19:05 UTC Update - We are still investigating the cause of build queuing. Our engineers have stabilized our systems and some customers are able to see their builds running now. We will continue to update our status page.Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI - Builds Queueing

Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

502 errors in UI

Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

Some users unable to view build history or dashboard

Oct 28, 18:14 UTC Update - Our engineers are continuing to investigate why builds are queueing.Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

Some users unable to view build history or dashboard

Oct 28, 17:50 UTC Investigating - Some users are unable to access our UI. We are currently investigating.

Last Update: A few months ago

CircleCI Web UI partial outage

Oct 28, 14:42 UTC Resolved - This incident has been resolved.Oct 28, 14:31 UTC Monitoring - A change has been deployed and the issue should be resolved. Web logins should be operational for all users.Oct 28, 14:08 UTC Investigating - Some users trying to log in to CircleCI may experience errors. We're currently investigating this.

Last Update: A few months ago

Some users unable to load login page

Oct 25, 19:30 UTC Resolved - Between 19:12 UTC and 19:19 UTC, some users may have experienced a "failed to load" error while visiting the app.circleci.com login page. This has been resolved, and app.circleci.com is fully operational as of 19:19 UTC.

Last Update: A few months ago

Machine and Remote Docker Provisioning Delays

Oct 24, 19:43 UTC Resolved - This incident has been resolved.Oct 24, 19:23 UTC Monitoring - A fix has been implemented and we are monitoring the results.Oct 24, 18:56 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Machine and Remote Docker Provisioning Delays

Oct 23, 22:48 UTC Resolved - This incident has been resolved.Oct 23, 21:59 UTC Monitoring - A fix has been implemented and we are currently monitoring results.Oct 23, 21:32 UTC Identified - The issue has been identified and a fix is being implemented.Oct 23, 21:12 UTC Update - We are continuing to investigate this issue.Oct 23, 20:51 UTC Update - We are continuing to investigate this issue.Oct 23, 20:43 UTC Investigating - Some machine and remote-docker builds are queueing. We are currently investigating this issue.

Last Update: A few months ago

Drop in number of running workflows

Oct 22, 23:25 UTC Resolved - The incident has been resolved and running workflows are back to normal.Oct 22, 23:13 UTC Monitoring - A fix has been implemented and our engineers are monitoring the results.Oct 22, 23:06 UTC Identified - Our engineers have identified the issue and a fix is being implemented.Oct 22, 22:52 UTC Investigating - At 22:32 UTC we experienced a large drop in the number of running workflows; we are currently investigating this issue.

Last Update: A few months ago

New Onboarding & Pipelines Experiences inaccessible for some users

Oct 22, 20:27 UTC Resolved - This incident has been resolved. app.circleci.com, account.circleci.com, and onboarding.circleci.com are fully operational for all users.Oct 22, 20:16 UTC Monitoring - Our engineers have deployed a fix and are currently monitoring the results. Any UI loading errors encountered while viewing app.circleci.com, account.circleci.com, and onboarding.circleci.com are resolving successfully. Some users may need to refresh their screens to see pages load.Oct 22, 19:50 UTC Identified - The issue has been identified and we are working on a fix.

Last Update: A few months ago

MacOS network maintenance

Oct 6, 14:41 UTC Completed - The scheduled maintenance has been completed.Oct 6, 14:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Oct 2, 17:16 UTC Scheduled - We will conduct network maintenance to expand the capacity of our MacOS network. No downtime is anticipated.

Last Update: A few months ago

Login Errors

Oct 2, 15:44 UTC Resolved - Some users experienced issues logging in to CircleCI starting at 15:44 UTC; a fix was deployed at 16:09 UTC.

Last Update: A few months ago

Upstream billing issues for some customers

Oct 1, 23:10 UTC Resolved - This incident has been resolved.Oct 1, 19:52 UTC Monitoring - We're experiencing upstream issues with our billing provider (see http://trust.zuora.com), which may impact some customers. You may see a "no credits available" message in the UI, but you will still be able to continue running jobs. If you have any questions, our support team is available and ready to help: https://support.circleci.com/hc/en-us/requests/new?ticket_form_id=428787

Last Update: A few months ago

Machine and Remote Docker Provisioning Delays

Sep 26, 18:59 UTC Resolved - This incident has been resolved. Machine & remote-docker builds are fully operational at this time.Sep 26, 18:46 UTC Monitoring - A fix has been implemented and machine & remote-docker builds are no longer queueing. We are currently monitoring the results.Sep 26, 18:34 UTC Update - Queue times have improved. We are continuing to investigate this issue.Sep 26, 18:08 UTC Investigating - Some machine and remote-docker builds are queueing. We are currently investigating this issue.

Last Update: A few months ago

Builds continually running until timeout

Sep 13, 14:30 UTC Resolved - On September 12th we began rolling out Secret Masking to accounts. We discovered that this blocked output in builds, causing jobs to run continually until the max timeout was met. Secret Masking was enabled on 2% of builds. As of September 13th at 13:30 UTC, Secret Masking has been disabled until we have a fix for this issue.

Last Update: A few months ago

Step output is missing

Sep 9, 04:51 UTC Resolved - Step log output is working again. We became aware that log output was missing at 03:00 UTC, and we will perform a full postmortem to determine the exact time window.Sep 9, 04:22 UTC Investigating - We are currently investigating a problem where log output from steps is not being displayed in the UI after a page refresh.

Last Update: A few months ago

Delays in GitHub Checks and workflows

Aug 31, 14:37 UTC Resolved - This incident has been resolved.Aug 31, 14:34 UTC Update - We are continuing to monitor for any further issues.Aug 31, 14:25 UTC Monitoring - Workflows are running normally again. We are currently monitoring connectivity.Aug 31, 13:56 UTC Update - We're still investigating delays with workflows. Next update in 20 minutes.Aug 31, 13:27 UTC Update - GitHub Checks have resumed normal operation.Aug 31, 13:13 UTC Investigating - We're investigating delays in GitHub Checks and workflows.

Last Update: A few months ago

[Scheduled] Database Maintenance

Aug 28, 14:00 UTC Completed - The scheduled maintenance has been completed.Aug 28, 13:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Aug 27, 23:17 UTC Scheduled - We've scheduled database maintenance for 13:00-14:00 UTC on August 28, 2019. We will be working on our databases, and while we do not anticipate any significant downtime, we want you to be aware of this event.

Last Update: A few months ago

Persistence of Workspace between Jobs Not Functioning

Aug 21, 15:03 UTC Resolved - On Wed Aug 21st at 15:03 UTC we found an issue with a recent deployment that impacted workspace persistence. Specifically, the persist_to_workspace and attach_workspace steps were not working. At 15:35 UTC we rolled back to a stable state, and workspace persistence is fully operational as of 15:35 UTC. Any workflow that failed will need to be rerun from the beginning.

Last Update: A few months ago

Increased wait times for machine executor jobs

Aug 19, 15:34 UTC Resolved - Machine executor jobs have resumed normal operation.Aug 19, 15:20 UTC Monitoring - A fix has been deployed and machine executor allocation times should be back to normal.Aug 19, 14:35 UTC Investigating - We are seeing another increase in apt errors, which is now causing all VMs to fail to boot.Aug 19, 13:56 UTC Monitoring - We have scaled up to deal with the rate of errors.Aug 19, 13:29 UTC Identified - The issue has been identified and a fix is being implemented.Aug 19, 13:28 UTC Investigating - Due to an increase in failures connecting to the apt repositories, we are seeing increased wait times for Machine builds.

Last Update: A few months ago

Reads and Writes of Build Output Logs are Currently Failing

Aug 14, 19:13 UTC Resolved - Build step has returned to normal; all systems are operational.Aug 14, 19:09 UTC Update - Build step has returned to normal; all systems are operational.Aug 14, 19:02 UTC Monitoring - We've identified the issue and scaled up to meet demand.Aug 14, 18:53 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Web authentication temporarily unavailable

Aug 13, 14:45 UTC Resolved - Authentication to the CircleCI app was unavailable for a period of 10 minutes.

Last Update: A few months ago

Small number of builds are queued

Jul 30, 19:55 UTC Resolved - This incident has been resolved. The backlog has been cleared and stabilized at normal levels.Jul 30, 19:37 UTC Monitoring - A fix has been implemented. We have now cleared the backlog of stuck jobs and we are currently monitoring the results.Jul 30, 19:30 UTC Investigating - A small number of builds have become stuck and queued for a long time. We will continue to update while investigating this.

Last Update: A few months ago

Machine and Remote Docker Provisioning Delays

Jul 18, 21:00 UTC Resolved - Some machine and Remote Docker jobs were delayed in starting due to provisioning errors.

Last Update: A few months ago

Failure to run macOS executor jobs

Jul 12, 09:16 UTC Resolved - MacOS provisioning times are back to normal.Jul 12, 08:57 UTC Monitoring - The issue has been identified. MacOS provisioning times have returned to normal and we will continue to monitor the macOS environment.Jul 12, 08:23 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Failure to start workflows

Jul 4, 09:24 UTC Resolved - This incident has been resolved.

Last Update: A few months ago

Beta UI for Jobs page unavailable

Jun 26, 17:38 UTC Resolved - We have rolled back a recent change made to the beta UI. Users who were affected by this should now be able to properly load all Job pages on the beta UI.Jun 26, 17:33 UTC Investigating - A subset of users who have opted into using our beta UI may not be able to access the jobs page currently. We are working to resolve this issue and will provide an update shortly.

Last Update: A few months ago

Increased API Errors & Webhook Failures

Jun 17, 19:02 UTC Resolved - Between 18:34 UTC and 18:46 UTC, we observed an increase in error rates in our API. This resulted in delays to workflow start times. A small number of webhook-triggered workflows also failed to start. Our systems have recovered and we are investigating potential causes.

Last Update: A few months ago

Database Maintenance

Jun 1, 21:00 UTC Completed - The scheduled maintenance has been completed.Jun 1, 20:35 UTC Verifying - We have completed the necessary maintenance and are now monitoring. We will continue to maintain capacity to work through the backlog.Jun 1, 20:25 UTC Update - Workflows processing has resumed and we have scaled up all of our infrastructure to manage the backlog of jobs.Jun 1, 19:20 UTC Update - All macOS, Machine Executor and Remote Docker jobs will not be starting during this period of maintenance.Jun 1, 19:09 UTC Update - Incoming webhooks are now being processed, but Workflow starts will be delayed until the completion of the scheduled maintenance.Jun 1, 18:38 UTC Update - Workflows maintenance is about to start. This will impact all Workflow starts; incoming webhooks should be queued, but we anticipate many will be dropped.Jun 1, 17:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.May 24, 21:04 UTC Scheduled - CircleCI will be performing database maintenance on Saturday, June 1, 2019, between the hours of 17:00 UTC and 21:00 UTC. During this time, we expect that there will be periods when jobs do not run, parts of the user interface are inaccessible, and webhooks may be dropped. If you initiate jobs during this period that do not run or do not finish, please retry them when the maintenance period is complete. While this is generally a low-volume build time, we realize that this may be an inconvenience for customers and apologize in advance for any disruption to your work.

Last Update: A few months ago

Billing credit refills may cause Workflows to not start

May 31, 22:48 UTC Resolved - We have corrected the billing issues for impacted customers. If you think you may have been impacted or are still impacted, please reach out to Support.May 31, 22:38 UTC Monitoring - All impacted customers have been fixed; we are monitoring to ensure no remaining customers are impacted.May 31, 22:14 UTC Update - We are identifying the changes necessary to fully implement our fix.May 31, 21:55 UTC Identified - We have discovered the issue that is causing some customers' workflows to stop after a credit refill.

Last Update: A few months ago

Machine Executor VMs

May 30, 01:00 UTC Resolved - We experienced a delay in the handling of Machine Executor VM requests.

Last Update: A few months ago

Remote Docker provisioning delays

May 13, 16:12 UTC Resolved - We are no longer seeing any delay in VM provisioning and consider this incident resolved.May 13, 16:00 UTC Monitoring - Our pre-scaling of images has worked to reduce the backlog and we are monitoring the situation.May 13, 15:50 UTC Update - Our pre-scaling of VMs is working to reduce the backlog.May 13, 15:38 UTC Update - We are continuing to scale up our VM services in additional zones to reduce delays.May 13, 15:26 UTC Update - We have begun scaling up our VM services in additional zones to reduce the current delays.May 13, 15:10 UTC Identified - We are experiencing delays provisioning machine executor jobs with our VM cloud partner. This may affect the Machine executor and the Remote Docker service. We are monitoring capacity and will provide updates when possible.

Last Update: A few months ago

Docker Hub Planned Maintenance

May 5, 02:16 UTC Completed - The scheduled maintenance has been completed.May 4, 16:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.May 3, 15:41 UTC Scheduled - Docker Hub will be performing scheduled maintenance this Saturday, May 4, 2019, from approximately 9:00 AM to 7:15 PM US Pacific Daylight Time (UTC-7). During this window, Docker Hub will be operating in read-only mode. Registry logins and image pulls will continue to work for the majority of this time frame; pushes, however, will generally be unavailable. Maintenance activities, approximate timelines, and an FAQ can be found in their knowledge base article, which will be updated throughout the maintenance window. Updates can be found at status.docker.com.

Last Update: A few months ago

UI Unavailable

Apr 25, 14:40 UTC Resolved - This issue is resolved.Apr 25, 14:19 UTC Monitoring - The CircleCI UI was unavailable from 14:10 to 14:14 UTC. The problem has already been remedied. We are monitoring our platform for stability and performance, and will post an update in 20 minutes.

Last Update: A few months ago

Webhook delivery disruption

Apr 19, 12:00 UTC Resolved - The outbound proxy we use as part of our notification service failed and was restarted. The monitors for that service failed to alert us, and we are looking into why. All webhook delivery issues should be resolved as of 17:40 UTC.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 21:25 UTC Resolved - We are continuing to work with our cloud partner to investigate the root cause of the incident. All VM services are now fully operational.Apr 10, 21:04 UTC Monitoring - Our cloud partner has identified and resolved the issue within their service. We are actively monitoring the situation.Apr 10, 20:50 UTC Update - Our cloud partner has identified and resolved the issue within their service. We are continuing to process a backlog of jobs delayed by this issue.Apr 10, 20:30 UTC Update - We are currently processing a backlog of jobs awaiting allocation on the Machine Executor and for Remote Docker jobs. We will provide more information shortly.Apr 10, 20:12 UTC Update - We're continuing to work through this issue with our upstream cloud provider, and will continue to provide updates as soon as they are available.Apr 10, 19:52 UTC Update - We're continuing to work through this issue with our upstream cloud provider, and will continue to provide updates as soon as they are available.Apr 10, 19:29 UTC Update - We're continuing to work closely with our cloud provider on this issue, and will provide an update as soon as one is available.Apr 10, 19:08 UTC Update - Our support ticket with our cloud provider has now been escalated to P0. We continue to work fervently toward a resolution.Apr 10, 18:55 UTC Update - We have a P1 ticket open with our cloud provider and are working on a resolution.Apr 10, 18:31 UTC Update - We have identified the cause of this issue and are working on a resolution.Apr 10, 18:08 UTC Identified - We have identified the cause of delayed Machine executor and Remote Docker services.Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services. We will provide updates shortly.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 20:30 UTC Update - We are currently processing a backlog of jobs currently awaiting allocation on the Machine Executor and for Remote Docker jobs.We will provide more information shortly.Apr 10, 20:12 UTC Update - We're continuing to work through this issue with our upstream cloud provider, and will continue to provide updates as soon as they are available.Apr 10, 19:52 UTC Update - We're continuing to work through this issue with our upstream cloud provider, and will continue to provide updates as soon as they are available.Apr 10, 19:29 UTC Update - We're continuing to work closely with our cloud provider on this issue, and will provide an update as soon as one is available.Apr 10, 19:08 UTC Update - Our support ticket with our cloud provider has now been escalated to P0. We continue to work fervently toward a resolution.Apr 10, 18:55 UTC Update - We have a P1 ticket open with our cloud providers and are working on a resolution.Apr 10, 18:31 UTC Update - We have identified the cause of this issue and are working on a resolution.Apr 10, 18:08 UTC Identified - We have identified the cause of delayed Machine executor and Remote Docker services.Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services.We will provide updates shortly.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 20:12 UTC Update - We're continuing to work through this issue with our upstream cloud provider, and will continue to provide updates as soon as they are available.Apr 10, 19:52 UTC Update - We're continuing to work through this issue with our upstream cloud provider, and will continue to provide updates as soon as they are available.Apr 10, 19:29 UTC Update - We're continuing to work closely with our cloud provider on this issue, and will provide an update as soon as one is available.Apr 10, 19:08 UTC Update - Our support ticket with our cloud provider has now been escalated to P0. We continue to work fervently toward a resolution.Apr 10, 18:55 UTC Update - We have a P1 ticket open with our cloud providers and are working on a resolution.Apr 10, 18:31 UTC Update - We have identified the cause of this issue and are working on a resolution.Apr 10, 18:08 UTC Identified - We have identified the cause of delayed Machine executor and Remote Docker services.Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services.We will provide updates shortly.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 19:52 UTC Update - We're continuing to work through this issue with our upstream cloud provider, and will continue to provide updates as soon as they are available.Apr 10, 19:29 UTC Update - We're continuing to work closely with our cloud provider on this issue, and will provide an update as soon as one is available.Apr 10, 19:08 UTC Update - Our support ticket with our cloud provider has now been escalated to P0. We continue to work fervently toward a resolution.Apr 10, 18:55 UTC Update - We have a P1 ticket open with our cloud providers and are working on a resolution.Apr 10, 18:31 UTC Update - We have identified the cause of this issue and are working on a resolution.Apr 10, 18:08 UTC Identified - We have identified the cause of delayed Machine executor and Remote Docker services.Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services.We will provide updates shortly.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 19:29 UTC Update - We're continuing to work closely with our cloud provider on this issue, and will provide an update as soon as one is available.Apr 10, 19:08 UTC Update - Our support ticket with our cloud provider has now been escalated to P0. We continue to work fervently toward a resolution.Apr 10, 18:55 UTC Update - We have a P1 ticket open with our cloud providers and are working on a resolution.Apr 10, 18:31 UTC Update - We have identified the cause of this issue and are working on a resolution.Apr 10, 18:08 UTC Identified - We have identified the cause of delayed Machine executor and Remote Docker services.Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services.We will provide updates shortly.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 19:08 UTC Update - Our support ticket with our cloud provider has now been escalated to P0. We continue to work fervently toward a resolution.Apr 10, 18:55 UTC Update - We have a P1 ticket open with our cloud providers and are working on a resolution.Apr 10, 18:31 UTC Update - We have identified the cause of this issue and are working on a resolution.Apr 10, 18:08 UTC Identified - We have identified the cause of delayed Machine executor and Remote Docker services.Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services.We will provide updates shortly.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 18:55 UTC Update - We have a P1 ticket open with our cloud providers and are working on a resolution.Apr 10, 18:31 UTC Update - We have identified the cause of this issue and are working on a resolution.Apr 10, 18:08 UTC Identified - We have identified the cause of delayed Machine executor and Remote Docker services.Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services.We will provide updates shortly.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 18:31 UTC Update - We have identified the cause of this issue and are working on a resolution.Apr 10, 18:08 UTC Identified - We have identified the cause of delayed Machine executor and Remote Docker services.Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services.We will provide updates shortly.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 18:08 UTC Identified - We have identified the cause of delayed Machine executor and Remote Docker services.Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services.We will provide updates shortly.

Last Update: A few months ago

Delays for Machine Executor and Remote Docker

Apr 10, 17:45 UTC Investigating - We are currently investigating delays affecting users utilizing the Machine executor and Remote Docker services.We will provide updates shortly.

Last Update: A few months ago

Delay in starting Workflows

Apr 10, 16:35 UTC Resolved - This incident is now resolved.Apr 10, 16:14 UTC Update - Delay times are improving and we are continuing to monitor the situation.Apr 10, 15:52 UTC Monitoring - We have completed our database repairs and delay times are improving. Workflows are no longer experiencing any increased delays. We are continuing to monitor the situation.Apr 10, 15:39 UTC Update - We have completed our database repairs and delay times are improving. Workflows are expected to be delayed for an average of 5 minutes.Apr 10, 15:18 UTC Update - We have increased our ability to process workflows and are scaling to meet demand. Workflows are expected to be delayed for an average time of around 18 minutes.Apr 10, 14:49 UTC Update - We are currently performing underlying database maintenance. Workflows are expected to be delayed for an average time of around 20 minutes.Apr 10, 14:28 UTC Update - We are currently performing underlying database maintenance. Workflows are expected to be delayed for an average time of around 18 minutes.Apr 10, 14:07 UTC Update - We are currently investigating longer than average queue times before workflows begin. Notifications and test results have been re-enabled. We will continue to provide updates.Apr 10, 13:47 UTC Update - We are currently investigating longer than average queue times before workflows begin. We are now running diagnostics. You may experience brief disruptions in notifications and test results. Notifications and test results will be re-enabled momentarily. We will continue to provide updates.Apr 10, 13:25 UTC Investigating - We are currently investigating longer than average queue times before workflows begin. We will update with more information as soon as possible.

Last Update: A few months ago

Delay in starting Workflows

Apr 10, 09:49 UTC Resolved - We are no longer seeing any delay in Workflows.Apr 10, 09:20 UTC Investigating - We are investigating delays of around 3 minutes in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 6, 01:11 UTC Resolved - We are now back to normal Workflow timings.Apr 6, 00:58 UTC Monitoring - We are no longer seeing any delay in Workflow starts and will monitor for 20 minutes.Apr 6, 00:21 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of up to 3 minutes in starting jobs is expected currently.Apr 6, 00:00 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of up to 2 minutes in starting jobs is expected currently.Apr 5, 23:28 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 23:05 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 22:44 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of up to 5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 22:27 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. We will provide more information shortly.Apr 5, 22:03 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 2 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 6, 00:58 UTC Monitoring - We are no longer seeing any delay in Workflow starts and will monitor for 20 minutesApr 6, 00:21 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 3 minutes in starting jobs is expected currently.Apr 6, 00:00 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 2 minutes in starting jobs is expected currently.Apr 5, 23:28 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 23:05 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 22:44 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 22:27 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 22:03 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 2 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. 
We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 6, 00:21 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 3 minutes in starting jobs is expected currently.Apr 6, 00:00 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 2 minutes in starting jobs is expected currently.Apr 5, 23:28 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 23:05 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 22:44 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 22:27 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 22:03 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 2 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. 
We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 6, 00:00 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 2 minutes in starting jobs is expected currently.Apr 5, 23:28 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 23:05 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 22:44 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 22:27 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 22:03 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 2 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 23:28 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 23:05 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 22:44 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 22:27 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 22:03 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 2 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 23:05 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 6 minutes in starting jobs is expected currently.Apr 5, 22:44 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 22:27 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 22:03 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 2 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 22:44 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of up to 5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 22:27 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 22:03 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 2 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 22:27 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 22:03 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 2 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 22:03 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 2 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 21:33 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 3 minutes 20 seconds in starting jobs is expected currently. We will provide more information shortly.Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 21:08 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 6.5 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 20:47 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 20:19 UTC Update - We are investigating delays in starting Workflows.We are currently investigating slower than average response times from our underlying data store.We will provide more information shortly.Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlaying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 19:58 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 19:46 UTC Update - We are investigating delays in starting Workflows. We are currently investigating slower than average response times from our underlying data store. An average delay of 10 minutes in starting jobs is expected currently. We will provide more information shortly.Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 19:20 UTC Update - We are investigating delays in starting Workflows.Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 18:58 UTC Update - We are continuing to investigate delayed Workflows.Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows are slow to start

Apr 5, 18:36 UTC Investigating - We are investigating delays in starting Workflows.

Last Update: A few months ago

Workflows UI failed to render

Apr 5, 11:04 UTC Resolved - Between 11:04 and 11:20 UTC, the workflows UI failed to render due to a bad deploy. The deploy was rolled back; processing of workflows was not impacted.

Last Update: A few months ago

Delay in starting Workflows

Apr 5, 00:54 UTC Resolved - We are no longer seeing any delay in Job starts.Apr 5, 00:25 UTC Update - We are no longer seeing any delay in starting Jobs and our latency is below 10 seconds.Apr 5, 00:15 UTC Update - Job start delay is currently at one minute and we are continuing to monitor.Apr 4, 23:56 UTC Monitoring - Job start delay is currently at 1.25 minutes and is continuing to improve.Apr 4, 23:34 UTC Update - Job start delay is currently at 3 minutes and improving.Apr 4, 23:28 UTC Update - There is currently a 6-minute delay in starting Jobs and it is improving.Apr 4, 23:22 UTC Update - We are continuing to process our backlog of Workflows.Apr 4, 23:06 UTC Identified - We have found a datastore with high latency that was causing Workflows to be delayed and are monitoring its restart.Apr 4, 22:47 UTC Update - We are continuing to look into why Workflows are slow to start.Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago
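The delay figures quoted above come down to the gap between when a job was queued and when it actually started. You can measure that gap for your own jobs from the timestamps the CircleCI API v2 Job Details endpoint returns; a minimal sketch, assuming the endpoint's queued_at and started_at fields, with a placeholder project slug and job number:

```python
# Hedged sketch: estimate a job's start delay from CircleCI API v2 timestamps.
# PROJECT_SLUG and JOB_NUMBER are hypothetical placeholders, not incident data.
import os
from datetime import datetime

import requests

PROJECT_SLUG = "gh/example-org/example-repo"  # hypothetical project
JOB_NUMBER = 12345                            # hypothetical job number

resp = requests.get(
    f"https://circleci.com/api/v2/project/{PROJECT_SLUG}/job/{JOB_NUMBER}",
    headers={"Circle-Token": os.environ["CIRCLE_TOKEN"]},
    timeout=10,
)
resp.raise_for_status()
job = resp.json()

# Timestamps arrive as ISO 8601 strings ending in "Z".
queued = datetime.fromisoformat(job["queued_at"].replace("Z", "+00:00"))
started = datetime.fromisoformat(job["started_at"].replace("Z", "+00:00"))
print(f"start delay: {(started - queued).total_seconds() / 60:.1f} minutes")
```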

Delay in starting Workflows

Apr 5, 00:25 UTC Update - We are no longer seeing any delay in starting Jobs and our latency is below 10 seconds.Apr 5, 00:15 UTC Update - Job start delay is currently at one minute and we are continuing to monitor.Apr 4, 23:56 UTC Monitoring - Job start delay is currently at 1.25 minutes and is continuing to improve.Apr 4, 23:34 UTC Update - Job start delay is currently at 3 minutes and improving.Apr 4, 23:28 UTC Update - There is currently a 6-minute delay in starting Jobs and it is improving.Apr 4, 23:22 UTC Update - We are continuing to process our backlog of Workflows.Apr 4, 23:06 UTC Identified - We have found a datastore with high latency that was causing Workflows to be delayed and are monitoring its restart.Apr 4, 22:47 UTC Update - We are continuing to look into why Workflows are slow to start.Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago

Delay in starting Workflows

Apr 5, 00:15 UTC Update - Job start delay is currently at one minute and we are continuing to monitor.Apr 4, 23:56 UTC Monitoring - Job start delay is currently at 1.25 minutes and is continuing to improve.Apr 4, 23:34 UTC Update - Job start delay is currently at 3 minutes and improving.Apr 4, 23:28 UTC Update - There is currently a 6-minute delay in starting Jobs and it is improving.Apr 4, 23:22 UTC Update - We are continuing to process our backlog of Workflows.Apr 4, 23:06 UTC Identified - We have found a datastore with high latency that was causing Workflows to be delayed and are monitoring its restart.Apr 4, 22:47 UTC Update - We are continuing to look into why Workflows are slow to start.Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago

Delay in starting Workflows

Apr 4, 23:56 UTC Monitoring - Job start delay is currently at 1.25 minutes and is continuing to improve.Apr 4, 23:34 UTC Update - Job start delay is currently at 3 minutes and improving.Apr 4, 23:28 UTC Update - There is currently a 6-minute delay in starting Jobs and it is improving.Apr 4, 23:22 UTC Update - We are continuing to process our backlog of Workflows.Apr 4, 23:06 UTC Identified - We have found a datastore with high latency that was causing Workflows to be delayed and are monitoring its restart.Apr 4, 22:47 UTC Update - We are continuing to look into why Workflows are slow to start.Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago

Delay in starting Workflows

Apr 4, 23:34 UTC Update - Job start delay is currently at 3 minutes and improving.Apr 4, 23:28 UTC Update - There is currently a 6-minute delay in starting Jobs and it is improving.Apr 4, 23:22 UTC Update - We are continuing to process our backlog of Workflows.Apr 4, 23:06 UTC Identified - We have found a datastore with high latency that was causing Workflows to be delayed and are monitoring its restart.Apr 4, 22:47 UTC Update - We are continuing to look into why Workflows are slow to start.Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago

Delay in starting Workflows

Apr 4, 23:28 UTC Update - There is currently a 6-minute delay in starting Jobs and it is improving.Apr 4, 23:22 UTC Update - We are continuing to process our backlog of Workflows.Apr 4, 23:06 UTC Identified - We have found a datastore with high latency that was causing Workflows to be delayed and are monitoring its restart.Apr 4, 22:47 UTC Update - We are continuing to look into why Workflows are slow to start.Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago

Delay in starting Workflows

Apr 4, 23:22 UTC Update - We are continuing to process our backlog of Workflows.Apr 4, 23:06 UTC Identified - We have found a datastore with high latency that was causing Workflows to be delayed and are monitoring its restart.Apr 4, 22:47 UTC Update - We are continuing to look into why Workflows are slow to start.Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago

Delay in starting Workflows

Apr 4, 23:06 UTC Identified - We have found a datastore with high latency that was causing Workflows to be delayed and are monitoring its restart.Apr 4, 22:47 UTC Update - We are continuing to look into why Workflows are slow to start.Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago

Delay in starting Workflows

Apr 4, 22:47 UTC Update - We are continuing to look into why Workflows are slow to start.Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago

Delay in starting Workflows

Apr 4, 22:30 UTC Investigating - We are looking into why Workflows are slow to start.

Last Update: A few months ago

Jobs may be slow to run

Apr 3, 19:27 UTC Resolved - Job start times and error rates have returned to normal.Apr 3, 19:07 UTC Monitoring - We're observing a recovery in job start times and a decrease in errors. We are continuing to monitor the situation.Apr 3, 18:57 UTC Investigating - We have observed a slow-down in job times and an increase in errors in our UI. We have begun an investigation and are scaling up to meet demand.

Last Update: A few months ago

Jobs may be slow to run

Apr 3, 19:07 UTC Monitoring - We're observing a recovery in job start times and a decrease in errors. We are continuing to monitor the situation.Apr 3, 18:57 UTC Investigating - We have observed a slow-down in job times and an increase in errors in our UI. We have begun an investigation and are scaling up to meet demand.

Last Update: A few months ago

Jobs may be slow to run

Apr 3, 18:57 UTC Investigating - We have observed a slow-down in job times and an increase in errors in our UI. We have begun an investigation and are scaling up to meet demand.

Last Update: A few months ago

Job start times delayed

Apr 3, 00:58 UTC Resolved - This incident has been resolved.Apr 3, 00:50 UTC Update - We believe this incident to be resolved; however, we are continuing to monitor for changes to the recovery of our infrastructure.Apr 3, 00:26 UTC Update - We're continuing to monitor the recovery of our infrastructure.Apr 2, 23:59 UTC Monitoring - We've scaled up our infrastructure to deal with excess demand and are working on processing any remaining queue.Apr 2, 23:30 UTC Update - We are working to process the backlog and continuing to attempt fixes to this incident.Apr 2, 23:09 UTC Identified - We’ve identified the issue as network instability and are working on a resolution.Apr 2, 22:52 UTC Investigating - We are investigating an issue causing job starts to be delayed.

Last Update: A few months ago

Job start times delayed

Apr 3, 00:50 UTC Update - We believe this incident to be resolved; however, we are continuing to monitor for changes to the recovery of our infrastructure.Apr 3, 00:26 UTC Update - We're continuing to monitor the recovery of our infrastructure.Apr 2, 23:59 UTC Monitoring - We've scaled up our infrastructure to deal with excess demand and are working on processing any remaining queue.Apr 2, 23:30 UTC Update - We are working to process the backlog and continuing to attempt fixes to this incident.Apr 2, 23:09 UTC Identified - We’ve identified the issue as network instability and are working on a resolution.Apr 2, 22:52 UTC Investigating - We are investigating an issue causing job starts to be delayed.

Last Update: A few months ago

Job start times delayed

Apr 3, 00:26 UTC Update - We're continuing to monitor the recovery of our infrastructure.Apr 2, 23:59 UTC Monitoring - We've scaled up our infrastructure to deal with excess demand and are working on processing any remaining queue.Apr 2, 23:30 UTC Update - We are working to process the backlog and continuing to attempt fixes to this incident.Apr 2, 23:09 UTC Identified - We’ve identified the issue as network instability and are working on a resolution.Apr 2, 22:52 UTC Investigating - We are investigating an issue causing job starts to be delayed.

Last Update: A few months ago

Job start times delayed

Apr 2, 23:59 UTC Monitoring - We've scaled up our infrastructure to deal with excess demand and are working on processing any remaining queue.Apr 2, 23:30 UTC Update - We are working to process the backlog and continuing to attempt fixes to this incident.Apr 2, 23:09 UTC Identified - We’ve identified the issue as network instability and are working on a resolution.Apr 2, 22:52 UTC Investigating - We are investigating an issue causing job starts to be delayed.

Last Update: A few months ago

Job start times delayed

Apr 2, 23:30 UTC Update - We are working to process the backlog and continuing to attempt fixes to this incident.Apr 2, 23:09 UTC Identified - We’ve identified the issue as network instability and are working on a resolution.Apr 2, 22:52 UTC Investigating - We are investigating an issue causing job starts to be delayed.

Last Update: A few months ago

Job start times delayed

Apr 2, 23:09 UTC Identified - We’ve identified the issue as network instability and are working on a resolution.Apr 2, 22:52 UTC Investigating - We are investigating an issue causing job starts to be delayed.

Last Update: A few months ago

Job start times delayed

Apr 2, 22:52 UTC Investigating - We are investigating an issue causing job starts to be delayed.

Last Update: A few months ago

Workflows UI Unavailable

Apr 1, 18:45 UTC Resolved - The Workflows UI was temporarily degraded, leading to an inability to see particular workflows or lists of workflows.

Last Update: A few months ago

Network interruption

Apr 1, 14:25 UTC Resolved - We were briefly serving errors due to a network interruption within our infrastructure. Service has recovered and we are investigating possible causes.

Last Update: A few months ago

Delayed Workflows

Mar 27, 16:04 UTC Resolved - Workflows and notifications are operational again.Mar 27, 16:00 UTC Update - Workflow processing time has recovered. Notifications are delayed. We will continue monitoring.Mar 27, 15:49 UTC Monitoring - There was a delay with Workflows and they are starting to recover. We will continue monitoring.

Last Update: A few months ago

Delayed Workflows

Mar 27, 16:00 UTC Update - Workflow processing time has recovered. Notifications are delayed. We will continue monitoring.Mar 27, 15:49 UTC Monitoring - There was a delay with Workflows and they are starting to recover. We will continue monitoring.

Last Update: A few months ago

Delayed Workflows

Mar 27, 15:49 UTC Monitoring - There was a delay with Workflows and they are starting to recover. We will continue monitoring.

Last Update: A few months ago

CircleCI Mongo Maintenance

Mar 27, 08:27 UTC Completed - The scheduled maintenance has been completed.Mar 27, 05:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Mar 27, 03:45 UTC Scheduled - We will be upgrading underlying infrastructure for some of our Mongo Clusters. While we do not expect this event to last an hour, we want to ensure that we have adequate time in case of any issues.

Last Update: A few months ago

CircleCI Mongo Maintenance

Mar 27, 05:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Mar 27, 03:45 UTC Scheduled - We will be upgrading underlying infrastructure for some of our Mongo Clusters. While we do not expect this event to last an hour, we want to ensure that we have adequate time in case of any issues.

Last Update: A few months ago

CircleCI Mongo Maintenance

THIS IS A SCHEDULED EVENT Mar 27, 05:00 - 06:00 UTC Mar 27, 03:45 UTC Scheduled - We will be upgrading underlying infrastructure for some of our Mongo Clusters. While we do not expect this event to last an hour, we want to ensure that we have adequate time in case of any issues.

Last Update: A few months ago

Workflows Delays

Mar 27, 01:58 UTC Resolved - Workflow start times are back to normal. The incident is now resolved.Mar 27, 01:31 UTC Monitoring - The issue has been identified. Workflow start times have returned to normal and we will continue to monitor the Workflows environment.Mar 27, 01:18 UTC Update - We are continuing to investigate this issue.Mar 27, 00:56 UTC Update - We are continuing to investigate Workflow delays.Mar 27, 00:34 UTC Identified - Workflow start times are delayed and we are continuing to investigate this issue.Mar 27, 00:04 UTC Monitoring - Workflow start times have recovered to a normal state and we are moving this incident to the monitoring status. Jobs are starting automatically.Mar 26, 23:44 UTC Update - We are continuing to investigate the ongoing delays with starting workflows and jobs. We are adjusting load handling throughout the system in order to maximize throughput while we work on identifying the root cause. The current delay to start both workflows and jobs is approximately 3 minutes.Mar 26, 23:01 UTC Update - We are continuing to investigate this issue.Mar 26, 22:39 UTC Update - We are continuing to investigate Workflows delays.Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 27, 01:31 UTC Monitoring - The issue has been identified. Workflow start times have returned to normal and we will continue to monitor the Workflows environment.Mar 27, 01:18 UTC Update - We are continuing to investigate this issue.Mar 27, 00:56 UTC Update - We are continuing to investigate Workflow delays.Mar 27, 00:34 UTC Identified - Workflow start times are delayed and we are continuing to investigate this issue.Mar 27, 00:04 UTC Monitoring - Workflow start times have recovered to a normal state and we are moving this incident to the monitoring status. Jobs are starting automatically.Mar 26, 23:44 UTC Update - We are continuing to investigate the ongoing delays with starting workflows and jobs. We are adjusting load handling throughout the system in order to maximize throughput while we work on identifying the root cause. The current delay to start both workflows and jobs is approximately 3 minutes.Mar 26, 23:01 UTC Update - We are continuing to investigate this issue.Mar 26, 22:39 UTC Update - We are continuing to investigate Workflows delays.Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 27, 01:18 UTC Update - We are continuing to investigate this issue.Mar 27, 00:56 UTC Update - We are continuing to investigate Workflow delays.Mar 27, 00:34 UTC Identified - Workflow start times are delayed and we are continuing to investigate this issue.Mar 27, 00:04 UTC Monitoring - Workflow start times have recovered to a normal state and we are moving this incident to the monitoring status. Jobs are starting automatically.Mar 26, 23:44 UTC Update - We are continuing to investigate the ongoing delays with starting workflows and jobs. We are adjusting load handling throughout the system in order to maximize throughput while we work on identifying the root cause. The current delay to start both workflows and jobs is approximately 3 minutes.Mar 26, 23:01 UTC Update - We are continuing to investigate this issue.Mar 26, 22:39 UTC Update - We are continuing to investigate Workflows delays.Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 27, 00:56 UTC Update - We are continuing to investigate Workflow delays.Mar 27, 00:34 UTC Identified - Workflow start times are delayed and we are continuing to investigate this issue.Mar 27, 00:04 UTC Monitoring - Workflow start times have recovered to a normal state and we are moving this incident to the monitoring status. Jobs are starting automatically.Mar 26, 23:44 UTC Update - We are continuing to investigate the ongoing delays with starting workflows and jobs. We are adjusting load handling throughout the system in order to maximize throughput while we work on identifying the root cause. The current delay to start both workflows and jobs is approximately 3 minutes.Mar 26, 23:01 UTC Update - We are continuing to investigate this issue.Mar 26, 22:39 UTC Update - We are continuing to investigate Workflows delays.Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 27, 00:34 UTC Identified - Workflow start times are delayed and we are continuing to investigate this issue.Mar 27, 00:04 UTC Monitoring - Workflow start times have recovered to a normal state and we are moving this incident to the monitoring status. Jobs are starting automatically.Mar 26, 23:44 UTC Update - We are continuing to investigate the ongoing delays with starting workflows and jobs. We are adjusting load handling throughout the system in order to maximize throughput while we work on identifying the root cause. The current delay to start both workflows and jobs is approximately 3 minutes.Mar 26, 23:01 UTC Update - We are continuing to investigate this issue.Mar 26, 22:39 UTC Update - We are continuing to investigate Workflows delays.Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 27, 00:04 UTC Monitoring - Workflow start times have recovered to a normal state and we are moving this incident to the monitoring status. Jobs are starting automatically.Mar 26, 23:44 UTC Update - We are continuing to investigate the ongoing delays with starting workflows and jobs. We are adjusting load handling throughout the system in order to maximize throughput while we work on identifying the root cause. The current delay to start both workflows and jobs is approximately 3 minutes.Mar 26, 23:01 UTC Update - We are continuing to investigate this issue.Mar 26, 22:39 UTC Update - We are continuing to investigate Workflows delays.Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 26, 23:44 UTC Update - We are continuing to investigate the ongoing delays with starting workflows and jobs. We are adjusting load handling throughout the system in order to maximize throughput while we work on identifying the root cause. The current delay to start both workflows and jobs is approximately 3 minutes.Mar 26, 23:01 UTC Update - We are continuing to investigate this issue.Mar 26, 22:39 UTC Update - We are continuing to investigate Workflows delays.Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 26, 23:01 UTC Update - We are continuing to investigate this issue.Mar 26, 22:39 UTC Update - We are continuing to investigate Workflows delays.Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 26, 22:39 UTC Update - We are continuing to investigate Workflows delays.Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 26, 22:10 UTC Update - We are continuing to investigate this issue.Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 26, 21:51 UTC Update - We are continuing to investigate the cause of Workflows delays.Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Workflows Delays

Mar 26, 21:29 UTC Investigating - We're investigating an issue causing workflows to be delayed and will provide more information shortly.

Last Update: A few months ago

Slow Workflow processing

Mar 26, 18:29 UTC Resolved - Workflow processing times have returned to normal.Mar 26, 18:14 UTC Monitoring - We are no longer seeing a slowdown in Workflows. We will be monitoring the situation.Mar 26, 18:07 UTC Investigating - We are seeing a slowdown of Workflow processing.

Last Update: A few months ago

Slow Workflow processing

Mar 26, 18:14 UTC Monitoring - We are no longer seeing a slowdown in Workflows. We will be monitoring the situation.Mar 26, 18:07 UTC Investigating - We are seeing a slowdown of Workflow processing.

Last Update: A few months ago

Slow Workflow processing

Mar 26, 18:07 UTC Investigating - We are seeing a slowdown of Workflow processing.

Last Update: A few months ago

Intermittent server unavailability errors from our API

Mar 16, 22:00 UTC Resolved - Our API servers were intermittently responding with server unavailable errors due to an unusual load pattern causing out-of-memory (OOM) issues.

Last Update: A few months ago

VM provisioning taking longer than usual

Mar 21, 19:17 UTC Resolved - VM provisioning times are stable. The incident is resolved.Mar 21, 18:58 UTC Monitoring - VM provisioning times have returned to normal. We will continue to monitor our systems for any lingering delays.Mar 21, 18:30 UTC Update - MacOS provisioning times are still slightly elevated. We will provide an update soon.Mar 21, 18:09 UTC Update - Provisioning times for most VM types have returned to normal. MacOS provisioning times are still slightly elevated. We will provide an update soon.Mar 21, 17:47 UTC Update - VM provisioning times are gradually returning to normal. We're continuing to process backlogged requests.Mar 21, 16:53 UTC Identified - We identified and resolved a rate limiting issue and are now scaling up to process the backlog of queued requests. An update will follow shortly.Mar 21, 16:34 UTC Investigating - We've observed longer than usual wait times to create Docker and Machine Executor resources. We're investigating and will provide an update soon.

Last Update: A few months ago

VM provisioning taking longer than usual

Mar 21, 18:58 UTC Monitoring - VM provisioning times have returned to normal. We will continue to monitor our systems for any lingering delays.Mar 21, 18:30 UTC Update - MacOS provisioning times are still slightly elevated. We will provide an update soon.Mar 21, 18:09 UTC Update - Provisioning times for most VM types have returned to normal. MacOS provisioning times are still slightly elevated. We will provide an update soon.Mar 21, 17:47 UTC Update - VM provisioning times are gradually returning to normal. We're continuing to process backlogged requests.Mar 21, 16:53 UTC Identified - We identified and resolved a rate limiting issue and are now scaling up to process the backlog of queued requests. An update will follow shortly.Mar 21, 16:34 UTC Investigating - We've observed longer than usual wait times to create Docker and Machine Executor resources. We're investigating and will provide an update soon.

Last Update: A few months ago

VM provisioning taking longer than usual

Mar 21, 18:30 UTC Update - MacOS provisioning times are still slightly elevated. We will provide an update soon.Mar 21, 18:09 UTC Update - Provisioning times for most VM types have returned to normal. MacOS provisioning times are still slightly elevated. We will provide an update soon.Mar 21, 17:47 UTC Update - VM provisioning times are gradually returning to normal. We're continuing to process backlogged requests.Mar 21, 16:53 UTC Identified - We identified and resolved a rate limiting issue and are now scaling up to process the backlog of queued requests. An update will follow shortly.Mar 21, 16:34 UTC Investigating - We've observed longer than usual wait times to create Docker and Machine Executor resources. We're investigating and will provide an update soon.

Last Update: A few months ago

VM provisioning taking longer than usual

Mar 21, 18:09 UTC Update - Provisioning times for most VM types have returned to normal. MacOS provisioning times are still slightly elevated. We will provide an update soon.Mar 21, 17:47 UTC Update - VM provisioning times are gradually returning to normal. We're continuing to process backlogged requests.Mar 21, 16:53 UTC Identified - We identified and resolved a rate limiting issue and are now scaling up to process the backlog of queued requests. An update will follow shortly.Mar 21, 16:34 UTC Investigating - We've observed longer than usual wait times to create Docker and Machine Executor resources. We're investigating and will provide an update soon.

Last Update: A few months ago

VM provisioning taking longer than usual

Mar 21, 17:47 UTC Update - VM provisioning times are gradually returning to normal. We're continuing to process backlogged requests.Mar 21, 16:53 UTC Identified - We identified and resolved a rate limiting issue and are now scaling up to process the backlog of queued requests. An update will follow shortly.Mar 21, 16:34 UTC Investigating - We've observed longer than usual wait times to create Docker and Machine Executor resources. We're investigating and will provide an update soon.

Last Update: A few months ago

VM provisioning taking longer than usual

Mar 21, 16:53 UTC Identified - We identified and resolved a rate limiting issue and are now scaling up to process the backlog of queued requests. An update will follow shortly.Mar 21, 16:34 UTC Investigating - We've observed longer than usual wait times to create Docker and Machine Executor resources. We're investigating and will provide an update soon.

Last Update: A few months ago

VM provisioning taking longer than usual

Mar 21, 16:34 UTC Investigating - We've observed longer than usual wait times to create Docker and Machine Executor resources. We're investigating and will provide an update soon.

Last Update: A few months ago

Updates to GitHub Checks delayed

Mar 21, 14:04 UTC Resolved - During a 20-minute period, updates to GitHub Checks were delayed for up to 8 minutes. A deploy with a logging change caused a performance slowdown. The deploy has been rolled back and we will perform additional performance testing.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 22:00 UTC Resolved - We have worked through the remaining backlog and all of our systems are operational. Thank you for your patience as we dealt with this.Mar 15, 21:37 UTC Update - We are continuing to monitor as we drain the remainder of the backlog. Most requests are processing with minimal delay. Some continued network issues are impacting overall capacity. We will update within 30 mins.Mar 15, 21:06 UTC Monitoring - We are now processing the backlog of workflows. We are monitoring closely to ensure that all operations are happening as expected. Will update again within 20 minutes as we continue to make progress.Mar 15, 20:57 UTC Update - Workflows are once again running and we are monitoring closely as our demand scales. We will update within 20 mins with progress against our backlog of work.Mar 15, 20:42 UTC Update - We continue to work on recovering full workflow execution. We are now receiving inbound hooks and those workflows should run once we are fully back online. We will update within 20 mins with any additional progress.Mar 15, 20:19 UTC Update - Our UI is back online and we are working to recover other systems. We are actively working to recover workflows and scaling up capacity to support a quick recovery. Will update within the next 20 mins with further information.Mar 15, 20:14 UTC Update - The earlier networking issue has created an ongoing issue with one of our core systems, resulting in impact across builds and our UI. We are actively working to recover that component and are starting to recover dependent systems. Will update in 20 mins on the status of the recovery.Mar 15, 20:03 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 19:21 UTC Update - The issue has been identified and a fix is being implemented.Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 21:37 UTC Update - We are continuing to monitor as we drain the remainder of the backlog. Most requests are processing with minimal delay. Some continued network issues are impacting overall capacity. We will update within 30 mins.Mar 15, 21:06 UTC Monitoring - We are now processing the backlog of workflows. We are monitoring closely to ensure that all operations are happening as expected. Will update again within 20 minutes as we continue to make progress.Mar 15, 20:57 UTC Update - Workflows are once again running and we are monitoring closely as our demand scales. We will update within 20 mins with progress against our backlog of work.Mar 15, 20:42 UTC Update - We continue to work on recovering full workflow execution. We are now receiving inbound hooks and those workflows should run once we are fully back online. We will update within 20 mins with any additional progress.Mar 15, 20:19 UTC Update - Our UI is back online and we are working to recover other systems. We are actively working to recover workflows and scaling up capacity to support a quick recovery. Will update within the next 20 mins with further information.Mar 15, 20:14 UTC Update - The earlier networking issue has created an ongoing issue with one of our core systems, resulting in impact across builds and our UI. We are actively working to recover that component and are starting to recover dependent systems. Will update in 20 mins on the status of the recovery.Mar 15, 20:03 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 19:21 UTC Update - The issue has been identified and a fix is being implemented.Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 21:06 UTC Monitoring - We are now processing the backlog of workflows. We are monitoring closely to ensure that all operations are happening as expected. Will update again within 20 minutes as we continue to make progress.Mar 15, 20:57 UTC Update - Workflows are once again running and we are monitoring closely as our demand scales. We will update within 20 mins with progress against our backlog of work.Mar 15, 20:42 UTC Update - We continue to work on recovering full workflow execution. We are now receiving inbound hooks and those workflows should run once we are fully back online. We will update within 20 mins with any additional progress.Mar 15, 20:19 UTC Update - Our UI is back online and we are working to recover other systems. We are actively working to recover workflows and scaling up capacity to support a quick recovery. Will update within the next 20 mins with further information.Mar 15, 20:14 UTC Update - The earlier networking issue has created an ongoing issue with one of our core systems, resulting in impact across builds and our UI. We are actively working to recover that component and are starting to recover dependent systems. Will update in 20 mins on the status of the recovery.Mar 15, 20:03 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 19:21 UTC Update - The issue has been identified and a fix is being implemented.Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 20:57 UTC Update - Workflows are once again running and we are monitoring closely as our demand scales. We will update within 20 mins with progress against our backlog of work.Mar 15, 20:42 UTC Update - We continue to work on recovering full workflow execution. We are now receiving inbound hooks and those workflows should run once we are fully back online. We will update within 20 mins with any additional progress.Mar 15, 20:19 UTC Update - Our UI is back online and we are working to recover other systems. We are actively working to recover workflows and scaling up capacity to support a quick recovery. Will update within the next 20 mins with further information.Mar 15, 20:14 UTC Update - The earlier networking issue has created an ongoing issue with one of our core systems, resulting in impact across builds and our UI. We are actively working to recover that component and are starting to recover dependent systems. Will update in 20 mins on the status of the recovery.Mar 15, 20:03 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 19:21 UTC Update - The issue has been identified and a fix is being implemented.Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 20:42 UTC Update - We continue to work on recovering full workflow execution. We are now receiving inbound hooks and those workflows should run once we are fully back online. We will update within 20 mins with any additional progress.Mar 15, 20:19 UTC Update - Our UI is back online and we are working to recover other systems. We are actively working to recover workflows and scaling up capacity to support a quick recovery. Will update within the next 20 mins with further information.Mar 15, 20:14 UTC Update - The earlier networking issue has created an ongoing issue with one of our core systems, resulting in impact across builds and our UI. We are actively working to recover that component and are starting to recover dependent systems. Will update in 20 mins on the status of the recovery.Mar 15, 20:03 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 19:21 UTC Update - The issue has been identified and a fix is being implemented.Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 20:19 UTC Update - Our UI is back online and we are working to recover other systems. We are actively working to recover workflows and scaling up capacity to support a quick recovery. Will update within the next 20 mins with further information.Mar 15, 20:14 UTC Update - The earlier networking issue has created an ongoing issue with one of our core systems, resulting in impact across builds and our UI. We are actively working to recover that component and are starting to recover dependent systems. Will update in 20 mins on the status of the recovery.Mar 15, 20:03 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 19:21 UTC Update - The issue has been identified and a fix is being implemented.Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 20:14 UTC Update - The earlier networking issue has created an ongoing issue with one of our core systems, resulting in impact across builds and our UI. We are actively working to recover that component and are starting to recover dependent systems. Will update in 20 mins on the status of the recovery.Mar 15, 20:03 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 19:21 UTC Update - The issue has been identified and a fix is being implemented.Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 20:03 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 19:21 UTC Update - The issue has been identified and a fix is being implemented.Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 19:21 UTC Update - The issue has been identified and a fix is being implemented.Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 19:10 UTC Update - We are continuing to recover from the AWS networking event.Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 18:41 UTC Identified - We are recovering from an AWS networking event and we are scaling up to meet the load.Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Workflows are unable to start

Mar 15, 18:25 UTC Update - We are continuing to investigate this issue.Mar 15, 18:25 UTC Investigating - We are experiencing an interruption in our ability to run Workflows. We are investigating.

Last Update: A few months ago

Intermittent 504s on our Marketing Website

Feb 12, 02:42 UTC Resolved - This incident has been resolved.

Last Update: A few months ago

The Orb Registry is down

Jan 25, 08:00 UTC Resolved - This incident has been resolved.

Last Update: A few months ago

Seeing 504s when attempting to load the Orb Registry

Feb 15, 02:11 UTC Resolved - This incident has been resolved.

Last Update: A few months ago

Intermittent 504s on our Marketing Website

Feb 12, 06:00 UTC Resolved - This incident has been resolved.

Last Update: A few months ago

Scheduled Workflows Unable to Run

Feb 22, 18:32 UTC Resolved - Starting at 16:30 UTC and ending at 18:32 UTC, we experienced an issue preventing scheduled workflows from running. We've been able to correct the issue, and workflows scheduled to run after 18:32 UTC are running normally.

Last Update: A few months ago
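When a scheduled workflow misses its window entirely, as in this incident, one recovery option is to trigger the pipeline once by hand. A minimal sketch against the API v2 pipeline-trigger endpoint; the project slug and branch below are placeholders, and the sketch assumes the schedule targets that branch:

```python
# Hedged sketch: manually trigger one pipeline run to cover a missed schedule.
# PROJECT_SLUG and the branch are hypothetical placeholders.
import os

import requests

PROJECT_SLUG = "gh/example-org/example-repo"  # hypothetical project

resp = requests.post(
    f"https://circleci.com/api/v2/project/{PROJECT_SLUG}/pipeline",
    headers={"Circle-Token": os.environ["CIRCLE_TOKEN"]},
    json={"branch": "main"},  # assumes the schedule normally runs on main
    timeout=10,
)
resp.raise_for_status()
print("triggered pipeline number:", resp.json()["number"])
```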

Intermittent 504s on our Marketing Website

Feb 11, 22:00 UTC Resolved - This incident has been resolved.

Last Update: A few months ago

Workflows Delays

Feb 28, 21:04 UTC Resolved - This incident has been resolved.Feb 28, 20:53 UTC Monitoring - We have scaled up the part of our system that interacts with AWS S3 and this has resolved the issue. We are currently monitoring the fix.Feb 28, 20:41 UTC Identified - We have identified the source as AWS S3 retrieval latency and are exploring options to mitigate the impact.Feb 28, 20:20 UTC Investigating - We are seeing a slowdown in the response from AWS S3, which is causing Workflows to be slower than usual.

Last Update: A few months ago

Workflows Delays

Feb 28, 20:53 UTC Monitoring - We have scaled up the part of our system that interacts with AWS S3 and this has resolved the issue. We are currently monitoring the fix.Feb 28, 20:41 UTC Identified - We have identified the source as AWS S3 retrieval latency and are exploring options to mitigate the impact.Feb 28, 20:20 UTC Investigating - We are seeing a slowdown in the response from AWS S3, which is causing Workflows to be slower than usual.

Last Update: A few months ago

Workflows Delays

Feb 28, 20:41 UTC Identified - We have identified the source as AWS S3 retrieval latency and are exploring options to mitigate the impact.Feb 28, 20:20 UTC Investigating - We are seeing a slowdown in the response from AWS S3, which is causing Workflows to be slower than usual.

Last Update: A few months ago

Workflows Delays

Feb 28, 20:20 UTC Investigating - We are seeing a slowdown in the response from AWS S3, which is causing Workflows to be slower than usual.

Last Update: A few months ago

GitHub Checks Delayed

Feb 27, 21:31 UTC Resolved - We are continuing to observe our systems, but the delay in GitHub Checks has normalized.Feb 27, 21:06 UTC Monitoring - We have reduced the delay and are monitoring for increases in the delay of GitHub Checks.Feb 27, 20:44 UTC Identified - We've identified an issue causing GitHub Checks to be delayed and are working to correct the issue.

Last Update: A few months ago

GitHub Checks Delayed

Feb 27, 21:06 UTC Monitoring - We have reduced the delay and are monitoring for increases in the delay of GitHub Checks.Feb 27, 20:44 UTC Identified - We've identified an issue causing GitHub Checks to be delayed and are working to correct the issue.

Last Update: A few months ago

GitHub Checks Delayed

Feb 27, 20:44 UTC Identified - We've identified an issue causing GitHub Checks to be delayed and are working to correct the issue.

Last Update: A few months ago

Workflow runs may be delayed

Feb 26, 20:42 UTC Resolved - We have identified the cause of the delayed workflows. Workflow run times have returned to normal.Feb 26, 20:24 UTC Investigating - We've observed some workflows taking longer than expected to start. Our teams are investigating.

Last Update: A few months ago

Workflow runs may be delayed

Feb 26, 20:24 UTC Investigating - We've observed some workflows taking longer than expected to start. Our teams are investigating.

Last Update: A few months ago

Slow to process GitHub checks

Feb 26, 13:06 UTC Resolved - The incident has been resolved. We apologise for the inconvenience. We will be holding an incident review to further investigate the cause and improve our monitoring.Feb 26, 12:19 UTC Monitoring - New workflows are now reporting GitHub checks successfully. You may need to re-run previous workflows if their checks haven't been reported. We will continue to monitor the situation.Feb 26, 12:08 UTC Update - We are failing to report the status of GitHub checks and continuing to investigate the cause.Feb 26, 11:39 UTC Investigating - We are investigating a problem that may cause GitHub checks to be processed slower than normal.

Last Update: A few months ago
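The monitoring update above suggests re-running workflows whose checks were never reported. Besides the UI, this can be done with the API v2 workflow rerun endpoint; a minimal sketch, with a placeholder workflow ID:

```python
# Hedged sketch: re-run a workflow whose GitHub checks were not reported.
# WORKFLOW_ID is a hypothetical placeholder.
import os

import requests

WORKFLOW_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical workflow ID

resp = requests.post(
    f"https://circleci.com/api/v2/workflow/{WORKFLOW_ID}/rerun",
    headers={"Circle-Token": os.environ["CIRCLE_TOKEN"]},
    json={"from_failed": False},  # re-run the whole workflow, not just failures
    timeout=10,
)
resp.raise_for_status()
print("rerun accepted:", resp.json())
```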

Slow to process GitHub checks

Feb 26, 12:19 UTC Monitoring - New workflows are now reporting GitHub checks successfully. You may need to re-run previous workflows if their checks haven't been reported. We will continue to monitor the situation.Feb 26, 12:08 UTC Update - We are failing to report the status of GitHub checks and continuing to investigate the cause.Feb 26, 11:39 UTC Investigating - We are investigating a problem that may cause GitHub checks to be processed slower than normal.

Last Update: A few months ago

Slow to process GitHub checks

Feb 26, 12:08 UTC Update - We are failing to report the status of GitHub checks and continuing to investigate the cause.Feb 26, 11:39 UTC Investigating - We are investigating a problem that may cause GitHub checks to be processed slower than normal.

Last Update: A few months ago

Slow to process GitHub checks

Feb 26, 11:39 UTC Investigating - We are investigating a problem that may cause GitHub checks to be processed slower than normal.

Last Update: A few months ago

MacOS Network Maintenance

Feb 25, 05:00 UTC Completed - The scheduled maintenance has been completed.Feb 25, 04:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Feb 20, 14:45 UTC Scheduled - The colocation facility that hosts our macOS 2.0 build environment will be conducting network maintenance. No impact is expected.

Last Update: A few months ago

MacOS Network Maintenance

Feb 25, 04:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Feb 20, 14:45 UTC Scheduled - The colocation facility that hosts our macOS 2.0 build environment will be conducting network maintenance. No impact is expected.

Last Update: A few months ago

Unable to pull docker images at job start

Feb 22, 22:29 UTC Resolved - The upstream provider is no longer experiencing issues, and users are able to pull and push container images without issue. Thank you for your patience.Feb 22, 22:20 UTC Monitoring - Users are no longer experiencing issues accessing container registries. We are monitoring the platform for stability and performance.Feb 22, 22:16 UTC Update - Our ability to reference or pull from an upstream container registry is degraded. We are investigating to determine the scope of the issue. We will provide an update within 20 minutes.Feb 22, 21:54 UTC Update - We are continuing to investigate a problem that causes docker registry access to fail intermittently at job start time.Feb 22, 21:33 UTC Investigating - We are investigating an issue that prevents docker from pulling images at job start time.

Last Update: A few months ago
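Intermittent registry failures like this one usually clear on retry, so a job that pulls images explicitly can wrap the pull in a short backoff loop rather than failing on the first attempt. A generic sketch, not a CircleCI feature; the image name is only an example:

```python
# Hedged sketch: retry `docker pull` with exponential backoff during
# intermittent registry outages. Generic illustration, not CircleCI tooling.
import subprocess
import time

def pull_with_retry(image: str, attempts: int = 5) -> None:
    for attempt in range(1, attempts + 1):
        if subprocess.run(["docker", "pull", image]).returncode == 0:
            return
        if attempt < attempts:
            delay = 2 ** attempt  # back off: 2s, 4s, 8s, ...
            print(f"pull failed (attempt {attempt}); retrying in {delay}s")
            time.sleep(delay)
    raise RuntimeError(f"could not pull {image} after {attempts} attempts")

pull_with_retry("cimg/base:stable")  # example image name
```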

Unable to pull docker images at job start

Feb 22, 22:20 UTC Monitoring - Users are no longer experiencing issues accessing container registries. We are monitoring the platform for stability and performance.Feb 22, 22:16 UTC Update - Our ability to reference or pull from an upstream container registry is degraded. We are investigating to determine the scope of the issue. We will provide an update within 20 minutes.Feb 22, 21:54 UTC Update - We are continuing to investigate a problem that causes docker registry access to fail intermittently at job start time.Feb 22, 21:33 UTC Investigating - We are investigating an issue that prevents docker from pulling images at job start time.

Last Update: A few months ago

Unable to pull docker images at job start

Feb 22, 22:16 UTC Update - Our ability to reference or pull from an upstream container registry is degraded. We are investigating to determine the scope of the issue. We will provide an update within 20 minutes.Feb 22, 21:54 UTC Update - We are continuing to investigate a problem that causes docker registry access to fail intermittently at job start time.Feb 22, 21:33 UTC Investigating - We are investigating an issue that prevents docker from pulling images at job start time.

Last Update: A few months ago

Unable to pull docker images at job start

Feb 22, 21:54 UTC Update - We are continuing to investigate a problem that causes docker registry access to fail intermittently at job start time.Feb 22, 21:33 UTC Investigating - We are investigating an issue that prevents docker from pulling images at job start time.

Last Update: A few months ago

Unable to pull docker images at job start

Feb 22, 21:33 UTC Investigating - We are investigating an issue that prevents docker from pulling images at job start time.

Last Update: A few months ago

Scheduled Workflows Unable to Run

Feb 22, 16:30 UTC Resolved - Starting at 16:30 UTC and ending at 18:32 UTC, we experienced an issue preventing scheduled workflows from running. We've been able to correct the issue, and workflows scheduled to run after 18:32 UTC are running normally.

Last Update: A few months ago

Job scheduling may be delayed

Feb 21, 23:17 UTC Resolved - This incident has been resolved. Jobs are running normally again.Feb 21, 23:01 UTC Monitoring - We have confirmed that the cause of the delayed jobs was network latency within one of our task clusters. We have moved jobs to an unaffected cluster and are seeing a recovery in run times.Feb 21, 22:47 UTC Identified - We have identified the cause of the delayed jobs and are applying a fix.Feb 21, 22:39 UTC Investigating - We are currently investigating an issue with 2.0 builds that may result in delayed runs.

Last Update: A few months ago

macOS Network Maintenance

THIS IS A SCHEDULED EVENT: Feb 25, 04:00 - 05:00 UTC
Feb 20, 14:45 UTC Scheduled - The colocation facility that hosts our macOS 2.0 build environment will be conducting network maintenance. No impact is expected.

Last Update: A few months ago

Network Maintenance

Feb 18, 05:00 UTC Completed - The scheduled maintenance has been completed.
Feb 18, 04:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 6, 16:00 UTC Scheduled - The colocation facility that hosts our macOS 2.0 build environment will be conducting network maintenance. During this window there will be intermittent service interruptions that may cause build failures or queuing. We apologize for any inconvenience, and thank you for your patience.

Last Update: A few months ago

Users unable to log in

Feb 15, 21:05 UTC Resolved - The platform is stable and processing jobs normally. Thank you for your patience.
Feb 15, 20:52 UTC Update - We are continuing to monitor our system for performance and stability. We will post an update within 20 minutes.
Feb 15, 20:31 UTC Monitoring - We successfully rolled back a release that impacted our API. We have processed the backlog of jobs, which are now being executed normally. We will continue to monitor the system for stability and performance.
Feb 15, 20:17 UTC Identified - We have identified the cause of the issue and are implementing a fix. We will post an update within 20 minutes.
Feb 15, 20:11 UTC Investigating - We are investigating issues with our API. Users may experience HTTP 5xx errors while attempting to log in to the web UI.

Last Update: A few months ago

CircleCI build slowdown

Feb 10, 19:39 UTC Resolved - We could not find any indication of slow workflow starts, so we are marking this as resolved.
Feb 10, 19:07 UTC Investigating - Some workflows and jobs may be slow to start. We are looking into whether this is an issue.

Last Update: A few months ago

Delays Running Builds

Feb 6, 23:03 UTC Resolved - This incident has been resolved.
Feb 6, 22:43 UTC Monitoring - We have worked through our backlog of builds, and new builds should be processed promptly.
Feb 6, 22:20 UTC Investigating - We are investigating delays in build processing that are causing excess queueing time for builds.

Last Update: A few months ago

Workflow jobs are delayed

Feb 6, 19:41 UTC Resolved - Workflows are being processed normally again.
Feb 6, 19:16 UTC Monitoring - We have implemented a fix and workflows are being processed normally. We are now closely monitoring Workflows to ensure system stability.
Feb 6, 19:10 UTC Investigating - We are currently investigating an unexpected slowdown in the processing of Workflows.

Last Update: A few months ago

Errors Processing Circle 2.1 Builds

Jan 11, 23:44 UTC Resolved - CircleCI 2.1 builds are being processed normally again.
Jan 11, 23:21 UTC Update - We are continuing to monitor system stability.
Jan 11, 22:51 UTC Monitoring - We've finished rolling out the fix and are not seeing any further errors with 2.1 builds. We will continue to monitor the build system to confirm stability.
Jan 11, 22:42 UTC Identified - We've identified the cause of this issue and are rolling out a fix.
Jan 11, 22:39 UTC Investigating - We are currently investigating why CircleCI builds using 2.1 are failing with the message "# Unexpected exception processing config". We will provide an update in 20 minutes or when we have further information.

Last Update: A few months ago

Slow workflow run times and UI response times

Jan 10, 17:28 UTC Resolved - We have put in measures to address underlying resource constraints that we believe caused the issue. All our services are now operational, and we will continue to monitor for any degradation. This incident has been resolved; thank you for your patience.
Jan 10, 17:06 UTC Update - We are continuing to monitor our services and to understand the underlying source of the issue. We will update again in 20 minutes.
Jan 10, 16:39 UTC Monitoring - We have recovered from the degraded workflow UI and run times, but are still investigating the cause of the problem.
Jan 10, 16:15 UTC Investigating - We are investigating degraded workflow UI response times and degraded workflow run times.

Last Update: A few months ago

Degraded machine executor and remote docker jobs

Jan 8, 22:09 UTC Resolved - This incident has been resolved.
Jan 8, 21:44 UTC Update - We are still processing a backlog of remote-docker jobs. We will provide an update within 20 minutes.
Jan 8, 21:23 UTC Update - We are still processing a backlog of remote-docker jobs. We will provide an update within 20 minutes.
Jan 8, 21:03 UTC Monitoring - The provider has recovered and we are now processing the backlog of jobs.
Jan 8, 20:39 UTC Identified - One of our cloud providers is having difficulty fulfilling VM requests. Users may experience degraded performance and timeouts for machine executor and remote docker jobs.

Last Update: A few months ago

Users may experience issues attempting to access organizations

Jan 4, 18:12 UTC Resolved - This incident has been resolved.
Jan 4, 17:47 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Jan 4, 17:42 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Workflow Slow Processing

Jan 3, 21:11 UTC Resolved - We are no longer seeing any issues with Workflows. Thank you for your patience with this incident.
Jan 3, 21:00 UTC Update - We are continuing to monitor the backlog of Workflows. We will update in 20 minutes.
Jan 3, 20:15 UTC Monitoring - We have implemented a fix and are monitoring as the system works through its workflows backlog.
Jan 3, 20:08 UTC Identified - We have identified the reason why workflow processing has been delayed. We will update again in 20 minutes.
Jan 3, 19:55 UTC Investigating - We are investigating the cause of this issue and will continue to provide information about our progress.
Jan 3, 19:25 UTC Identified - We have identified the reason why Workflows were not being processed. We will update again in 20 minutes.
Jan 3, 19:05 UTC Investigating - We are investigating slower than usual workflow processing time. We will update again in 20 minutes.

Last Update: A few months ago

CircleCI UI unavailable

Jan 2, 18:40 UTC Resolved - The platform is stable, and jobs are being processed normally. Incident resolved.
Jan 2, 18:21 UTC Monitoring - The platform has recovered, and we are monitoring the system for stability and performance. We will provide an update within 20 minutes.
Jan 2, 18:07 UTC Investigating - We are investigating a major outage on our platform that affects multiple components. Our engineers are working to determine the cause and will provide an update within 20 minutes.

Last Update: A few months ago

UI Not loading

Dec 13, 16:31 UTC Resolved - The incident is resolved. Thank you for your patience.
Dec 13, 16:29 UTC Update - We are continuing to monitor the platform for stability and performance. We will provide an update within 20 minutes.
Dec 13, 16:05 UTC Monitoring - Our engineers have rolled back the breaking change, and the UI is accessible again. We will continue to monitor the platform for stability and performance.
Dec 13, 15:53 UTC Identified - We have identified a problem that prevents our UI from loading. Our engineers are in the process of rolling back the breaking change.

Last Update: A few months ago

Test results processing outage

Dec 7, 03:27 UTC Resolved - The Test Results Processor has been stabilized and the incident resolved. We thank you for your patience.
Dec 7, 03:13 UTC Update - We are continuing to monitor the test results processor.
Dec 7, 02:36 UTC Monitoring - We've scaled up our test results processor to handle increased demand and are currently monitoring system stability.
Dec 7, 02:30 UTC Investigating - We are investigating an outage causing test results processing to be delayed.

Last Update: A few months ago

2.0 Jobs not running

Nov 28, 23:46 UTC Resolved - All jobs are now being processed normally, and the incident is resolved. Thank you for your patience.
Nov 28, 23:19 UTC Update - We are still processing the backlog of macOS 2.0 jobs for Xcode 9.4.0 and Xcode 9.4.1. Other CircleCI 2.0 jobs are otherwise being processed normally. Our engineers are monitoring the system for performance and stability. We will provide an update within 30 minutes.
Nov 28, 22:49 UTC Update - Service has been restored and we are processing the backlog of queued jobs. Our engineers are monitoring the system for performance and stability. We will provide an update within 30 minutes.
Nov 28, 22:20 UTC Update - Service has been restored and we are processing the backlog of queued jobs. Our engineers are monitoring the system for performance and stability. We will provide an update within 30 minutes.
Nov 28, 21:51 UTC Update - Service has been restored and we are processing the backlog of queued jobs. Our engineers are monitoring the system for performance and stability. We will provide an update within 30 minutes.
Nov 28, 21:31 UTC Update - Service has been restored and we are processing the backlog of queued jobs. Our engineers are monitoring the system for performance and stability. We will provide an update within 20 minutes.
Nov 28, 21:11 UTC Monitoring - Service has been restored and we are processing the backlog of queued jobs. Our engineers are monitoring the system for performance and stability. We will provide an update within 20 minutes.
Nov 28, 21:00 UTC Identified - We believe we have identified the cause of the failure and our engineers are working on a resolution. We will post an update within 20 minutes.
Nov 28, 20:51 UTC Update - Our engineers are working to determine the cause of the failure. We will post an update within 20 minutes.
Nov 28, 20:32 UTC Update - Our engineers are working to determine the cause of the failure. We will post an update within 20 minutes.
Nov 28, 20:13 UTC Update - Our engineers are working to determine the cause of the failure. We will post an update within 20 minutes.
Nov 28, 19:53 UTC Update - Our engineers are working to determine the cause of the failure. We will post an update within 20 minutes.
Nov 28, 19:34 UTC Investigating - Our engineers are investigating an issue that is preventing 2.0 jobs from running.

Last Update: A few months ago

We are investigating elevated error rates in the UI when viewing workflows

Nov 28, 15:45 UTC Resolved - Issues causing elevated errors in the workflows UI have been identified and resolved.
Nov 28, 15:35 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Some Docker Hub Operations are degraded

Nov 24, 23:09 UTC Resolved - We are seeing very few issues, though some are still occurring. We are closing the incident, but if we see a spike we will re-open it.
Nov 24, 21:10 UTC Update - Apologies for the "fix has been implemented" message; it has not been - I blame the pumpkin pie! We are monitoring the situation and will update.
Nov 24, 21:08 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Nov 24, 21:07 UTC Identified - Our Support team has identified that some operations with Docker Hub are failing. We will monitor Docker Hub and notify you when it has resolved.

Last Update: A few months ago

Elevated errors loading CircleCI UI

Nov 16, 10:30 UTC Resolved - A bug was introduced into our frontend code at approximately 10:30 UTC on Nov 16th, causing some of our dashboard pages to fail to load for some users. Affected users would see a blank screen when trying to view the list of their jobs or workflows. The frontend bug was removed from production around 8:55 UTC on Nov 17th, which resolved the issue.

Last Update: A few months ago

2.0 builds queueing

Nov 15, 17:18 UTC Resolved - This incident has been resolved.
Nov 15, 17:07 UTC Update - We are continuing to monitor for any further issues.
Nov 15, 16:57 UTC Monitoring - We have identified and resolved the issue; queue time is returning to normal and we are monitoring our fleet.
Nov 15, 16:35 UTC Investigating - We are experiencing an issue placing builds on our infrastructure, causing elevated queue times. We will provide an update in 20 minutes.

Last Update: A few months ago

GitHub Checks Status Updates

Oct 31, 19:13 UTC Resolved - We are no longer seeing any delay. Thank you for your patience as we worked through this incident.
Oct 31, 18:45 UTC Monitoring - We've identified the issue causing GitHub Checks to be delayed and have implemented a fix. We are now monitoring the delay and will provide continuing updates.
Oct 31, 18:22 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 31, 18:00 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 31, 17:41 UTC Investigating - We are investigating a delay in the delivery of GitHub Checks status notifications.

Last Update: A few months ago

GitHub Checks Status Updates

Oct 31, 00:06 UTC Resolved - At this time we are considering this incident resolved, and we thank you for your patience.
Oct 30, 23:38 UTC Update - The delay has been resolved; however, we are continuing to investigate this incident and will continue to monitor GitHub Checks.
Oct 30, 23:28 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 23:04 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 22:39 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 22:18 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 21:58 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 21:37 UTC Investigating - We are investigating a delay in the delivery of GitHub Checks status notifications.

Last Update: A few months ago

GitHub Checks Status Updates

Oct 30, 20:08 UTC Resolved - GitHub Checks notifications are being delivered with no delay. Thank you for your patience.
Oct 30, 19:44 UTC Monitoring - The delivery of GitHub Checks status notifications is back to normal levels.
Oct 30, 19:26 UTC Investigating - We are currently investigating a delay in the delivery of GitHub Checks status notifications.

Last Update: A few months ago

Elevated errors pushing to and pulling from Docker Hub Registry

Oct 30, 16:11 UTC Resolved - The Docker Hub issues have been resolved; we are no longer seeing issues pushing or pulling from Docker Hub in CircleCI jobs.
Oct 30, 15:46 UTC Identified - Docker Hub Registry is experiencing elevated error rates pushing and pulling images. CircleCI 2.0 jobs that pull images from Docker Hub which we have not already cached may fail to start. See https://status.docker.com/pages/incident/533c6539221ae15e3f000031/5bd82da144d76c04b854c3a4

Last Update: A few months ago

Long provisioning times for VM and remote docker jobs

Oct 30, 14:29 UTC Resolved - Provisioning times remain stable at normal levels. Thank you for your patience.
Oct 30, 13:55 UTC Monitoring - Provisioning times have returned to normal levels. We will continue to monitor for twenty minutes.
Oct 30, 13:30 UTC Identified - We have isolated the fault to a database and are working to mitigate the problem.
Oct 30, 11:17 UTC Investigating - Some VM and remote docker jobs are not being executed in a timely manner. We are investigating.

Last Update: A few months ago

Elevated errors pushing to and pulling from Docker Hub Registry

Oct 30, 10:53 UTC Resolved - The Docker Hub issues have been resolved; we are no longer seeing issues pushing or pulling from Docker Hub in CircleCI jobs.
Oct 30, 10:35 UTC Identified - Docker Hub Registry is experiencing elevated error rates pushing and pulling images. CircleCI 2.0 jobs that pull images from Docker Hub which we have not already cached may fail to start. See https://status.docker.com/pages/533c6539221ae15e3f000031 for up-to-date status information.

Last Update: A few months ago

Production Database Maintenance

Oct 27, 20:46 UTC Completed - We have completed all planned maintenance and would like to thank you for your patience.
Oct 27, 18:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 26, 20:19 UTC Scheduled - We will be working on each of our production databases, and while we do not anticipate any significant downtime, we want you to be aware of this event.

Last Update: A few months ago

GitHub Outage

Oct 22, 23:25 UTC Resolved - GitHub has resolved the issue; we are no longer seeing any abnormal load, and all queues are normal. Thank you for your patience as we worked through this set of events.
Oct 22, 22:27 UTC Update - Per GitHub: Webhook deliveries have caught up. We will continue to monitor and maintain capacity as we work through the backlog of jobs.
Oct 22, 22:03 UTC Update - We are continuing to process webhooks as we receive them and are meeting current demand. Some fleets will continue to see a backlog.
Oct 22, 21:33 UTC Update - We are continuing to process webhooks as we receive them and are meeting current demand. Some fleets will continue to see a backlog.
Oct 22, 20:54 UTC Update - We are continuing to process webhooks as we receive them and are meeting current demand. Some fleets will continue to see a backlog for a while.
Oct 22, 20:22 UTC Update - We are continuing to process webhooks as we receive them and have scaled to meet the continued demand.
Oct 22, 19:53 UTC Update - We are continuing to process webhooks as we receive them and have scaled to meet the continued demand.
Oct 22, 19:19 UTC Update - We are continuing to process webhooks as we receive them and have scaled to meet the demand.
Oct 22, 18:36 UTC Update - We are continuing to process jobs as they are pushed from GitHub - be aware that our macOS fleet is going to be at capacity.
Oct 22, 18:25 UTC Update - We have seen inbound hooks flowing into our system, and we are monitoring to ensure that we have capacity to meet the demand.
Oct 22, 17:40 UTC Update - Per GitHub: We have temporarily paused delivery of webhooks while we address an issue. We are working to resume delivery as soon as possible.
Oct 22, 16:46 UTC Update - Per GitHub: We have resumed delivery of webhooks and will continue to monitor as we process a delayed backlog of events.
Oct 22, 16:31 UTC Update - From GitHub: We've completed validation of data consistency and have enabled some background jobs. We're continuing to monitor as the system recovers and expect to resume delivering webhooks at 16:45 UTC.
Oct 22, 16:15 UTC Monitoring - At 22:52 UTC on 21 October (15:52 PDT), GitHub experienced a network partition and subsequent database failure. This has caused intermittent issues with webhook delivery and other events that CircleCI depends on to manage your CircleCI workflows and jobs. The downtime has also prevented us from making API calls to GitHub to check on authorization and project/organization status. Until GitHub has ended their outage, we will be unable to know fully what changes or issues this has caused with your projects or jobs within our system. Furthermore, when GitHub does start delivering webhooks again, we will see a surge of jobs starting, and we will immediately scale in response and remain overprovisioned until the surge is complete. CircleCI Discuss: https://discuss.circleci.com/t/github-outage-on-21-october-2018/25903

Last Update: A few months ago

CircleCI site-wide outage

Jul 19, 22:44 UTC Resolved - All of our operations have returned to normal levels; thank you again for your patience as we worked through this issue.
Jul 19, 22:17 UTC Update - We are back to normal operations, but we will monitor for another 20 minutes. Thank you for your patience as we worked through this.
Jul 19, 22:15 UTC Update - We are continuing to monitor for any further issues.
Jul 19, 21:57 UTC Update - We are continuing to monitor as we work through our remaining backlog during the recovery.
Jul 19, 21:34 UTC Update - We are continuing to monitor and work through remediation efforts as our systems recover.
Jul 19, 21:13 UTC Monitoring - We're continuing to monitor and work on further remediation efforts as our systems recover.
Jul 19, 21:06 UTC Update - We are continuing to work on restoring normal service.
Jul 19, 20:44 UTC Update - We are continuing to work on restoring normal service.
Jul 19, 20:23 UTC Update - We've restored partial service and some builds are now running in our systems. We expect builds to be degraded throughout the day as we process our backlog.
Jul 19, 20:00 UTC Update - We are continuing to work on restoring builds.
Jul 19, 19:39 UTC Update - We are continuing to work on restoring builds.
Jul 19, 19:16 UTC Update - We are continuing to work on restoring builds; we appreciate your patience as we work through this.
Jul 19, 18:50 UTC Update - We are continuing to work on restoring builds.
Jul 19, 18:28 UTC Update - We are continuing to work on a fix for this issue. The CircleCI website and UI are partially up, and we are working to restore builds.
Jul 19, 17:59 UTC Update - We are continuing to work on fixing the underlying issue. One of our database clusters is unavailable at the moment, and we are working to restore service. We expect builds to be degraded throughout the day as we bring services back up and process our backlog.
Jul 19, 17:40 UTC Update - We are continuing to work on a fix for this issue.
Jul 19, 17:20 UTC Update - We are continuing to work on a fix for this issue.
Jul 19, 16:59 UTC Identified - We have identified the source of the issue and are working on rolling out a fix.
Jul 19, 16:56 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Workflows UI Degraded Performance

Jul 25, 15:30 UTC Resolved - The Workflows UI suffered an issue where any workflow that had a pending approval step would fail to load. We have corrected the issue and Workflows should load properly again.
Started: 14:35 UTC; Identified: 15:12 UTC; Resolved: 15:27 UTC

Last Update: A few months ago

Degradation of service for Github hosted projects

Aug 1, 16:56 UTC Resolved - This incident has been resolved.
Aug 1, 16:31 UTC Monitoring - GitHub API response rates have returned to normal. We are continuing to monitor status.
Aug 1, 16:19 UTC Identified - GitHub API responses are continuing to experience elevated failure rates; GitHub is currently reporting degraded performance. We will continue to update every 20 minutes.
Aug 1, 15:55 UTC Investigating - We are observing increased failure rates in API calls to GitHub; we will update again in 20 minutes.

Last Update: A few months ago

Unauthorized Status on Workflows utilizing contexts

Aug 1, 17:00 UTC Resolved - What happened? CircleCI is working on a feature to add secure permissions to contexts. As part of the rollout, we added permissions to all contexts. The permissions check did not handle some cases:
- Customers who have never logged in to CircleCI and are using contexts
- Customers who have scheduled workflows
In these two cases, customers noticed an “unauthorized” status on their workflows. The issues occurred between 19:00 and 21:00 UTC on 2018-07-31 and between 14:00 and 17:00 UTC on 2018-08-01.
What did we do? We have updated our services to accommodate these cases. Our apologies for the inconvenience caused.
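The two unhandled cases are worth spelling out: a permission check keyed on a logged-in user's identity rejects any actor that has no user record, which is exactly what a never-logged-in committer or a scheduled trigger looks like. A minimal sketch of the bug class, with hypothetical names (Actor, may_use_context) that are not CircleCI internals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Actor:
    user_id: Optional[str]      # None: the triggering user never logged in to CircleCI
    is_scheduled: bool = False  # True: the workflow was started by a schedule

def may_use_context(actor: Actor, allowed_user_ids: set) -> bool:
    # A naive `actor.user_id in allowed_user_ids` marks both special cases
    # "unauthorized"; they need explicit handling, e.g. falling back to the
    # project's owning organization.
    if actor.is_scheduled or actor.user_id is None:
        return True
    return actor.user_id in allowed_user_ids
```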

Last Update: A few months ago

Intermittent errors for Workflows UI

Aug 2, 00:04 UTC Resolved - Workflows UI error rates have remained below baseline. We are marking this incident as resolved.
Aug 1, 23:40 UTC Monitoring - The error rate for the Workflows UI has returned to baseline; we will monitor for 20 minutes to ensure it does not return.
Aug 1, 23:16 UTC Identified - We are monitoring an increase in Workflows UI errors.

Last Update: A few months ago

MacOS Network Maintenance

Aug 5, 08:00 UTC Completed - The scheduled maintenance has been completed.
Aug 5, 03:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jul 26, 15:46 UTC Scheduled - The colocation facility that hosts our macOS and macOS 2.0 build environments will be conducting network maintenance. During this window there will be intermittent service interruptions that may cause build failures or queuing. We apologize for any inconvenience, and thank you for your patience.

Last Update: A few months ago

Plan Settings UI

Aug 6, 23:51 UTC Resolved - We have not seen any signs of the issue remaining and are considering this resolved. Thank you for your patience as we worked with our upstream provider to resolve this.
Aug 6, 23:32 UTC Monitoring - We have coordinated a roll-back with our upstream provider and will monitor to ensure that the issue remains fixed.
Aug 6, 22:50 UTC Update - We are working with our primary upstream provider to fix the permissions issue. A work-around is to change your Org visibility to Public. We will report back within 20 minutes.
Aug 6, 22:11 UTC Update - We have identified that our permissions check system is showing elevated error rates from our primary upstream provider. A work-around is to change your Org visibility to Public. We are working with them to resolve the issue and will report back within 20 minutes.
Aug 6, 21:38 UTC Identified - We have identified that our permissions check system is showing elevated error rates from our primary upstream provider. We are working with them to resolve the issue and will report back within 20 minutes.
Aug 6, 21:12 UTC Update - We are looking into why some users have lost access to organizations they are a member of. We will update again in 20 minutes.
Aug 6, 20:45 UTC Update - We are continuing to look into what is causing trouble with our Plans and Org settings UI. We will update again in 20 minutes.
Aug 6, 20:20 UTC Investigating - Some organizations are currently unable to access their Plan Settings pages. We are investigating the issue.

Last Update: A few months ago

Issues loading workflows UI

Aug 9, 22:03 UTC Postmortem - Yesterday we experienced an issue that caused workflows to be degraded for some customers for about an hour starting at 20:19 UTC, with service fully restored by 21:00 UTC. The event was triggered by a period of heightened packet loss and latency at an upstream provider. We began to notice increased API error rates within our system, which caused workflow jobs to not start. Once the upstream provider returned to normal activity, workflows began processing and we were able to clear the backlog, while continuing to closely monitor.
Aug 7, 21:17 UTC Resolved - All services are stable. We would like to thank you for your patience. Incident resolved.
Aug 7, 20:51 UTC Update - Jobs are being processed normally. We will continue to monitor our systems for stability.
Aug 7, 20:36 UTC Monitoring - We're experiencing a network failure with an upstream provider and are monitoring the situation.
Aug 7, 20:26 UTC Update - We are seeing increased API error rates on our front-end. Some jobs may experience queueing.
Aug 7, 20:19 UTC Investigating - We are investigating an issue with the workflows UI.

Last Update: A few months ago

Contexts Outage

Aug 21, 17:11 UTC Resolved - This incident has now been resolved.
Aug 21, 16:52 UTC Monitoring - We've implemented a fix and builds are succeeding again. Some delays on builds are expected as we process the backlog. We are continuing to monitor systems for further issues.
Aug 21, 16:31 UTC Identified - We have identified the cause of the issue and are working on rolling out a fix.
Aug 21, 16:06 UTC Update - We are continuing to investigate this issue.
Aug 21, 16:05 UTC Investigating - We are currently experiencing problems with the Contexts service, and builds relying on this service will fail to run. We are investigating the cause of this issue and will provide further updates in 20 minutes.

Last Update: A few months ago

Planned Downtime on Docker Hub

Aug 25, 19:00 UTC Completed - The scheduled maintenance has been completed.
Aug 25, 18:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Aug 15, 05:29 UTC Scheduled - Container images will be unavailable for pulls from Docker Hub during this time. For more information, please see the upstream advisory: https://success.docker.com/article/planned-downtime-on-hub-cloud-store

Last Update: A few months ago

Slow API responses

Aug 31, 21:27 UTC Resolved - Our API response time has returned to normal, and our UI is loading correctly. Thank you for your patience.
Aug 31, 21:14 UTC Monitoring - We are monitoring the UI for stability and performance, and will provide an update within 20 minutes.
Aug 31, 21:06 UTC Investigating - We are investigating slow API responses which are impacting our UI.

Last Update: A few months ago

Increased Bitbucket Permission Errors

Sep 4, 20:37 UTC Resolved - The issue has been resolved. Please contact support if you experience any further difficulty. Thank you for your patience.
Sep 4, 20:17 UTC Monitoring - We have released an update that is expected to resolve the issue, and are monitoring the platform to confirm. We will provide an update within 20 minutes.
Sep 4, 19:56 UTC Update - We are rolling out an update that is expected to resolve the issue.
Sep 4, 19:37 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 19:05 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 18:46 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 18:01 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 17:40 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 17:17 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 16:50 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 16:30 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 16:07 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 15:46 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 15:25 UTC Update - We are continuing to work on a fix for this issue; we will provide further updates as they are available.
Sep 4, 15:00 UTC Update - We are continuing to work on a fix for this issue.
Sep 4, 14:00 UTC Update - We've deployed some partial changes and are continuing to investigate. We will provide further updates as they are available.
Sep 4, 11:00 UTC Identified - We've identified an issue with how we handle Bitbucket rate limiting and are working to resolve the issue. We'll provide an update soon.
Sep 4, 10:34 UTC Investigating - We're investigating reports of users being unable to access their project and organisation settings, and other pages which require elevated permissions. We will provide an update shortly.
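The 11:00 update names the trigger: handling of Bitbucket rate limiting. The incident does not describe CircleCI's actual fix, but the standard pattern for a client on the receiving end of HTTP 429 is exponential backoff that honors the Retry-After header when the server sends one. A generic sketch:

```python
import time

import requests

def get_with_backoff(url: str, max_retries: int = 5, base_delay: float = 1.0):
    """GET `url`, retrying on HTTP 429 with exponential backoff."""
    for attempt in range(max_retries):
        resp = requests.get(url)
        if resp.status_code != 429:
            return resp
        # Prefer the server's hint when present; otherwise back off 1s, 2s, 4s, ...
        retry_after = resp.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * 2 ** attempt
        time.sleep(delay)
    raise RuntimeError(f"still rate limited after {max_retries} attempts: {url}")
```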

Last Update: A few months ago

Long allocation time for VM and remote docker jobs

Sep 13, 19:41 UTC Resolved - This incident has been resolved. Please contact support if you experience any further difficulty. Thank you for your patience.
Sep 13, 19:20 UTC Update - We are continuing to monitor our platform performance and stability as we work through the builds backlog. We will provide an update within 20 minutes.
Sep 13, 18:54 UTC Update - We are continuing to monitor systems for stability and performance. Some delays expected as we work through the builds backlog. We will provide an update within 20 minutes.
Sep 13, 18:33 UTC Update - We continue to monitor the platform for stability and performance and will provide an update within 20 minutes.
Sep 13, 18:13 UTC Update - We have processed the outstanding jobs and provisioning times have returned to acceptable values. We will continue to monitor provisioning performance, and will provide an update within 20 minutes.
Sep 13, 17:49 UTC Monitoring - We have processed the outstanding jobs and provisioning times have returned to acceptable values. We will continue to monitor provisioning performance, and will provide an update within 20 minutes.
Sep 13, 17:25 UTC Update - We continue to process the backlog of VM and remote docker jobs. We will provide an update within 20 minutes.
Sep 13, 17:07 UTC Update - We continue to process the backlog of VM and remote docker jobs. We will provide an update within 20 minutes.
Sep 13, 16:47 UTC Update - We are working to address the backlog of VM and remote docker jobs. We will provide an update within 20 minutes.
Sep 13, 16:28 UTC Update - We are working to address additional resource constraints. We will provide an update within 20 minutes.
Sep 13, 15:59 UTC Update - Our team is working to mitigate multiple capacity issues that are responsible for long wait times when allocating resources for remote docker and VM jobs. We will post an update within 20 minutes.
Sep 13, 15:36 UTC Identified - Our team has identified the cause of the performance problem, and are working to mitigate the issue.
Sep 13, 15:19 UTC Investigating - We are investigating an issue that may cause long allocation times for remote docker and VM jobs. We will provide an update within 20 minutes.

Last Update: A few months ago

Long provisioning times for VM and remote docker jobs

Sep 14, 18:16 UTC Resolved - All jobs are being processed normally. Thank you for your patience.
Sep 14, 17:56 UTC Update - We are continuing to monitor for any further issues.
Sep 14, 17:56 UTC Monitoring - We are processing VM and docker machine jobs. We will continue to monitor the system for stability and performance, and will provide an update within 20 minutes.
Sep 14, 17:35 UTC Update - We are working to restore service, and will update within 20 minutes.
Sep 14, 17:17 UTC Update - We're working to address the underlying issues in VM provisioning. We may prematurely cancel or fail some jobs as we reduce pressure on the system. We will provide an update within 20 minutes.
Sep 14, 16:57 UTC Update - We're working to address the underlying issues in VM provisioning. We may prematurely cancel or fail some jobs as we reduce pressure on the system. We will provide an update within 20 minutes.
Sep 14, 16:50 UTC Update - Our team is working to address the backlog. We will provide an update within 20 minutes.
Sep 14, 16:30 UTC Update - Our team is working to address the backlog of VM and remote docker jobs. We will provide an update within 20 minutes.
Sep 14, 16:08 UTC Identified - Some VM and remote docker jobs are experiencing longer than normal provisioning times. We are working to mitigate the problem.

Last Update: A few months ago

Long provisioning times for VM and remote docker jobs

Sep 17, 16:31 UTC Resolved - Jobs are being processed normally, and we are declaring this incident resolved. Thank you for your patience.
Sep 17, 16:10 UTC Monitoring - We have completed the maintenance and are processing jobs normally. We are monitoring the platform for performance and reliability and will post an update within 20 minutes.
Sep 17, 15:57 UTC Update - We are conducting emergency maintenance to mitigate the long VM provisioning times for machine and remote docker jobs. We expect this to take approximately 30 minutes. Machine and remote docker jobs will not run during this time. We will provide an update within 30 minutes.
Sep 17, 15:49 UTC Identified - Some VM and remote docker jobs are queueing. We are working to mitigate the problem and will provide an update within 20 minutes.

Last Update: A few months ago

Long provisioning times for VM and remote docker jobs

Sep 17, 22:23 UTC Resolved - We are marking the incident as resolved, but macOS builds will take a few hours to process the current backlog; as such, that component will remain marked as degraded.
Sep 17, 22:08 UTC Update - We are continuing to monitor for any further issues.
Sep 17, 21:55 UTC Monitoring - We are seeing active provisioning of new VMs and the backlog is draining. We will continue to monitor the situation and update again in 30 minutes.
Sep 17, 21:44 UTC Identified - We are scaling capacity to meet current recovery demands; some queueing is anticipated. We will update in 30 minutes.
Sep 17, 21:09 UTC Monitoring - All jobs are executing normally. We will continue to monitor the stability and performance of the platform, and provide an update within 40 minutes.
Sep 17, 20:53 UTC Update - VM, macOS 2.0, and remote docker jobs are not executing. Our engineers are working to mitigate the issue. We will provide an update within 20 minutes.
Sep 17, 20:33 UTC Update - VM and remote docker jobs are not executing. Our engineers are working to mitigate the issue. We will provide an update within 20 minutes.
Sep 17, 20:11 UTC Identified - A regression has been identified and we are rolling back to a known good version. We will provide an update within 20 minutes.
Sep 17, 20:09 UTC Investigating - Some VM and remote docker jobs are not being executed in a timely manner. We are investigating.

Last Update: A few months ago

Remote Docker jobs experiencing delays

Sep 18, 16:54 UTC Resolved - This incident has been resolved.
Sep 18, 16:33 UTC Monitoring - Service times have returned to normal levels. We will continue to monitor for 20 minutes.
Sep 18, 16:11 UTC Investigating - Remote Docker jobs are currently experiencing one- to four-minute delays in allocation.

Last Update: A few months ago

Delays on macOS 2.0 builds

Sep 26, 23:47 UTC Resolved - This incident has been resolved.
Sep 26, 15:06 UTC Identified - macOS 2.0 builds are currently experiencing longer-than-typical wait times.

Last Update: A few months ago

Long provisioning time for machine and remote docker jobs

Oct 1, 22:07 UTC Resolved - Incident resolved. Thank you for your patience.
Oct 1, 21:45 UTC Monitoring - All jobs are being processed normally. We will continue to monitor the platform for stability and performance.
Oct 1, 21:30 UTC Identified - We are experiencing long provisioning times for machine and remote docker jobs, and are working to mitigate the issue.

Last Update: A few months ago

CircleCI UI unavailable

Oct 2, 16:11 UTC Resolved - Service has been restored. Thank you for your patience.
Oct 2, 16:10 UTC Update - We are continuing to monitor the platform for stability. We will provide an update within 20 minutes.
Oct 2, 15:48 UTC Monitoring - We have deployed a change to fix the issue, and are monitoring the UI for availability. We will post an update within 20 minutes.
Oct 2, 15:43 UTC Identified - We have identified the cause of the outage and are working to correct the problem. We will post an update within 20 minutes.
Oct 2, 15:40 UTC Investigating - We are investigating an issue that may cause the UI to load incorrectly for some users.

Last Update: A few months ago

Build queues

Oct 4, 10:09 UTC Resolved - Builds are now being processed normally. Thank you for your patience.
Oct 4, 10:03 UTC Identified - We have identified the cause of the issue and are working to reduce the backlog of queued builds.
Oct 4, 09:55 UTC Update - We are continuing to investigate this issue.
Oct 4, 09:53 UTC Investigating - We're currently investigating a possible issue. We'll update as soon as we know more details.

Last Update: A few months ago

Elevated allocation times for Remote Docker jobs

Oct 12, 00:19 UTC Resolved - Allocation times for Remote Docker jobs remain stable at normal levels. Thank you for your patience.
Oct 11, 23:59 UTC Monitoring - Allocation times for Remote Docker jobs have returned to normal levels. We will continue to monitor for twenty minutes.
Oct 11, 23:53 UTC Investigating - We are investigating unusually high allocation times for Remote Docker jobs.

Last Update: A few months ago

Increased rate of macOS provisioning failures

Oct 16, 14:10 UTC Resolved - We have processed the backlog of jobs and macOS wait times have returned to normal. Thank you for your patience.
Oct 16, 13:49 UTC Monitoring - We experienced a spike in macOS creation errors. The problem has been remedied, and we are now processing the backlog of jobs. We will post an update within 20 minutes.

Last Update: A few months ago

Some jobs failing with "Blocked due to plan-no-credits-available"

Oct 17, 00:06 UTC Resolved - We have seen no further occurrences of this error and are considering the incident resolved. Thank you for your patience.
Oct 16, 23:52 UTC Monitoring - We have deployed a fix for this issue and are no longer seeing jobs being blocked by this error.
Oct 16, 23:44 UTC Update - The update has been deployed to fix this issue; we are confirming this now.
Oct 16, 23:34 UTC Identified - We have identified an issue where some jobs are incorrectly blocked with the message "Blocked due to plan-no-credits-available" and are deploying a fix.
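The updates do not state the root cause, so the following is only an illustration of a common bug class behind errors like this one: treating a credit balance that could not be read the same as a balance of zero. All names are hypothetical, not CircleCI internals:

```python
from typing import Optional

def should_block(credits: Optional[int]) -> bool:
    # Bug class: when the balance lookup fails, `credits` is None, and code
    # like `if not credits: block()` wrongly reports "no credits available".
    # Safer: only block on a definitive non-positive balance.
    if credits is None:
        return False  # unknown balance: let the job run and alert internally
    return credits <= 0
```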

Last Update: A few months ago

Long provisioning times for macOS

Oct 17, 22:30 UTC Resolved - We are processing jobs normally.
Oct 17, 22:14 UTC Update - We are still processing a backlog of macOS jobs for Xcode 9.0.0. We will provide an update within 40 minutes.
Oct 17, 21:35 UTC Update - We are still processing a backlog of macOS jobs. We will provide an update within 40 minutes.
Oct 17, 20:55 UTC Update - We are still processing a backlog of macOS jobs. We will provide an update within 40 minutes.
Oct 17, 20:17 UTC Update - We are still processing a backlog of macOS jobs. We will provide an update within 40 minutes.
Oct 17, 19:45 UTC Monitoring - We are processing a backlog of macOS jobs.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 23, 03:23 PST Resolved - This incident has been resolved. Thank you for your patience.
Jan 23, 03:03 PST Monitoring - The build backlog has been cleared. Full capacity has been restored. We will continue to monitor for 20 minutes.
Jan 23, 02:44 PST Update - The processing of 2.0 builds has been temporarily paused while corrective measures are put in place.
Jan 23, 01:43 PST Investigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

Circle 2.0 Jobs are not running

Jan 16, 21:20 PST Resolved - Circle 2.0 Jobs are running and we do not see any issues at this time. Thank you for your patience as we worked through this incident.
Jan 16, 21:05 PST Monitoring - We have confirmed that Circle 2.0 Jobs are running. We will monitor the situation and update again in 10 minutes.
Jan 16, 20:46 PST Update - We are continuing to work on the issue and will update again in 20 minutes.
Jan 16, 20:23 PST Identified - We have identified the cause of the Circle 2.0 Jobs not running and are working on a fix. We will update again in 20 minutes.
Jan 16, 20:18 PST Investigating - We are currently investigating this issue.

Last Update: A few months ago

Circle 2.0 jobs are being dispatched slowly

Jan 16, 08:44 PST Resolved - The change was successfully implemented, and the platform is now performing as expected.
Jan 16, 08:24 PST Monitoring - We have implemented a change to remediate the issue, and we continue to monitor the performance of the platform.
Jan 16, 08:18 PST Identified - We have identified the issue, and we are taking steps to remedy the situation.
Jan 16, 08:05 PST Investigating - We are investigating an issue that causes 2.0 jobs to be dispatched slowly. An update will be posted in 20 minutes.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 15, 04:04 PST Resolved - This incident has been resolved. Thank you for your patience.
Jan 15, 03:05 PST Monitoring - The build backlog has been cleared. Full capacity has been restored. We will continue to monitor for 20 minutes.
Jan 15, 02:36 PST Update - The processing of 2.0 builds has been resumed at partial capacity. Full capacity will be resumed shortly.
Jan 15, 02:12 PST Identified - The processing of 2.0 builds has been temporarily paused while corrective measures are put in place.
Jan 15, 01:20 PST Investigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

CircleCI Database Updates

Jan 13, 00:58 PST Completed - The scheduled maintenance has been completed.
Jan 13, 00:00 PST In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 9, 11:03 PST Scheduled - CircleCI will be performing upgrades required by AWS to our database servers. During this maintenance window, all new builds will be queued and certain UI features may be slow to respond. https://discuss.circleci.com/t/upcoming-scheduled-maintenance-13-january-2018-08-00-utc/19236

Last Update: A few months ago

Trusty fleet build queue

Jan 11, 02:50 PST Resolved - We now have enough capacity in the Trusty fleet.
Jan 11, 02:36 PST Monitoring - We have identified and fixed the root cause. More capacity in the Trusty fleet is on its way. We are closely monitoring until there is enough capacity.
Jan 11, 02:10 PST Investigating - We are having an issue scaling up our Trusty fleet. We are currently investigating.

Last Update: A few months ago

2.0 Build System Outage

Dec 22, 13:01 PST Resolved - The 2.0 build system is processing builds again. We thank you for your patience while we dealt with this situation.
Dec 22, 12:55 PST Update - We have rolled out a fix to our system and are monitoring for any further errors. If your build failed, you may need to manually retry it via the retry button on the build page.
Dec 22, 12:43 PST Identified - We have been alerted to, and identified the potential cause of, an issue preventing new 2.0 builds from running correctly. We are rolling out a fix and will provide an update within 20 minutes.
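For anyone scripting the manual retry mentioned in the 12:55 update, the CircleCI v1.1 REST API exposed a retry endpoint for builds at the time. A sketch with placeholder org, repo, and token values:

```python
import requests

def retry_build(vcs: str, org: str, repo: str, build_num: int, token: str) -> dict:
    """POST to the v1.1 retry endpoint; returns a summary of the new build."""
    url = (f"https://circleci.com/api/v1.1/project/"
           f"{vcs}/{org}/{repo}/{build_num}/retry")
    resp = requests.post(url, params={"circle-token": token})
    resp.raise_for_status()
    return resp.json()

# e.g. retry_build("github", "my-org", "my-repo", 1234, "<personal API token>")
```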

Last Update: A few months ago

Workflows Outage

Dec 13, 14:47 PST Resolved - At this time the Workflows infrastructure appears to be stable and we are considering this incident resolved. We appreciate your patience with this matter!
Dec 13, 14:41 PST Monitoring - We've identified the problem and successfully rolled out a fix. We are now monitoring the fix to ensure stability and thank you for your patience with this issue.
Dec 13, 14:32 PST Identified - We are continuing to roll out a fix for this issue. We will provide an update once the fix has been deployed.
Dec 13, 13:57 PST Update - We are deploying a temporary fix at this time but are continuing to work on identifying, and correcting, the root cause of this issue.
Dec 13, 13:35 PST Investigating - We are seeing problems with adding new Workflows projects. We are investigating the cause and will provide an update within 20 minutes.

Last Update: A few months ago

Short 2.0 outage leading to some builds not running.

Dec 13, 06:49 PST Resolved - This incident has been resolved.
Dec 13, 05:32 PST Investigating - We had a 5-minute 2.0 outage starting at 13:14 UTC; builds created during this window are not running. We are clearing the backlog, but cancelling and re-running the affected build/workflow will get it running again.

Last Update: A few months ago
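
When many builds land in this state, re-running them one at a time through the UI is tedious. Below is a minimal sketch of scripting the retries instead, assuming the CircleCI v1.1 REST API's retry endpoint and a personal API token; the token, project path, and build numbers are hypothetical placeholders.

```python
import requests

# All values below are hypothetical placeholders -- substitute your own.
API_TOKEN = "YOUR_CIRCLECI_TOKEN"
PROJECT = "github/example-org/example-repo"   # :vcs-type/:username/:project
STUCK_BUILDS = [1412, 1413]                   # build numbers from the outage window

for build_num in STUCK_BUILDS:
    # The v1.1 API exposes POST .../:build_num/retry to re-run a build.
    resp = requests.post(
        f"https://circleci.com/api/v1.1/project/{PROJECT}/{build_num}/retry",
        params={"circle-token": API_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    # The response is a summary of the newly created build.
    print(f"retried {build_num} -> new build {resp.json().get('build_num')}")
```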

MacOS Network Maintenance

Dec 10, 04:30 PST Completed - The scheduled maintenance has been completed.
Dec 9, 23:30 PST In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Dec 6, 08:09 PST Scheduled - The colocation facility that hosts our MacOS build environment will be applying software updates to their network infrastructure. During this time there may be intermittent network disruptions which could cause some MacOS builds to fail.

Last Update: A few months ago

CircleCI API web access failure

Dec 6, 16:53 PST Resolved - We have not seen any further occurrences and are marking this as resolved.
Dec 6, 16:27 PST Monitoring - The CircleCI API web fleet is working cleanly and we are monitoring. We will report again in 20 minutes.
Dec 6, 16:14 PST Identified - We have rolled back the change causing our CircleCI API to fail. We will update again in 20 minutes.
Dec 6, 15:56 PST Investigating - We are looking into signs that our API and web servers are having problems. We will update again in 20 minutes.

Last Update: A few months ago

Workflows Outage

Dec 5, 15:52 PST Resolved - The Workflows system is functioning correctly again. Thank you for your patience with this situation.
Dec 5, 15:35 PST Monitoring - We have implemented a fix and the Workflows system appears to be operational again. We are currently monitoring the Workflows system to ensure there are no further issues.
Dec 5, 15:28 PST Identified - We've identified the cause of the outage and are working on implementing a resolution at this time.
Dec 5, 15:24 PST Investigating - Our monitoring system has alerted us to an issue with Workflows. We are currently investigating and we will provide an update within 20 minutes.

Last Update: A few months ago

Context page is currently offline

Dec 5, 10:53 PST Resolved - The Contexts page appears to be working and stable again. Thank you for your patience while we corrected this problem.
Dec 5, 10:36 PST Monitoring - We've implemented a fix and the Contexts page appears to be functional again. We will continue to monitor this to ensure the incident is fully resolved.
Dec 5, 10:20 PST Identified - The Contexts page of our UI is currently offline. We are working on restoring it and will provide an update in 20 minutes.

Last Update: A few months ago

CircleCI web UI unavailable

Nov 25, 13:39 PST Resolved - This incident has been resolved. Thank you for your patience.
Nov 25, 13:19 PST Monitoring - Access to the CircleCI web UI has been restored. We will continue to monitor for 20 minutes.
Nov 25, 12:59 PST Investigating - The CircleCI web UI is currently unavailable. We are investigating.

Last Update: A few months ago

Workflows unavailable

Nov 24, 12:54 PST Resolved - This incident has been resolved. Thank you for your patience.
Nov 24, 12:33 PST Monitoring - Workflows builds have been restored. We will continue to monitor for 20 minutes.
Nov 24, 12:30 PST Identified - Workflows for CircleCI 2.0 is currently unavailable. The problem has been identified, and remediation efforts are underway.

Last Update: A few months ago

CircleCI 2.0 VM Service

Nov 23, 08:32 PST Resolved - This incident has been resolved.
Nov 23, 08:13 PST Monitoring - We have scaled up to handle the queued VM requests and are monitoring the situation.
Nov 23, 08:09 PST Investigating - macOS, machine, and remote Docker builds are failing to allocate VMs, which will result in build failures. We will update in 20 minutes.

Last Update: A few months ago

Switch Organization function broken

Nov 22, 17:46 PST Resolved - We have not seen any recurrence of the Switch Organization function being broken and are considering this resolved. Thank you for your patience.
Nov 22, 17:23 PST Monitoring - The fix for the Switch Organization function has been deployed; we will continue to monitor for 20 minutes and then update.
Nov 22, 16:50 PST Identified - We are aware that the Switch Organization function is broken and a fix is being deployed. We will update again in 20 minutes.

Last Update: A few months ago

CircleCI Documentation and Marketing pages are slow to load

Nov 20, 15:26 PST Resolved - We have not seen any recurrence of the Documentation or Marketing slow page loads and are closing this incident. Thank you for your patience.
Nov 20, 14:41 PST Monitoring - We have removed the vendor scripting that was causing the slow page loads and are monitoring. We will update again in 20 minutes.
Nov 20, 14:25 PST Identified - The issue has been identified and a fix is being implemented.
Nov 20, 14:19 PST Investigating - We are aware that our Documentation and Marketing pages are slow to load due to a vendor outage. We are working to restore them and will update in 20 minutes.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 13:25 PST Resolved - We have not seen any slow page loads or other page failures. Thank you for your patience as we worked to resolve this.
Nov 16, 12:42 PST Monitoring - The page load fix has been deployed and is stable. We will monitor for 30 minutes to confirm and update then.
Nov 16, 12:02 PST Update - Work continues to return the impacted service to full function. We will update again in 20 minutes.
Nov 16, 11:17 PST Update - We are continuing work on the root cause of the page load issue; builds are not being impacted by these changes. We will update again in 20 minutes.
Nov 16, 10:46 PST Update - We are working on solving the root cause of the page load issue; builds are not being impacted by these changes. We will update again in 20 minutes.
Nov 16, 10:08 PST Update - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.
Nov 16, 09:38 PST Update - Work continues on solving the root cause of our page load issue. We will update again in 20 minutes.
Nov 16, 09:16 PST Update - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.
Nov 16, 08:53 PST Update - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.
Nov 16, 08:31 PST Identified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.
Nov 16, 08:03 PST Investigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Interruption to macOS 2.0 builds

Nov 9, 01:48 PST Resolved - Additional capacity is being brought online to prevent a recurrence of this problem. Thank you for your patience.
Nov 9, 01:27 PST Monitoring - Backlogged macOS 2.0 builds have drained. Next update in 20 minutes.
Nov 9, 01:12 PST Update - Backlogged macOS 2.0 builds are beginning to drain. Next update in 20 minutes.
Nov 9, 00:50 PST Identified - We have identified the cause of the interruption to macOS 2.0 builds. Next update in 20 minutes.
Nov 9, 00:24 PST Investigating - Engineers are investigating an interruption to macOS 2.0 builds.

Last Update: A few months ago

Scheduled Jobs are not running

Nov 8, 15:56 PST Resolved - We have not observed any new issues with the Job Scheduler. Thank you for your patience as we worked through this.
Nov 8, 15:33 PST Monitoring - We have deployed a fix for the Scheduled Jobs service and are now monitoring. We will update again in 20 minutes.
Nov 8, 15:17 PST Identified - We have identified why Scheduled Jobs are not running and are working to fix them. We will update in 20 minutes.
Nov 8, 15:16 PST Investigating - We are currently investigating this issue.

Last Update: A few months ago

CircleCI Project Settings page broken

Nov 6, 16:26 PST Resolved - This incident has been resolved.
Nov 6, 15:51 PST Monitoring - We have updated our UI code to fix the Project Settings page. We will monitor the situation and update again in 20 minutes.
Nov 6, 15:05 PST Identified - We are aware that the Project Settings page is broken and are deploying a fix.

Last Update: A few months ago

CircleCI 2.0 queuing

Nov 1, 09:47 PDT Resolved - No further queuing delays have been observed. Thank you for your patience.
Nov 1, 09:26 PDT Monitoring - All faulty build executors have been withdrawn from service. New builds are executing without delay. We will continue to monitor for 20 minutes.
Nov 1, 09:00 PDT Update - Remediation work continues. Faulty build executors are being withdrawn from the fleet. Next update in 20 minutes.
Nov 1, 08:41 PDT Update - Remediation work continues. Next update in 20 minutes.
Nov 1, 08:17 PDT Update - Remediation work continues. Next update in 20 minutes.
Nov 1, 07:54 PDT Identified - Builds are experiencing queue times of approximately 2.5 minutes on average. The issue has been identified and engineers are working to eliminate the wait time.

Last Update: A few months ago
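
A queue time like the 2.5-minute average above is the gap between when a job was queued and when it actually started. Below is a minimal sketch of estimating the same metric for your own project from recent builds, assuming the v1.1 API's per-build `queued_at` and `start_time` timestamps; the token and project path are placeholders.

```python
from datetime import datetime

import requests

API_TOKEN = "YOUR_CIRCLECI_TOKEN"              # placeholder
PROJECT = "github/example-org/example-repo"    # placeholder

# GET /project/... returns a list of recent build summaries.
resp = requests.get(
    f"https://circleci.com/api/v1.1/project/{PROJECT}",
    params={"circle-token": API_TOKEN, "limit": 100},
    timeout=30,
)
resp.raise_for_status()

def parse(ts):
    # Build timestamps look like "2017-11-01T07:54:00.000Z".
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ")

waits = [
    (parse(b["start_time"]) - parse(b["queued_at"])).total_seconds()
    for b in resp.json()
    if b.get("queued_at") and b.get("start_time")
]
if waits:
    print(f"average queue time over {len(waits)} builds: "
          f"{sum(waits) / 60 / len(waits):.1f} min")
```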

Interruption to CircleCI 2.0 build jobs

Nov 1, 06:41 PDT Resolved - No further problems have been observed. Thank you for your patience!
Nov 1, 06:19 PDT Monitoring - Queued builds have been flushed and should be executing now.
Nov 1, 06:14 PDT Update - Builds are now being restored.
Nov 1, 05:40 PDT Identified - The processing of queued 2.0 build jobs has been momentarily paused. Next update in 20 minutes.

Last Update: A few months ago

2.0 Build Jobs not running

Oct 31, 19:25 PDT Resolved - We have not seen any issues with CircleCI 2.0 jobs starting and are marking this incident as resolved - thank you for your patience!
Oct 31, 19:05 PDT Monitoring - We have not seen any CircleCI 2.0 jobs fail to start. We will monitor for any recurrence but feel that this is now resolved.
Oct 31, 18:33 PDT Update - The flow of CircleCI 2.0 jobs has resumed. We will update again when we have confirmed that all is back to normal.
Oct 31, 18:13 PDT Identified - We have identified the reason CircleCI 2.0 jobs were not starting and are rolling out the solution. We will update again in 20 minutes.
Oct 31, 18:00 PDT Update - We are continuing to look into why CircleCI 2.0 jobs are not starting. We will update again in 20 minutes.
Oct 31, 17:36 PDT Investigating - We are looking into the cause that is preventing CircleCI 2.0 jobs from starting. We will update again in 20 minutes.

Last Update: A few months ago

Slow container image pulls from Docker Hub

Oct 26, 08:57 PDT Resolved - Latencies pulling images from Docker Hub have returned to normal.
Oct 26, 05:59 PDT Identified - For incident updates from Docker Hub, see https://tinyurl.com/ycuv3cz5
Oct 26, 04:33 PDT Investigating - We are investigating reports of slow image pulls from Docker Hub affecting builds on CircleCI 2.0.

Last Update: A few months ago
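
To tell whether slow pulls like these are on the registry side or local to your environment, it helps to time an uncached pull directly. A minimal sketch using the docker CLI from Python; the image name is an arbitrary small example, and the script assumes the `docker` binary is on the PATH.

```python
import subprocess
import time

IMAGE = "alpine:latest"  # arbitrary small image for a latency probe

# Remove any cached copy first so the pull actually hits the registry.
subprocess.run(["docker", "rmi", "-f", IMAGE], capture_output=True)

start = time.monotonic()
result = subprocess.run(["docker", "pull", IMAGE], capture_output=True, text=True)
elapsed = time.monotonic() - start

status = "ok" if result.returncode == 0 else f"failed: {result.stderr.strip()}"
print(f"pull of {IMAGE}: {status} ({elapsed:.1f}s)")
```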

CircleCI 1.0 build failures

Oct 18, 09:21 PDT Resolved - This issue has been resolved. Thank you for your patience.
Oct 18, 09:04 PDT Monitoring - A fix has been implemented. We will continue to monitor for 15 minutes to ensure the problem has been resolved.
Oct 18, 08:44 PDT Identified - We are investigating apt package manager failures on CircleCI 1.0. Next update in 20 minutes.

Last Update: A few months ago

CircleCI 2.0 build start queue

Oct 16, 18:14 PDT Resolved - CircleCI 2.0 jobs are building normally. Thank you for your patience as we resolved this issue.
Oct 16, 17:50 PDT Monitoring - CircleCI 2.0 jobs are running normally; we will monitor the system to make sure things are back to normal.
Oct 16, 17:27 PDT Investigating - We are exploring why some CircleCI 2.0 builds are failing to start.

Last Update: A few months ago

CircleCI 2.0 Builds are not starting

Oct 4, 18:49 PDT Resolved - We are not seeing this issue continue. Thank you for your patience.
Oct 4, 18:28 PDT Monitoring - CircleCI 2.0 builds are flowing; we will monitor for 15 minutes to ensure everything is OK.
Oct 4, 18:24 PDT Identified - The issue has been identified and a fix is being implemented.
Oct 4, 18:24 PDT Update - We are exploring what is preventing 2.0 builds from starting.
Oct 4, 18:18 PDT Investigating - We are currently investigating this issue.

Last Update: A few months ago

Increase in CircleCI web errors

Oct 4, 12:52 PDT Resolved - The CircleCI web error fix has been stable. Thank you for your patience as we worked to fix this.
Oct 4, 12:36 PDT Monitoring - The fix has been deployed and confirmed; we will monitor for another 15 minutes and update again.
Oct 4, 12:14 PDT Update - We are rolling out the fix to our web servers and will update again when we have confirmed the fix.
Oct 4, 11:52 PDT Identified - We have identified the possible source of the increased errors and will update again when we have confirmed the fix.
Oct 4, 11:51 PDT Investigating - We are seeing an increase in HTTP 500 errors from our UI.

Last Update: A few months ago

Issues with Workflows

Sep 22, 16:58 PDT Resolved - We've identified and fixed the Workflows issue. Please reach out to support@circleci.com if you're still having any issues. Thanks!
Sep 22, 16:32 PDT Monitoring - A fix has been implemented and we are monitoring the results.
Sep 22, 16:15 PDT Investigating - We're currently investigating an issue with Workflows and the Workflows UI. We will update again in 30 minutes.

Last Update: A few months ago

2.0 Build System Degraded

Sep 20, 08:54 PDT Resolved - The 2.0 build system is again fully functional. If you experience any problems please reach out to our support team.
Sep 20, 08:39 PDT Monitoring - We are seeing improvement in DNS resolution and are monitoring the recovery of our systems.
Sep 20, 07:24 PDT Identified - Pulling Docker images from Docker Hub is currently being impacted by DNS lookup failures for docker.io, which we believe to be related to the current global DNS incident. Builds are being automatically retried to work around these DNS failures.
Sep 20, 06:52 PDT Investigating - We're seeing a large number of builds on the 2.0 build system failing on start and are in the process of investigating the cause.

Last Update: A few months ago
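
When a registry outage is suspected to be DNS-related, as in this incident, the quickest independent check is whether the registry's hostname resolves at all, and how fast. A minimal standard-library sketch probing docker.io, the host named above:

```python
import socket
import time

HOST = "docker.io"  # the registry hostname whose lookups were failing

for attempt in range(1, 4):
    start = time.monotonic()
    try:
        # getaddrinfo performs the same lookup a Docker client would need.
        infos = socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)
        elapsed = time.monotonic() - start
        addrs = sorted({info[4][0] for info in infos})
        print(f"attempt {attempt}: resolved in {elapsed:.2f}s -> {addrs}")
    except socket.gaierror as exc:
        print(f"attempt {attempt}: DNS lookup failed ({exc})")
    time.sleep(1)
```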

CircleCI 2.0 Workflow issue

Sep 15, 16:12 PDT Resolved - We have not seen any issues with Workflows and are marking this as resolved. Thank you for your patience as we worked through the issue.
Sep 15, 15:51 PDT Monitoring - A fix has been implemented and we are monitoring the results.
Sep 15, 15:51 PDT Identified - The CircleCI 2.0 Workflows feature had a brief outage of 10 minutes; any workflow triggered during that time will not have started. We have identified the issue and it is fixed. We are monitoring to confirm the fix.

Last Update: A few months ago

Issues with 2.0

Sep 14, 14:12 PDT Resolved - The issue has been resolved. CircleCI services have returned to normal. Please contact support@circleci.com if you continue to experience any issues.
Sep 14, 13:31 PDT Monitoring - The AWS S3 and Docker Hub issues have been resolved. CircleCI services are returning to normal operation. We will continue to monitor the situation closely. Next update in 30 minutes.
Sep 14, 13:15 PDT Update - We remain impacted by the AWS S3 and Docker Hub issue. AWS indicates they've identified the issue and are working hard on implementing a fix. Our services will remain impacted until S3 and Docker Hub return to normal operations. We will update again in 30 minutes.
Sep 14, 12:43 PDT Identified - The issue has been identified and a fix is being implemented.
Sep 14, 12:38 PDT Update - CircleCI services are currently impacted as a result of issues with AWS S3 and Docker Hub. We are investigating the issue and doing everything we can to restore services. Next update in 30 minutes.
Sep 14, 12:29 PDT Investigating - The 2.0 Build System is currently being impacted by issues with AWS S3 and Docker Hub. We will update again in 30 minutes.

Last Update: A few months ago

2.0 Build System Degradation

Sep 14, 07:55 PDT Resolved - The 2.0 build system is functioning properly again. If you continue to experience any problems please contact our support department.
Sep 14, 07:45 PDT Monitoring - A fix has been implemented and we are monitoring for any continuing problems.
Sep 14, 07:32 PDT Identified - The issue has been identified and we are pushing out a fix.
Sep 14, 07:00 PDT Investigating - We are investigating issues affecting some 2.0 builds preventing them from running correctly and will provide more information as it becomes available.

Last Update: A few months ago

Issues with 2.0 Build System

Sep 12, 13:47 PDT Resolved - We have resolved the incident with the 2.0 Build System. Please don't hesitate to reach out to support@circleci.com if you experience any issues. Thank you for your patience while we resolved the issue.
Sep 12, 13:23 PDT Monitoring - We have implemented a solution and will be closely monitoring the 2.0 Build System to ensure platform stability.
Sep 12, 12:48 PDT Update - Our engineers continue to work on bringing the 2.0 Build System back to full capacity. Next update in 30 minutes.
Sep 12, 12:12 PDT Identified - We have identified an issue with our 2.0 Build System and are working to restore full capacity. We will update again in 30 minutes.

Last Update: A few months ago

2.0 Build System Degraded

Sep 8, 08:08 PDT Resolved - This incident is now resolved. If you continue to see any problems please contact our support department.
Sep 8, 08:01 PDT Monitoring - We have implemented a fix and are monitoring the 2.0 build system to ensure that it is stable.
Sep 8, 07:49 PDT Update - We are still working on deploying a fix and will provide further information as it becomes available.
Sep 8, 07:19 PDT Identified - We've tracked down the issue to the infrastructure controlling VM builds specifically and are working to put a fix in place.
Sep 8, 06:58 PDT Update - We have implemented a fix but there is still an abnormal number of builds that are not processing correctly. We are continuing to investigate the cause behind these build failures.
Sep 8, 06:28 PDT Investigating - We've detected an interruption in builds running on our 2.0 infrastructure and are now investigating the cause. We will provide more information as it becomes available.

Last Update: A few months ago

2.0 Build Cluster is at reduced capacity

Sep 5, 20:42 PDT Resolved - The 2.0 Build Cluster has been running jobs and we see no current issues. Thank you for your patience while we resolved this issue.
Sep 5, 19:58 PDT Monitoring - The backlog of jobs has been processed; we are monitoring the cluster and will update again in 20 minutes.
Sep 5, 19:45 PDT Update - The 2.0 Build Cluster is back online and we are processing the backlog of jobs. We will update in 20 minutes.
Sep 5, 19:20 PDT Update - We are still working on getting the 2.0 Build Cluster online. Will update in 20 minutes.
Sep 5, 18:50 PDT Update - We are continuing to work on restoring the 2.0 Build Cluster. Will update in 20 minutes.
Sep 5, 18:24 PDT Update - We are continuing the work to bring the 2.0 Build Cluster online. Will update in 20 minutes.
Sep 5, 18:05 PDT Update - We are still working on bringing the 2.0 Build Cluster online. Will update in 20 minutes.
Sep 5, 17:44 PDT Identified - We have identified the cause of the issue and are working on a solution. We will update again in 20 minutes.
Sep 5, 17:31 PDT Investigating - We are looking into the sudden reduction of capacity for our 2.0 Build Cluster. We will update again in 20 minutes.

Last Update: A few months ago

Missing tabs on build page

Aug 28, 22:27 PDT Resolved - This incident has been resolved.
Aug 28, 22:02 PDT Monitoring - We've updated our frontend assets; tabs should be visible on build pages again. Please reach out to support@circleci.com if you're having any issues. Thanks for bearing with us!
Aug 28, 21:16 PDT Identified - We are continuing to work on reverting frontend assets. Next update in 30 minutes.
Aug 28, 20:43 PDT Investigating - Tabs are currently missing on build pages. We are reverting our frontend assets now. Will update again in 30 minutes.

Last Update: A few months ago

Issues loading web dashboard

Aug 28, 17:05 PDT Resolved - This incident has been resolved.
Aug 28, 15:26 PDT Monitoring - Our engineers have implemented a fix for the dashboard issue. Please refresh your browser or reopen the dashboard in a new tab. Reach out to support@circleci.com if you're still experiencing any issues. Thanks for your patience!
Aug 28, 14:53 PDT Identified - We believe we have identified the issue with the web dashboard and are working on deploying an update. We will update again in 30 minutes with more information.
Aug 28, 14:44 PDT Investigating - We are currently investigating errors when loading the web dashboard. We will update with more details shortly.

Last Update: A few months ago

Some 2.0 builds are queued

Aug 23, 18:42 PDT Resolved - This incident has been resolved.
Aug 23, 17:44 PDT Monitoring - We have identified the issue and deployed a fix. We are monitoring the services and will update again in 20 minutes.

Last Update: A few months ago

GitHub issues impacting CircleCI UI and builds

Aug 21, 08:11 PDT Resolved - The GitHub API has recovered, we are receiving hooks at the expected rate, and our UI and builds are operating normally.
Aug 21, 07:35 PDT Monitoring - We are seeing recovery accessing the GitHub API and have begun receiving push hooks. Our UI is operational and builds are running.
Aug 21, 07:03 PDT Identified - We continue to see connectivity issues with GitHub, which are impacting both our user interface and our ability to run builds for GitHub projects.
Aug 21, 06:24 PDT Investigating - We're seeing issues calling the GitHub API, which is impacting our frontend and our ability to launch builds for GitHub projects. We have also seen a severe drop in incoming webhooks from GitHub notifying us of pushes to repositories.

Last Update: A few months ago
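
For teams that sit downstream of hooks like these, an outage of this kind usually shows up first as a sudden drop in incoming webhook volume. Below is a minimal, hypothetical sketch of that kind of detector in Python; the class, window, and threshold names are illustrative assumptions, not anything from CircleCI's systems:

    import time
    from collections import deque

    WINDOW_MINUTES = 30    # trailing baseline window (assumed value)
    DROP_THRESHOLD = 0.2   # alert when the current rate falls below 20% of baseline

    class WebhookRateMonitor:
        """Tracks per-minute webhook counts and flags a severe drop."""

        def __init__(self):
            self.per_minute = deque(maxlen=WINDOW_MINUTES)  # counts for past minutes
            self.current_minute = int(time.time() // 60)
            self.current_count = 0

        def record_delivery(self):
            # Call once for every webhook received; rolls the window
            # over when a new minute starts.
            minute = int(time.time() // 60)
            if minute != self.current_minute:
                self.per_minute.append(self.current_count)
                self.current_minute = minute
                self.current_count = 0
            self.current_count += 1

        def severe_drop(self):
            # True once a full baseline exists and the current minute's
            # count is far below the trailing average. (A real detector
            # would also account for being partway through the minute.)
            if len(self.per_minute) < WINDOW_MINUTES:
                return False
            baseline = sum(self.per_minute) / len(self.per_minute)
            return baseline > 0 and self.current_count < baseline * DROP_THRESHOLD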

Intermittent site/UI issues

Aug 15, 15:20 PDT Resolved - The issue has been resolved. We will continue to monitor the situation. Please don't hesitate to reach out to support@circleci.com if you experience any further issues. Thank you for your patience!
Aug 15, 14:44 PDT Update - We are continuing to monitor the situation closely and will follow up with another update in 30 minutes.
Aug 15, 14:10 PDT Monitoring - We have identified and fixed an internal issue. Additionally, AWS IAM has resumed service. Builds and the UI are both back to normal and we are monitoring closely. We will update again in 30 minutes.
Aug 15, 13:54 PDT Update - We are still working on isolating the root cause of the issue. We're still seeing increased error rates for AWS IAM and believe there is a strong correlation. Next update in 20 minutes.
Aug 15, 13:12 PDT Update - We're continuing to investigate and watching closely for changes as AWS brings IAM back into service. Will update in 20 minutes.
Aug 15, 12:37 PDT Update - Our engineers continue to work towards identifying the UI and build queue issue. There has been a large increase in errors when making API calls to AWS, which we believe is affecting our services. We will update again in 20 minutes.
Aug 15, 12:20 PDT Update - We are still investigating the UI issue and build queues. Our team has narrowed down the issue and is working towards a solution. Next update in 20 minutes.
Aug 15, 11:56 PDT Update - Our engineers are continuing to investigate the UI and build queuing issues. Next update in 20 minutes.
Aug 15, 11:37 PDT Update - We're continuing to investigate issues with the UI and sporadic build queuing. We will update again in 20 minutes.
Aug 15, 11:17 PDT Investigating - We're currently experiencing intermittent issues with both build queues and our UI. We're investigating and will update in 20 minutes.

Last Update: A few months ago
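
For services that depend on an AWS API like IAM, elevated upstream error rates are usually absorbed with bounded retries and backoff rather than failing fast. A sketch using boto3's built-in retry configuration (illustrative only, not CircleCI's actual code; "adaptive" mode adds client-side rate limiting on top of retries):

    import boto3
    from botocore.config import Config

    # Bounded, adaptive retries so a spike in IAM error rates degrades
    # gracefully instead of surfacing every transient failure.
    retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
    iam = boto3.client("iam", config=retry_config)

    def list_role_names():
        """List IAM role names, letting botocore retry throttled or erroring calls."""
        paginator = iam.get_paginator("list_roles")
        return [role["RoleName"]
                for page in paginator.paginate()
                for role in page["Roles"]]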

Some 2.0 builds are queuing indefinitely

Aug 15, 07:26 PDT Resolved - This incident has been resolved.
Aug 15, 06:55 PDT Identified - We have identified an issue causing some builds on 2.0 to queue indefinitely. We are currently fixing the issue.

Last Update: A few months ago

Issue Loading Project Dashboard For Logged Out Users

Aug 12, 01:15 PDT Resolved - The fix has been deployed to production and users are able to see the project dashboard when they are not logged in.
Aug 12, 00:14 PDT Update - We have shipped a fix and expect it to be fully rolled out to production soon. Will provide another update in 60 minutes.
Aug 11, 23:54 PDT Identified - We have identified the root cause of this issue and are working on shipping a fix. Will provide an update in 20 minutes.
Aug 11, 23:40 PDT Investigating - We are investigating an issue with loading the project dashboard for a public project when a user is logged out. Will provide a status update in 30 minutes.

Last Update: A few months ago

Discuss Forum Degraded

Aug 11, 12:18 PDT Resolved - The support forum https://discuss.circleci.com/ is back in full operation.
Aug 11, 12:02 PDT Monitoring - Engineers have applied a repair and are monitoring the Discuss forum.
Aug 11, 11:51 PDT Update - Engineers are continuing to work on the resolution.
Aug 11, 11:31 PDT Identified - The forum https://discuss.circleci.com/ is experiencing degraded performance. Engineers have identified the issue and are working on a solution.

Last Update: A few months ago
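
Readers who want these incident updates as data rather than prose can usually query them directly: status pages hosted on Statuspage expose a standard v2 JSON API, and status.circleci.com appears to follow that pattern (treat the exact endpoint as an assumption):

    import json
    import urllib.request

    # Assumption: status.circleci.com implements the standard Statuspage v2 API.
    URL = "https://status.circleci.com/api/v2/incidents/unresolved.json"

    def unresolved_incidents():
        """Return (status, name) pairs for currently unresolved incidents."""
        with urllib.request.urlopen(URL) as resp:
            data = json.load(resp)
        return [(i["status"], i["name"]) for i in data.get("incidents", [])]

    for status, name in unresolved_incidents():
        print(f"{status}: {name}")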

Workflows Service Database Upgrade