CircleCI Status

2.0 builds queueing

Nov 15, 17:18 UTC Resolved - This incident has been resolved.
Nov 15, 17:07 UTC Update - We are continuing to monitor for any further issues.
Nov 15, 16:57 UTC Monitoring - We have identified and resolved the issue; queue time is returning to normal and we are monitoring our fleet.
Nov 15, 16:35 UTC Investigating - We are experiencing an issue placing builds on our infrastructure, causing elevated queue times. We will provide an update in 20 minutes.

Last Update: About 1 day ago

GitHub Checks Status Updates

Oct 31, 19:13 UTC Resolved - We are no longer seeing any delay. Thank you for your patience as we worked through this incident.
Oct 31, 18:45 UTC Monitoring - We've identified the issue causing GitHub Checks to be delayed and have implemented a fix. We are now monitoring the delay and will provide continuing updates.
Oct 31, 18:22 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 31, 18:00 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 31, 17:41 UTC Investigating - We are investigating a delay in the delivery of GitHub Checks status notifications.

Last Update: About 16 days ago

GitHub Checks Status Updates

Oct 31, 00:06 UTC Resolved - At this time we are considering this incident resolved, and we thank you for your patience.
Oct 30, 23:38 UTC Update - The delay has been resolved; however, we are continuing to investigate this incident and will continue to monitor GitHub Checks.
Oct 30, 23:28 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 23:04 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 22:39 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 22:18 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 21:58 UTC Update - We are continuing to investigate the cause of delayed GitHub Checks.
Oct 30, 21:37 UTC Investigating - We are investigating a delay in the delivery of GitHub Checks status notifications.

Last Update: About 17 days ago

GitHub Checks Status Updates

Oct 30, 20:08 UTC Resolved - GitHub Checks notifications are being delivered with no delay. Thank you for your patience.
Oct 30, 19:44 UTC Monitoring - The delivery of GitHub Checks status notifications is back to normal levels.
Oct 30, 19:26 UTC Investigating - We are currently investigating a delay in the delivery of GitHub Checks status notifications.

Last Update: About 17 days ago

Elevated errors pushing to and pulling from Docker Hub Registry

Oct 30, 16:11 UTC Resolved - The Docker Hub issues have been resolved; we are no longer seeing issues pushing or pulling from Docker Hub in CircleCI jobs.
Oct 30, 15:46 UTC Identified - Docker Hub Registry is experiencing elevated error rates pushing and pulling images. CircleCI 2.0 jobs that pull images from Docker Hub which we have not already cached may fail to start. See https://status.docker.com/pages/incident/533c6539221ae15e3f000031/5bd82da144d76c04b854c3a4

Last Update: About 17 days ago
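
The "Identified" update above notes that jobs pulling images not already cached by CircleCI may fail to start while Docker Hub is returning elevated error rates. A minimal, purely illustrative workaround (not CircleCI's mechanism) is to retry the pull with exponential backoff from a job step before giving up; the image name below is only an example.

    # Hypothetical workaround: retry "docker pull" with exponential backoff
    # while the upstream registry is returning elevated error rates.
    import subprocess
    import sys
    import time

    def pull_with_retry(image, attempts=5, base_delay=2.0):
        """Try to pull `image`, doubling the wait after each failed attempt."""
        for attempt in range(1, attempts + 1):
            if subprocess.run(["docker", "pull", image]).returncode == 0:
                return True
            delay = base_delay * 2 ** (attempt - 1)
            print(f"pull failed ({attempt}/{attempts}); retrying in {delay:.0f}s", file=sys.stderr)
            time.sleep(delay)
        return False

    if __name__ == "__main__":
        # "circleci/python:3.7" is just an example image name.
        sys.exit(0 if pull_with_retry("circleci/python:3.7") else 1)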

Long provisioning times for VM and remote docker jobs

Oct 30, 14:29 UTC Resolved - Provisioning times remain stable at normal levels. Thank you for your patience.
Oct 30, 13:55 UTC Monitoring - Provisioning times have returned to normal levels. We will continue to monitor for twenty minutes.
Oct 30, 13:30 UTC Identified - We have isolated the fault to a database and are working to mitigate the problem.
Oct 30, 11:17 UTC Investigating - Some VM and remote docker jobs are not being executed in a timely manner. We are investigating.

Last Update: About 17 days ago

Elevated errors pushing to and pulling from Docker Hub Registry

Oct 30, 10:53 UTC Resolved - The Docker Hub issues have been resolved; we are no longer seeing issues pushing or pulling from Docker Hub in CircleCI jobs.
Oct 30, 10:35 UTC Identified - Docker Hub Registry is experiencing elevated error rates pushing and pulling images. CircleCI 2.0 jobs that pull images from Docker Hub which we have not already cached may fail to start. See https://status.docker.com/pages/533c6539221ae15e3f000031 for up-to-date status information.

Last Update: About 18 days ago

Production Database Maintenance

Scheduled window: Oct 27, 18:00 - 22:00 UTC
Oct 27, 20:46 UTC Completed - We have completed all planned maintenance and would like to thank you for your patience.
Oct 27, 18:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 26, 20:19 UTC Scheduled - We will be working on each of our production databases, and while we do not anticipate any significant downtime, we want you to be aware of this event.

Last Update: About 20 days ago

GitHub Outage

Oct 22, 23:25 UTC Resolved - GitHub has resolved the issue; we are no longer seeing any abnormal load and all queues are normal. Thank you for your patience as we worked through this set of events.
Oct 22, 22:27 UTC Update - Per GitHub: webhook deliveries have caught up. We will continue to monitor and maintain capacity as we work through the backlog of jobs.
Oct 22, 22:03 UTC Update - We are continuing to process webhooks as we receive them and are meeting current demand. Some fleets will continue to see a backlog.
Oct 22, 21:33 UTC Update - We are continuing to process webhooks as we receive them and are meeting current demand. Some fleets will continue to see a backlog.
Oct 22, 20:54 UTC Update - We are continuing to process webhooks as we receive them and are meeting current demand. Some fleets will continue to see a backlog for a while.
Oct 22, 20:22 UTC Update - We are continuing to process webhooks as we receive them and have scaled to meet the continued demand.
Oct 22, 19:53 UTC Update - We are continuing to process webhooks as we receive them and have scaled to meet the continued demand.
Oct 22, 19:19 UTC Update - We are continuing to process webhooks as we receive them and have scaled to meet the demand.
Oct 22, 18:36 UTC Update - We are continuing to process jobs as they are pushed from GitHub. Be aware that our macOS fleet is going to be at capacity.
Oct 22, 18:25 UTC Update - We have seen inbound hooks flowing into our system and we are monitoring to ensure that we have capacity to meet the demand.
Oct 22, 17:40 UTC Update - Per GitHub: We have temporarily paused delivery of webhooks while we address an issue. We are working to resume delivery as soon as possible.
Oct 22, 16:46 UTC Update - Per GitHub: We have resumed delivery of webhooks and will continue to monitor as we process a delayed backlog of events.
Oct 22, 16:31 UTC Update - From GitHub: We've completed validation of data consistency and have enabled some background jobs. We're continuing to monitor as the system recovers and expect to resume delivering webhooks at 16:45 UTC.
Oct 22, 16:15 UTC Monitoring - At 22:52 UTC on 21 October (15:52 PDT), GitHub experienced a network partition and subsequent database failure. This has caused intermittent issues with webhook delivery and other events that CircleCI depends on to manage your CircleCI workflows and jobs. The downtime has also prevented us from making API calls to GitHub to check on authorization and project/organization status. Until GitHub has ended their outage, we will be unable to know fully what changes or issues this has caused with your projects or jobs within our system. Furthermore, when GitHub does start delivering webhooks again, we will see a surge of jobs starting, and we will immediately scale in response and remain overprovisioned until the surge is complete. CircleCI Discuss: https://discuss.circleci.com/t/github-outage-on-21-october-2018/25903

Last Update: About 25 days ago
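
Incidents like this can also be tracked programmatically. Status pages hosted on Statuspage conventionally expose a JSON summary endpoint; the sketch below assumes status.circleci.com follows that convention at /api/v2/status.json, which should be verified before relying on it.

    # Sketch: read the overall status indicator from a Statuspage-style endpoint.
    # Assumes the standard /api/v2/status.json layout; verify before depending on it.
    import json
    import urllib.request

    STATUS_URL = "https://status.circleci.com/api/v2/status.json"

    def fetch_status(url=STATUS_URL, timeout=10):
        """Return (indicator, description), e.g. ("none", "All Systems Operational")."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.load(resp)
        status = payload.get("status", {})
        return status.get("indicator"), status.get("description")

    if __name__ == "__main__":
        indicator, description = fetch_status()
        print(f"{indicator}: {description}")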

CircleCI site-wide outage

Jul 19, 22:44 UTC Resolved - All of our operations have returned to normal levels. Thank you again for your patience as we worked through this issue.
Jul 19, 22:17 UTC Update - We are back to normal operations, but we will monitor for another 20 minutes. Thank you for your patience as we worked through this.
Jul 19, 22:15 UTC Update - We are continuing to monitor for any further issues.
Jul 19, 21:57 UTC Update - We are continuing to monitor as we work through our remaining backlog during the recovery.
Jul 19, 21:34 UTC Update - We are continuing to monitor and work through remediation efforts as our systems recover.
Jul 19, 21:13 UTC Monitoring - We're continuing to monitor and work on further remediation efforts as our systems recover.
Jul 19, 21:06 UTC Update - We are continuing to work on restoring normal service.
Jul 19, 20:44 UTC Update - We are continuing to work on restoring normal service.
Jul 19, 20:23 UTC Update - We've restored partial service and some builds are now running in our systems. We expect builds to be degraded throughout the day as we process our backlog.
Jul 19, 20:00 UTC Update - We are continuing to work on restoring builds.
Jul 19, 19:39 UTC Update - We are continuing to work on restoring builds.
Jul 19, 19:16 UTC Update - We are continuing to work on restoring builds; we appreciate your patience as we work through this.
Jul 19, 18:50 UTC Update - We are continuing to work on restoring builds.
Jul 19, 18:28 UTC Update - We are continuing to work on a fix for this issue. The CircleCI website and UI are partially up, and we are working to restore builds.
Jul 19, 17:59 UTC Update - We are continuing to work on fixing the underlying issue. One of our database clusters is unavailable at the moment, and we are working to restore service. We expect builds to be degraded throughout the day as we bring services back up and process our backlog.
Jul 19, 17:40 UTC Update - We are continuing to work on a fix for this issue.
Jul 19, 17:20 UTC Update - We are continuing to work on a fix for this issue.
Jul 19, 16:59 UTC Identified - We have identified the source of the issue and are working on rolling out a fix.
Jul 19, 16:56 UTC Investigating - We are currently investigating this issue.

Last Update: About 26 days ago

Workflows UI Degraded Performance

Jul 25, 15:30 UTC Resolved - The Workflows UI suffered an issue where any workflow with a pending approval step would fail to load. We have corrected the issue and Workflows should load properly again. Started: 14:35 UTC; Identified: 15:12 UTC; Resolved: 15:27 UTC.

Last Update: About 26 days ago

Degradation of service for GitHub-hosted projects

Aug 1, 16:56 UTC Resolved - This incident has been resolved.
Aug 1, 16:31 UTC Monitoring - GitHub API response rates have returned to normal. We are continuing to monitor status.
Aug 1, 16:19 UTC Identified - GitHub API responses are continuing to experience elevated failure rates; GitHub is currently reporting degraded performance. We will continue to update every 20 minutes.
Aug 1, 15:55 UTC Investigating - We are observing increased failure rates in API calls to GitHub; we will update again in 20 minutes.

Last Update: About 26 days ago

Unauthorized Status on Workflows utilizing contexts

Aug 1, 17:00 UTC Resolved -
What happened? CircleCI is working on a feature to add secure permissions to contexts. As part of the rollout, we added permissions to all contexts. The permissions check did not handle some cases:
- Customers who have never logged in to CircleCI and are using contexts
- Customers who have scheduled workflows
In these two cases, customers noticed an “unauthorized” status on their workflows. The issues occurred between 19:00 and 21:00 UTC on 2018-07-31 and between 14:00 and 17:00 UTC on 2018-08-01.
What did we do? We have updated our services to accommodate these cases. Our apologies for the inconvenience caused.

Last Update: About 26 days ago
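
The two cases in this postmortem share a root cause: a permission check that assumes an interactive user with an existing login record. The sketch below is purely illustrative of handling those edge cases explicitly; none of the names correspond to real CircleCI internals.

    # Illustrative only: a context-permission check that handles actors with no
    # interactive login (scheduled workflows) and users who have never logged in,
    # instead of returning "unauthorized" for them.
    from dataclasses import dataclass
    from typing import Optional, Set

    @dataclass
    class Actor:
        user_id: Optional[str]        # None for system-triggered (scheduled) runs
        has_logged_in: bool = False

    def can_use_context(actor: Actor, org_members: Set[str],
                        schedule_owner_is_member: bool = False) -> bool:
        if actor.user_id is None:
            # Scheduled workflow: authorize via the schedule's owning project/org,
            # not via an interactive user session.
            return schedule_owner_is_member
        # A user who has never logged in has no cached permission record;
        # fall back to organization membership rather than denying outright.
        return actor.user_id in org_members

    # A scheduled run owned by a member project should not be rejected.
    assert can_use_context(Actor(user_id=None), {"alice"}, schedule_owner_is_member=True)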

Intermittent errors for Workflows UI

Aug 2, 00:04 UTC Resolved - Workflows UI error rates have remained below baseline. We are marking this incident as resolved.
Aug 1, 23:40 UTC Monitoring - The error rate for the Workflows UI has returned to baseline; we will monitor for 20 minutes to ensure it does not return.
Aug 1, 23:16 UTC Identified - We are monitoring an increase in Workflows UI errors.

Last Update: About 26 days ago

macOS Network Maintenance

Aug 5, 08:00 UTC Completed - The scheduled maintenance has been completed.
Aug 5, 03:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jul 26, 15:46 UTC Scheduled - The colocation facility that hosts our macOS and macOS 2.0 build environments will be conducting network maintenance. During this window there will be intermittent service interruptions that may cause build failures or queuing. We apologize for any inconvenience, and thank you for your patience.

Last Update: About 26 days ago

Plan Settings UI

Aug 6, 23:51 UTC Resolved - We have not seen any signs of the issue remaining and are considering this resolved. Thank you for your patience as we worked with our upstream provider to resolve this.
Aug 6, 23:32 UTC Monitoring - We have coordinated a roll-back with our upstream provider and will monitor to ensure that the issue remains fixed.
Aug 6, 22:50 UTC Update - We are working with our primary upstream provider to fix the permissions issue. A work-around is to change your Org visibility to Public. We will report back within 20 minutes.
Aug 6, 22:11 UTC Update - We have identified that our permissions check system is showing elevated error rates from our primary upstream provider. A work-around is to change your Org visibility to Public. We are working with them to resolve the issue and will report back within 20 minutes.
Aug 6, 21:38 UTC Identified - We have identified that our permissions check system is showing elevated error rates from our primary upstream provider. We are working with them to resolve the issue and will report back within 20 minutes.
Aug 6, 21:12 UTC Update - We are looking into why some users have lost access to organizations they are a member of. We will update again in 20 minutes.
Aug 6, 20:45 UTC Update - We are continuing to look into what is causing trouble with our Plans and Org settings UI. We will update again in 20 minutes.
Aug 6, 20:20 UTC Investigating - Some organizations are currently unable to access their Plan Settings pages. We are investigating the issue.

Last Update: About 26 days ago

Issues loading workflows UI

Aug 9, 22:03 UTC Postmortem - Yesterday we experienced an issue that caused workflows to be degraded for some customers for about an hour starting at 20:19 UTC, with service fully restored by 21:00 UTC. The event was triggered by a period of heightened packet loss and latency at an upstream provider. We began to notice increased API error rates within our system, which caused workflow jobs to not start. Once the upstream provider returned to normal activity, workflows began processing and we were able to clear the backlog, while continuing to closely monitor.
Aug 7, 21:17 UTC Resolved - All services are stable. We would like to thank you for your patience. Incident resolved.
Aug 7, 20:51 UTC Update - Jobs are being processed normally. We will continue to monitor our systems for stability.
Aug 7, 20:36 UTC Monitoring - We're experiencing a network failure with an upstream provider and are monitoring the situation.
Aug 7, 20:26 UTC Update - We are seeing increased API error rates on our front-end. Some jobs may experience queueing.
Aug 7, 20:19 UTC Investigating - We are investigating an issue with the workflows UI.

Last Update: About 26 days ago

Contexts Outage

Aug 21, 17:11 UTC Resolved - This incident has been resolved.
Aug 21, 16:52 UTC Monitoring - We've implemented a fix and builds are succeeding again. Some delays on builds are expected as we process the backlog. We are continuing to monitor systems for further issues.
Aug 21, 16:31 UTC Identified - We have identified the cause of the issue and are working on rolling out a fix.
Aug 21, 16:06 UTC Update - We are continuing to investigate this issue.
Aug 21, 16:05 UTC Investigating - We are currently experiencing problems with the Contexts service, and builds relying on this service will fail to run. We are investigating the cause of this issue and will provide further updates in 20 minutes.

Last Update: About 26 days ago

Planned Downtime on Docker Hub

Aug 25, 19:00 UTC Completed - The scheduled maintenance has been completed.
Aug 25, 18:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Aug 15, 05:29 UTC Scheduled - Container images will be unavailable for pulls from Docker Hub during this time. For more information, please see the upstream advisory: https://success.docker.com/article/planned-downtime-on-hub-cloud-store

Last Update: About 26 days ago

Slow API responses

Aug 31, 21:27 UTC Resolved - Our API response time has returned to normal, and our UI is loading correctly. Thank you for your patience.
Aug 31, 21:14 UTC Monitoring - We are monitoring the UI for stability and performance, and will provide an update within 20 minutes.
Aug 31, 21:06 UTC Investigating - We are investigating slow API responses which are impacting our UI.

Last Update: About 26 days ago

Increased Bitbucket Permission Errors

Sep 4, 20:37 UTC Resolved - The issue has been resolved. Please contact support if you experience any further difficulty. Thank you for your patience.
Sep 4, 20:17 UTC Monitoring - We have released an update that is expected to resolve the issue, and are monitoring the platform to confirm. We will provide an update within 20 minutes.
Sep 4, 19:56 UTC Update - We are rolling out an update that is expected to resolve the issue.
Sep 4, 19:37 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 19:05 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 18:46 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 18:01 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 17:40 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 17:17 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 16:50 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 16:30 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 16:07 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 15:46 UTC Update - We are continuing to work on a fix for this issue. We will provide an update in 20 minutes.
Sep 4, 15:25 UTC Update - We are continuing to work on a fix for this issue; we will provide further updates as they are available.
Sep 4, 15:00 UTC Update - We are continuing to work on a fix for this issue.
Sep 4, 14:00 UTC Update - We've deployed some partial changes and are continuing to investigate. We will provide further updates as they are available.
Sep 4, 11:00 UTC Identified - We've identified an issue with how we handle Bitbucket rate limiting and are working to resolve the issue. We'll provide an update soon.
Sep 4, 10:34 UTC Investigating - We're investigating reports of users being unable to access their project and organisation settings, and other pages which require elevated permissions. We will provide an update shortly.

Last Update: About 26 days ago
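
The "Identified" update above attributes the errors to how Bitbucket rate limiting was handled. Rate limits are conventionally signalled with HTTP 429 and an optional Retry-After header; the sketch below is a generic client-side pattern for honoring them, not CircleCI's implementation.

    # Generic sketch: back off on HTTP 429 responses, honoring Retry-After when present.
    import time
    import urllib.error
    import urllib.request

    def get_with_rate_limit(url, max_attempts=5, default_wait=30):
        for attempt in range(max_attempts):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except urllib.error.HTTPError as err:
                if err.code != 429:
                    raise
                retry_after = err.headers.get("Retry-After")
                wait = int(retry_after) if retry_after and retry_after.isdigit() else default_wait
                time.sleep(wait)
        raise RuntimeError(f"still rate limited after {max_attempts} attempts: {url}")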

Long allocation time for VM and remote docker jobs

Sep 13, 19:41 UTC Resolved - This incident has been resolved. Please contact support if you experience any further difficulty. Thank you for your patience.
Sep 13, 19:20 UTC Update - We are continuing to monitor our platform performance and stability as we work through the builds backlog. We will provide an update within 20 minutes.
Sep 13, 18:54 UTC Update - We are continuing to monitor systems for stability and performance. Some delays are expected as we work through the builds backlog. We will provide an update within 20 minutes.
Sep 13, 18:33 UTC Update - We continue to monitor the platform for stability and performance and will provide an update within 20 minutes.
Sep 13, 18:13 UTC Update - We have processed the outstanding jobs and provisioning times have returned to acceptable values. We will continue to monitor provisioning performance, and will provide an update within 20 minutes.
Sep 13, 17:49 UTC Monitoring - We have processed the outstanding jobs and provisioning times have returned to acceptable values. We will continue to monitor provisioning performance, and will provide an update within 20 minutes.
Sep 13, 17:25 UTC Update - We continue to process the backlog of VM and remote docker jobs. We will provide an update within 20 minutes.
Sep 13, 17:07 UTC Update - We continue to process the backlog of VM and remote docker jobs. We will provide an update within 20 minutes.
Sep 13, 16:47 UTC Update - We are working to address the backlog of VM and remote docker jobs. We will provide an update within 20 minutes.
Sep 13, 16:28 UTC Update - We are working to address additional resource constraints. We will provide an update within 20 minutes.
Sep 13, 15:59 UTC Update - Our team is working to mitigate multiple capacity issues that are responsible for long wait times when allocating resources for remote docker and VM jobs. We will post an update within 20 minutes.
Sep 13, 15:36 UTC Identified - Our team has identified the cause of the performance problem and is working to mitigate the issue.
Sep 13, 15:19 UTC Investigating - We are investigating an issue that may cause long allocation times for remote docker and VM jobs. We will provide an update within 20 minutes.

Last Update: About 26 days ago

Long provisioning times for VM and remote docker jobs

Sep 14, 18:16 UTC Resolved - All jobs are being processed normally. Thank you for your patience.
Sep 14, 17:56 UTC Update - We are continuing to monitor for any further issues.
Sep 14, 17:56 UTC Monitoring - We are processing VM and docker machine jobs. We will continue to monitor the system for stability and performance, and will provide an update within 20 minutes.
Sep 14, 17:35 UTC Update - We are working to restore service, and will update within 20 minutes.
Sep 14, 17:17 UTC Update - We're working to address the underlying issues in VM provisioning. We may prematurely cancel or fail some jobs as we reduce pressure on the system. We will provide an update within 20 minutes.
Sep 14, 16:57 UTC Update - We're working to address the underlying issues in VM provisioning. We may prematurely cancel or fail some jobs as we reduce pressure on the system. We will provide an update within 20 minutes.
Sep 14, 16:50 UTC Update - Our team is working to address the backlog. We will provide an update within 20 minutes.
Sep 14, 16:30 UTC Update - Our team is working to address the backlog of VM and remote docker jobs. We will provide an update within 20 minutes.
Sep 14, 16:08 UTC Identified - Some VM and remote docker jobs are experiencing longer than normal provisioning times. We are working to mitigate the problem.

Last Update: About 26 days ago

Long provisioning times for VM and remote docker jobs

Sep 17, 16:31 UTC Resolved - Jobs are being processed normally, and we are declaring this incident resolved. Thank you for your patience.
Sep 17, 16:10 UTC Monitoring - We have completed the maintenance and are processing jobs normally. We are monitoring the platform for performance and reliability and will post an update within 20 minutes.
Sep 17, 15:57 UTC Update - We are conducting emergency maintenance to mitigate the long VM provisioning times for machine and remote docker jobs. We expect this to take approximately 30 minutes. Machine and remote docker jobs will not run during this time. We will provide an update within 30 minutes.
Sep 17, 15:49 UTC Identified - Some VM and remote docker jobs are queueing. We are working to mitigate the problem and will provide an update within 20 minutes.

Last Update: About 26 days ago

Long provisioning times for VM and remote docker jobs

Sep 17, 22:23 UTC Resolved - We are marking the incident as resolved but realize that macOS builds will take a few hours to process the current backlog; as such, that component will remain degraded.Sep 17, 22:08 UTC Update - We are continuing to monitor for any further issues.Sep 17, 21:55 UTC Monitoring - We are seeing active provisioning of new VMs and the backlog is draining. We will continue to monitor the situation and update again in 30 minutes.Sep 17, 21:44 UTC Identified - We are scaling capacity to meet current recovery demands; some queueing is anticipated. We will update in 30 minutes.Sep 17, 21:09 UTC Monitoring - All jobs are executing normally. We will continue to monitor the stability and performance of the platform, and provide an update within 40 minutes.Sep 17, 20:53 UTC Update - VM, macOS 2.0 and remote docker jobs are not executing. Our engineers are working to mitigate the issue. We will provide an update within 20 minutes.Sep 17, 20:33 UTC Update - VM and remote docker jobs are not executing. Our engineers are working to mitigate the issue. We will provide an update within 20 minutes.Sep 17, 20:11 UTC Identified - A regression has been identified and we are rolling back to a known good version. We will provide an update within 20 minutes.Sep 17, 20:09 UTC Investigating - Some VM and remote docker jobs are not being executed in a timely manner. We are investigating.

Last Update: About 26 days ago

Remote Docker jobs experiencing delays

Sep 18, 16:54 UTC Resolved - This incident has been resolved.Sep 18, 16:33 UTC Monitoring - Service times have returned to normal levels. We will continue to monitor for 20 minutes.Sep 18, 16:11 UTC Investigating - Remote Docker jobs are currently experiencing one- to four-minute delays in allocation.

Last Update: About 26 days ago

Delays on macOS 2.0 builds

Sep 26, 23:47 UTC Resolved - This incident has been resolved.Sep 26, 15:06 UTC Identified - macOS 2.0 builds are currently experiencing longer-than-typical wait times.

Last Update: About 26 days ago

Long provisioning time for machine and remote docker jobs

Oct 1, 22:07 UTC Resolved - Incident resolved. Thank you for your patience.Oct 1, 21:45 UTC Monitoring - All jobs are being processed normally. We will continue to monitor the platform for stability and performance.Oct 1, 21:30 UTC Identified - We are experiencing long provisioning times for machine and remote docker jobs, and are working to mitigate the issue.

Last Update: About 26 days ago

CircleCI UI unavailable

Oct 2, 16:11 UTC Resolved - Service has been restored. Thank you for your patience.Oct 2, 16:10 UTC Update - We are continuing to monitor the platform for stability. We will provide an update within 20 minutes.Oct 2, 15:48 UTC Monitoring - We have deployed a change to fix the issue, and are monitoring the UI for availability. We will post an update within 20 minutes.Oct 2, 15:43 UTC Identified - We have identified the cause of the outage and are working to correct the problem. We will post an update within 20 minutes.Oct 2, 15:40 UTC Investigating - We are investigating an issue that may cause the UI to load incorrectly for some users.

Last Update: About 26 days ago

Build queues

Oct 4, 10:09 UTC Resolved - Builds are now being processed normally. Thank you for your patience.Oct 4, 10:03 UTC Identified - We have identified the cause of the issue and are working to reduce the backlog of queued builds.Oct 4, 09:55 UTC Update - We are continuing to investigate this issue.Oct 4, 09:53 UTC Investigating - We're currently investigating a possible issue. We'll update as soon as we know more details.

Last Update: About 26 days ago

Elevated allocation times for Remote Docker jobs

Oct 12, 00:19 UTC Resolved - Allocation times for Remote Docker jobs remain stable at normal levels. Thank you for your patience.Oct 11, 23:59 UTC Monitoring - Allocation times for Remote Docker jobs have returned to normal levels. We will continue to monitor for twenty minutes.Oct 11, 23:53 UTC Investigating - We are investigating unusually high allocation times for Remote Docker jobs.

Last Update: About 26 days ago

Increased rate of macOS provisioning failures

Oct 16, 14:10 UTC Resolved - We have processed the backlog of jobs and macOS wait times have returned to normal. Thank you for your patience.Oct 16, 13:49 UTC Monitoring - We experienced a spike in macOS creation errors. The problem has been remedied, and we are now processing the backlog of jobs. We will post an update within 20 minutes.

Last Update: About 26 days ago

Some jobs failing with "Blocked due to plan-no-credits-available"

Oct 17, 00:06 UTC Resolved - We have seen no further occurrences of this error and are considering the incident resolved. Thank you for your patience.Oct 16, 23:52 UTC Monitoring - We have deployed a fix to this issue and are no longer seeing Jobs being blocked by this error.Oct 16, 23:44 UTC Update - The update has been deployed to fix this issue, we are confirming this now.Oct 16, 23:34 UTC Identified - We have identified an issue where some jobs will be incorrectly blocked with the message "Blocked due to plan-no-credits-available". We have identified the issue and are deploying a fix.

Last Update: About 26 days ago

Long provisioning times for macOS

Oct 17, 22:30 UTC Resolved - We are processing jobs normally.Oct 17, 22:14 UTC Update - We are still processing a backlog of macOS jobs for Xcode 9.0.0. We will provide an update within 40 minutes.Oct 17, 21:35 UTC Update - We are still processing a backlog of macOS jobs. We will provide an update within 40 minutes.Oct 17, 20:55 UTC Update - We are still processing a backlog of macOS jobs. We will provide an update within 40 minutes.Oct 17, 20:17 UTC Update - We are still processing a backlog of macOS jobs. We will provide an update within 40 minutes.Oct 17, 19:45 UTC Monitoring - We are processing a backlog of macOS jobs.

Last Update: About 26 days ago

Circle 2.0 jobs queued

Jan 23, 03:23 PSTResolved - This incident has been resolved. Thank you for your patience.Jan 23, 03:03 PSTMonitoring - The build backlog has been cleared. Full capacity has been restored. We will continue to monitor for 20 minutes.Jan 23, 02:44 PSTUpdate - The processing of 2.0 builds has been temporarily paused while corrective measures are put in place.Jan 23, 01:43 PSTInvestigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 23, 03:03 PSTMonitoring - The build backlog has been cleared. Full capacity has been restored. We will continue to monitor for 20 minutes.Jan 23, 02:44 PSTUpdate - The processing of 2.0 builds has been temporarily paused while corrective measures are put in place.Jan 23, 01:43 PSTInvestigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 23, 02:44 PSTUpdate - The processing of 2.0 builds has been temporarily paused while corrective measures are put in place.Jan 23, 01:43 PSTInvestigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 23, 01:43 PSTInvestigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

Circle 2.0 Jobs are not running

Jan 16, 21:20 PSTResolved - Circle 2.0 Jobs are running and we do not see any issues at this time. Thank you for your patience as we worked through this incident.Jan 16, 21:05 PSTMonitoring - We have confirmed that Circle 2.0 Jobs are running. We will monitor the situation and update again in 10 minutes.Jan 16, 20:46 PSTUpdate - We are continuing to work on the issue and will update again in 20 minutes.Jan 16, 20:23 PSTIdentified - We have identified the cause of the Circle 2.0 Jobs not running and are working on a fix. We will update again in 20 minutes.Jan 16, 20:18 PSTInvestigating - We are currently investigating this issue.

Last Update: A few months ago

Circle 2.0 Jobs are not running

Jan 16, 21:05 PSTMonitoring - We have confirmed that Circle 2.0 Jobs are running. We will monitor the situation and update again in 10 minutes.Jan 16, 20:46 PSTUpdate - We are continuing to work on the issue and will update again in 20 minutes.Jan 16, 20:23 PSTIdentified - We have identified the cause of the Circle 2.0 Jobs not running and are working on a fix. We will update again in 20 minutes.Jan 16, 20:18 PSTInvestigating - We are currently investigating this issue.

Last Update: A few months ago

Circle 2.0 Jobs are not running

Jan 16, 20:46 PSTUpdate - We are continuing to work on the issue and will update again in 20 minutes.Jan 16, 20:23 PSTIdentified - We have identified the cause of the Circle 2.0 Jobs not running and are working on a fix. We will update again in 20 minutes.Jan 16, 20:18 PSTInvestigating - We are currently investigating this issue.

Last Update: A few months ago

Circle 2.0 Jobs are not running

Jan 16, 20:23 PSTIdentified - We have identified the cause of the Circle 2.0 Jobs not running and are working on a fix. We will update again in 20 minutes.Jan 16, 20:18 PSTInvestigating - We are currently investigating this issue.

Last Update: A few months ago

Circle 2.0 Jobs are not running

Jan 16, 20:18 PSTInvestigating - We are currently investigating this issue.

Last Update: A few months ago

Circle 2.0 jobs are being dispatched slowly

Jan 16, 08:44 PSTResolved - The change was successfully implemented, and the platform is now performing as expected.Jan 16, 08:24 PSTMonitoring - We have implemented a change to remediate the issue, and we continue to monitor the performance of the platform.Jan 16, 08:18 PSTIdentified - We have identified the issue, and we are taking steps to remedy the situation.Jan 16, 08:05 PSTInvestigating - We are investigating an issue that causes 2.0 jobs to be dispatched slowly. An update will be posted in 20 minutes.

Last Update: A few months ago

Circle 2.0 jobs are being dispatched slowly

Jan 16, 08:24 PSTMonitoring - We have implemented a change to remediate the issue, and we continue to monitor the performance of the platform.Jan 16, 08:18 PSTIdentified - We have identified the issue, and we are taking steps to remedy the situation.Jan 16, 08:05 PSTInvestigating - We are investigating an issue that causes 2.0 jobs to be dispatched slowly. An update will be posted in 20 minutes.

Last Update: A few months ago

Circle 2.0 jobs are being dispatched slowly

Jan 16, 08:18 PSTIdentified - We have identified the issue, and we are taking steps to remedy the situation.Jan 16, 08:05 PSTInvestigating - We are investigating an issue that causes 2.0 jobs to be dispatched slowly. An update will be posted in 20 minutes.

Last Update: A few months ago

Circle 2.0 jobs are being dispatched slowly

Jan 16, 08:05 PSTInvestigating - We are investigating an issue that causes 2.0 jobs to be dispatched slowly. An update will be posted in 20 minutes.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 15, 04:04 PSTResolved - This incident has been resolved. Thank you for your patience.Jan 15, 03:05 PSTMonitoring - The build backlog has been cleared. Full capacity has been restored. We will continue to monitor for 20 minutes.Jan 15, 02:36 PSTUpdate - The processing of 2.0 builds has been resumed at partial capacity. Full capacity will be resumed shortly.Jan 15, 02:12 PSTIdentified - The processing of 2.0 builds has been temporarily paused while corrective measures are put in place.Jan 15, 01:20 PSTInvestigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 15, 03:05 PSTMonitoring - The build backlog has been cleared. Full capacity has been restored. We will continue to monitor for 20 minutes.Jan 15, 02:36 PSTUpdate - The processing of 2.0 builds has been resumed at partial capacity. Full capacity will be resumed shortly.Jan 15, 02:12 PSTIdentified - The processing of 2.0 builds has been temporarily paused while corrective measures are put in place.Jan 15, 01:20 PSTInvestigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 15, 02:36 PSTUpdate - The processing of 2.0 builds has been resumed at partial capacity. Full capacity will be resumed shortly.Jan 15, 02:12 PSTIdentified - The processing of 2.0 builds has been temporarily paused while corrective measures are put in place.Jan 15, 01:20 PSTInvestigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 15, 02:12 PSTIdentified - The processing of 2.0 builds has been temporarily paused while corrective measures are put in place.Jan 15, 01:20 PSTInvestigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

Circle 2.0 jobs queued

Jan 15, 01:20 PSTInvestigating - Circle 2.0 jobs are spending a few minutes sitting in queues. We are investigating.

Last Update: A few months ago

CircleCI Database Updates

Jan 13, 00:58 PSTCompleted - The scheduled maintenance has been completed.Jan 13, 00:00 PSTIn progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jan 9, 11:03 PSTScheduled - CircleCI will be performing upgrades required by AWS to our database servers. During this maintenance window, all new builds will be queued and certain UI features may be slow to respond. https://discuss.circleci.com/t/upcoming-scheduled-maintenance-13-january-2018-08-00-utc/19236

Last Update: A few months ago

CircleCI Database Updates

Jan 13, 00:00 PSTIn progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jan 9, 11:03 PSTScheduled - CircleCI will be performing upgrades required by AWS to our database servers. During this maintenance window, all new builds will be queued and certain UI features may be slow to respond. https://discuss.circleci.com/t/upcoming-scheduled-maintenance-13-january-2018-08-00-utc/19236

Last Update: A few months ago

Trusty fleet build queue

Jan 11, 02:50 PSTResolved - We now have enough capacity in the Trusty fleet.Jan 11, 02:36 PSTMonitoring - We have identified and fixed the root cause. More capacity in the Trusty fleet is on its way. We are closely monitoring until there is enough capacity.Jan 11, 02:10 PSTInvestigating - We are having an issue scaling up our Trusty fleet. We are currently investigating.

Last Update: A few months ago

Trusty fleet build queue

Jan 11, 02:36 PSTMonitoring - We have identified and fixed the root cause. More capacity in the Trusty fleet is on its way. We are closely monitoring until there is enough capacity.Jan 11, 02:10 PSTInvestigating - We are having an issue scaling up our Trusty fleet. We are currently investigating.

Last Update: A few months ago

Trusty fleet build queue

Jan 11, 02:10 PSTInvestigating - We are having an issue scaling up our Trusty fleet. We are currently investigating.

Last Update: A few months ago

CircleCI Database Updates

Jan 9, 11:03 PSTScheduled - CircleCI will be performing upgrades required by AWS to our database servers. During this maintenance window, all new builds will be queued and certain UI features may be slow to respond. https://discuss.circleci.com/t/upcoming-scheduled-maintenance-13-january-2018-08-00-utc/19236

Last Update: A few months ago

2.0 Build System Outage

Dec 22, 13:01 PSTResolved - The 2.0 build system is processing builds again. We thank you for your patience while we dealt with this situation.Dec 22, 12:55 PSTUpdate - We have rolled out a fix to our system and are monitoring the system for any further errors. If your build failed, you may need to manually retry the build via the retry button on the build page.Dec 22, 12:43 PSTIdentified - We have been alerted to, and have identified, the potential cause of an issue preventing new 2.0 builds from running correctly. We are rolling out a fix and will provide an update within 20 minutes.

Last Update: A few months ago
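As the 12:55 update above notes, builds that failed during this outage had to be retried manually. Besides the retry button on the build page, a build can also be retried through CircleCI's v1.1 REST API. The following is a minimal sketch only, assuming a GitHub-hosted project and a personal API token; the org, repo, build number, and the CIRCLE_TOKEN environment variable are illustrative placeholders, not values taken from the incident.

    # Minimal sketch: retry a failed build via the CircleCI v1.1 REST API.
    # ORG, REPO, BUILD_NUM, and the CIRCLE_TOKEN environment variable are
    # placeholders for this example.
    import os
    import requests

    CIRCLE_TOKEN = os.environ["CIRCLE_TOKEN"]            # personal API token
    ORG, REPO, BUILD_NUM = "my-org", "my-repo", 1234      # project and failed build

    url = f"https://circleci.com/api/v1.1/project/github/{ORG}/{REPO}/{BUILD_NUM}/retry"
    resp = requests.post(url, params={"circle-token": CIRCLE_TOKEN})
    resp.raise_for_status()

    # The API responds with the newly created build; print its number.
    print("Retried as build", resp.json().get("build_num"))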

2.0 Build System Outage

Dec 22, 12:55 PSTUpdate - We have rolled out a fix to our system and are monitoring the system for any further errors. If your build failed, you may need to manually retry the build via the retry button on the build page.Dec 22, 12:43 PSTIdentified - We have been alerted to, and have identified, the potential cause of an issue preventing new 2.0 builds from running correctly. We are rolling out a fix and will provide an update within 20 minutes.

Last Update: A few months ago

2.0 Build System Outage

Dec 22, 12:43 PSTIdentified - We have been alerted to, and have identified, the potential cause of an issue preventing new 2.0 builds from running correctly. We are rolling out a fix and will provide an update within 20 minutes.

Last Update: A few months ago

Workflows Outage

Dec 13, 14:47 PSTResolved - At this time the workflows infrastructure appears to be stable and we are considering this incident resolved. We appreciate your patience with this matter!Dec 13, 14:41 PSTMonitoring - We've identified the problem and successfully rolled out a fix. We are now monitoring the fix to ensure stability, and we thank you for your patience with this issue.Dec 13, 14:32 PSTIdentified - We are continuing to roll out a fix for this issue. We will provide an update once the fix has been deployed.Dec 13, 13:57 PSTUpdate - We are deploying a temporary fix at this time but are continuing to work on identifying, and correcting, the root cause of this issue.Dec 13, 13:35 PSTInvestigating - We are seeing problems with adding new workflows projects. We are investigating the cause and will provide an update within 20 minutes.

Last Update: A few months ago

Workflows Outage

Dec 13, 14:41 PSTMonitoring - We've identified the problem and successfully rolled out a fix. We are now monitoring the fix to ensure stability, and we thank you for your patience with this issue.Dec 13, 14:32 PSTIdentified - We are continuing to roll out a fix for this issue. We will provide an update once the fix has been deployed.Dec 13, 13:57 PSTUpdate - We are deploying a temporary fix at this time but are continuing to work on identifying, and correcting, the root cause of this issue.Dec 13, 13:35 PSTInvestigating - We are seeing problems with adding new workflows projects. We are investigating the cause and will provide an update within 20 minutes.

Last Update: A few months ago

Workflows Outage

Dec 13, 14:32 PSTIdentified - We are continuing to roll out a fix for this issue. We will provide an update once the fix has been deployed.Dec 13, 13:57 PSTUpdate - We are deploying a temporary fix at this time but are continuing to work on identifying, and correcting, the root cause of this issue.Dec 13, 13:35 PSTInvestigating - We are seeing problems with adding new workflows projects. We are investigating the cause and will provide an update within 20 minutes.

Last Update: A few months ago

Workflows Outage

Dec 13, 13:57 PSTUpdate - We are deploying a temporary fix at this time but are continuing to work on identifying, and correcting, the root cause of this issue.Dec 13, 13:35 PSTInvestigating - We are seeing problems with adding new workflows projects. We are investigating the cause and will provide an update within 20 minutes.

Last Update: A few months ago

Workflows Outage

Dec 13, 13:35 PSTInvestigating - We are seeing problems with adding new workflows projects. We are investigating the cause and will provide an update within 20 minutes.

Last Update: A few months ago

Short 2.0 outage leading to some builds not running.

Dec 13, 06:49 PSTResolved - This incident has been resolved.Dec 13, 05:32 PSTInvestigating - We had a 5-minute 2.0 outage starting at 13:14 UTC; builds created during this window are not running. We are clearing the backlog; however, cancelling and re-running the build/workflow will get them running again.

Last Update: A few months ago

Short 2.0 outage leading to some builds not running.

Dec 13, 05:32 PSTInvestigating - We had a 5-minute 2.0 outage starting at 13:14 UTC; builds created during this window are not running. We are clearing the backlog; however, cancelling and re-running the build/workflow will get them running again.

Last Update: A few months ago

MacOS Network Maintenance

Dec 10, 04:30 PSTCompleted - The scheduled maintenance has been completed.Dec 9, 23:30 PSTIn progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Dec 6, 08:09 PSTScheduled - The colocation facility that hosts our MacOS build environment will be applying software updates to their network infrastructure. During this time there may be intermittent network disruptions which could cause some MacOS builds to fail.

Last Update: A few months ago

MacOS Network Maintenance

Dec 9, 23:30 PSTIn progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Dec 6, 08:09 PSTScheduled - The colocation facility that hosts our MacOS build environment will be applying software updates to their network infrastructure. During this time there may be intermittent network disruptions which could cause some MacOS builds to fail.

Last Update: A few months ago

CircleCI API web access failure

Dec 6, 16:53 PSTResolved - We have not seen any further occurrence and are marking this as resolved.Dec 6, 16:27 PSTMonitoring - The CircleCI API web fleet is working cleanly and we are monitoring. We will report again in 20 minutes.Dec 6, 16:14 PSTIdentified - We have rolled back the change causing our CircleCI API to fail. We will update again in 20 minutes.Dec 6, 15:56 PSTInvestigating - We are looking into signs that our API and its web servers are having problems. We will update again in 20 minutes.

Last Update: A few months ago

CircleCI API web access failure

Dec 6, 16:27 PSTMonitoring - The CircleCI API web fleet is working cleanly and we are monitoring. We will report again in 20 minutes.Dec 6, 16:14 PSTIdentified - We have rolled back the change causing our CircleCI API to fail. We will update again in 20 minutes.Dec 6, 15:56 PSTInvestigating - We are looking into signs that our API and its web servers are having problems. We will update again in 20 minutes.

Last Update: A few months ago

CircleCI API web access failure

Dec 6, 16:14 PSTIdentified - We have rolled back the change causing our CircleCI API to fail. We will update again in 20 minutes.Dec 6, 15:56 PSTInvestigating - We are looking into signs that our API and its web servers are having problems. We will update again in 20 minutes.

Last Update: A few months ago

CircleCI API web access failure

Dec 6, 15:56 PSTInvestigating - We are looking into signs that our API and its web servers are having problems. We will update again in 20 minutes.

Last Update: A few months ago

MacOS Network Maintenance

Dec 6, 08:09 PSTScheduled - The colocation facility that hosts our MacOS build environment will be applying software updates to their network infrastructure. During this time there may be intermittent network disruptions which could cause some MacOS builds to fail.

Last Update: A few months ago

Workflows Outage

Dec 5, 15:52 PSTResolved - The Workflows system is functioning correctly again. Thank you for your patience with this situation.Dec 5, 15:35 PSTMonitoring - We have implemented a fix and the Workflows system appears to be operational again. We are currently monitoring the Workflows system to ensure there are no further issues.Dec 5, 15:28 PSTIdentified - We've identified the cause of the outage and are working on implementing a resolution at this time.Dec 5, 15:24 PSTInvestigating - Our monitoring system has alerted us to an issue with Workflows. We are currently investigating and we will provide an update within 20 minutes.

Last Update: A few months ago

Workflows Outage

Dec 5, 15:35 PSTMonitoring - We have implemented a fix and the Workflows system appears to be operational again. We are currently monitoring the Workflows system to ensure there are no further issues.Dec 5, 15:28 PSTIdentified - We've identified the cause of the outage and are working on implementing a resolution at this time.Dec 5, 15:24 PSTInvestigating - Our monitoring system has alerted us to an issue with Workflows. We are currently investigating and we will provide an update within 20 minutes.

Last Update: A few months ago

Workflows Outage

Dec 5, 15:28 PSTIdentified - We've identified the cause of the outage and are working on implementing a resolution at this time.Dec 5, 15:24 PSTInvestigating - Our monitoring system has alerted us to an issue with Workflows. We are currently investigating and we will provide an update within 20 minutes.

Last Update: A few months ago

Workflows Outage

Dec 5, 15:24 PSTInvestigating - Our monitoring system has alerted us to an issue with Workflows. We are currently investigating and we will provide an update within 20 minutes.

Last Update: A few months ago

Context page is currently offline

Dec 5, 10:53 PSTResolved - The Contexts page appears to be working and stable again. Thank you for your patience while we corrected this problem.Dec 5, 10:36 PSTMonitoring - We've implemented a fix and the Contexts page appears to be functional again. We will continue to monitor this to ensure the incident is fully resolved.Dec 5, 10:20 PSTIdentified - The Contexts page of our UI is currently offline. We are working on restoring it and will provide an update in 20 minutes.

Last Update: A few months ago

Context page is currently offline

Dec 5, 10:36 PSTMonitoring - We've implemented a fix and the Contexts page appears to be functional again. We will continue to monitor this to ensure the incident is fully resolved.Dec 5, 10:20 PSTIdentified - The Contexts page of our UI is currently offline. We are working on restoring it and will provide an update in 20 minutes.

Last Update: A few months ago

Context page is currently offline

Dec 5, 10:20 PSTIdentified - The Contexts page of our UI is currently offline. We are working on restoring it and will provide an update in 20 minutes.

Last Update: A few months ago

CircleCI web UI unavailable

Nov 25, 13:39 PSTResolved - This incident has been resolved. Thank you for your patience.Nov 25, 13:19 PSTMonitoring - Access to the CircleCI web UI has been restored. We will continue to monitor for 20 minutes.Nov 25, 12:59 PSTInvestigating - The CircleCI web UI is currently unavailable. We are investigating.

Last Update: A few months ago

CircleCI web UI unavailable

Nov 25, 13:19 PSTMonitoring - Access to the CircleCI web UI has been restored. We will continue to monitor for 20 minutes.Nov 25, 12:59 PSTInvestigating - The CircleCI web UI is currently unavailable. We are investigating.

Last Update: A few months ago

CircleCI web UI unavailable

Nov 25, 12:59 PSTInvestigating - The CircleCI web UI is currently unavailable. We are investigating.

Last Update: A few months ago

Workflows unavailable

Nov 24, 12:54 PSTResolved - This incident has been resolved. Thank you for your patience.Nov 24, 12:33 PSTMonitoring - Workflows builds have been restored. We will continue to monitor for 20 minutes.Nov 24, 12:30 PSTIdentified - Workflows for CircleCI 2.0 is currently unavailable. The problem has been identified, and remediation efforts are underway.

Last Update: A few months ago

Workflows unavailable

Nov 24, 12:33 PSTMonitoring - Workflows builds have been restored. We will continue to monitor for 20 minutes.Nov 24, 12:30 PSTIdentified - Workflows for CircleCI 2.0 is currently unavailable. The problem has been identified, and remediation efforts are underway.

Last Update: A few months ago

Workflows unavailable

Nov 24, 12:30 PSTIdentified - Workflows for CircleCI 2.0 is currently unavailable. The problem has been identified, and remediation efforts are underway.

Last Update: A few months ago

CircleCI 2.0 VM Service

Nov 23, 08:32 PSTResolved - This incident has been resolved.Nov 23, 08:13 PSTMonitoring - We have scaled up to handle the queued VM requests and are monitoring the situation.Nov 23, 08:09 PSTInvestigating - macOS, machine, and remote docker builds are failing to allocate VMs, which will result in build failures. We will update in 20 mins.

Last Update: A few months ago

CircleCI 2.0 VM Service

Nov 23, 08:13 PSTMonitoring - We have scaled up to handle the queued VM requests and are monitoring the situation.Nov 23, 08:09 PSTInvestigating - macOS, machine, and remote docker builds are failing to allocate VMs, which will result in build failures. We will update in 20 mins.

Last Update: A few months ago

CircleCI 2.0 VM Service

Nov 23, 08:09 PSTInvestigating - macOS, machine, and remote docker builds are failing to allocate VMs, which will result in build failures. We will update in 20 mins.

Last Update: A few months ago

Switch Organization function broken

Nov 22, 17:46 PSTResolved - We have not seen any recurrence of the Switch Organization function being broken and are considering this resolved. Thank you for your patience.Nov 22, 17:23 PSTMonitoring - The fix for the Switch Organization function has been deployed; we will continue to monitor for 20 minutes and then update.Nov 22, 16:50 PSTIdentified - We are aware that the Switch Organization function is broken and a fix is being deployed. Will update again in 20 minutes.

Last Update: A few months ago

Switch Organization function broken

Nov 22, 17:23 PSTMonitoring - The fix for the Switch Organization function has been deployed; we will continue to monitor for 20 minutes and then update.Nov 22, 16:50 PSTIdentified - We are aware that the Switch Organization function is broken and a fix is being deployed. Will update again in 20 minutes.

Last Update: A few months ago

Switch Organization function broken

Nov 22, 16:50 PSTIdentified - We are aware that the Switch Organization function is broken and a fix is being deployed. Will update again in 20 minutes.

Last Update: A few months ago

CircleCI Documentation and Marketing pages are slow to load

Nov 20, 15:26 PSTResolved - We have not seen any recurrence of the slow page loads on the Documentation or Marketing pages and are closing this incident. Thank you for your patience.Nov 20, 14:41 PSTMonitoring - We have removed the vendor scripting that was causing the slow page loads and are monitoring. We will update again in 20 minutes.Nov 20, 14:25 PSTIdentified - The issue has been identified and a fix is being implemented.Nov 20, 14:19 PSTInvestigating - We are aware that our Documentation and Marketing pages are slow to load due to a vendor outage. We are working to restore them and will update in 20 minutes.

Last Update: A few months ago

CircleCI Documentation and Marketing pages are slow to load

Nov 20, 14:41 PSTMonitoring - We have removed the vendor scripting that was causing the slow page loads and are monitoring. We will update again in 20 minutes.Nov 20, 14:25 PSTIdentified - The issue has been identified and a fix is being implemented.Nov 20, 14:19 PSTInvestigating - We are aware that our Documentation and Marketing pages are slow to load due to a vendor outage. We are working to restore them and will update in 20 minutes.

Last Update: A few months ago

CircleCI Documentation and Marketing pages are slow to load

Nov 20, 14:25 PSTIdentified - The issue has been identified and a fix is being implemented.Nov 20, 14:19 PSTInvestigating - We are aware that our Documentation and Marketing pages are slow to load due to a vendor outage. We are working to restore them and will update in 20 minutes.

Last Update: A few months ago

CircleCI Documentation and Marketing pages are slow to load

Nov 20, 14:19 PSTInvestigating - We are aware that our Documentation and Marketing pages are slow to load due to a vendor outage. We are working to restore them and will update in 20 minutes.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 13:25 PSTResolved - We have not seen any slow page loads or other page failures. Thank you for your patience as we worked to resolve this.Nov 16, 12:42 PSTMonitoring - The page load fix has been deployed and is stable. We will monitor for 30 minutes to confirm and update then.Nov 16, 12:02 PSTUpdate - Work continues to return the impacted service to full function. We will update again in 20 minutes.Nov 16, 11:17 PSTUpdate - We are continuing work on the root cause of the page load issue, builds are not being impacted by these changes. We will update again in 20 minutes.Nov 16, 10:46 PSTUpdate - We are working on solving the root cause of the page load issue, builds are not being impacted by these changes. We will update again in 20 minutes.Nov 16, 10:08 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:38 PSTUpdate - The work continues on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:16 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 08:53 PSTUpdate - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 12:42 PSTMonitoring - The page load fix has been deployed and is stable. We will monitor for 30 minutes to confirm and update then.Nov 16, 12:02 PSTUpdate - Work continues to return the impacted service to full function. We will update again in 20 minutes.Nov 16, 11:17 PSTUpdate - We are continuing work on the root cause of the page load issue, builds are not being impacted by these changes. We will update again in 20 minutes.Nov 16, 10:46 PSTUpdate - We are working on solving the root cause of the page load issue, builds are not being impacted by these changes. We will update again in 20 minutes.Nov 16, 10:08 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:38 PSTUpdate - The work continues on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:16 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 08:53 PSTUpdate - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 12:02 PSTUpdate - Work continues to return the impacted service to full function. We will update again in 20 minutes.Nov 16, 11:17 PSTUpdate - We are continuing work on the root cause of the page load issue, builds are not being impacted by these changes. We will update again in 20 minutes.Nov 16, 10:46 PSTUpdate - We are working on solving the root cause of the page load issue, builds are not being impacted by these changes. We will update again in 20 minutes.Nov 16, 10:08 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:38 PSTUpdate - The work continues on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:16 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 08:53 PSTUpdate - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 11:17 PSTUpdate - We are continuing work on the root cause of the page load issue, builds are not being impacted by these changes. We will update again in 20 minutes.Nov 16, 10:46 PSTUpdate - We are working on solving the root cause of the page load issue, builds are not being impacted by these changes. We will update again in 20 minutes.Nov 16, 10:08 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:38 PSTUpdate - The work continues on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:16 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 08:53 PSTUpdate - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 10:46 PSTUpdate - We are working on solving the root cause of the page load issue, builds are not being impacted by these changes. We will update again in 20 minutes.Nov 16, 10:08 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:38 PSTUpdate - The work continues on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:16 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 08:53 PSTUpdate - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 10:08 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:38 PSTUpdate - The work continues on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:16 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 08:53 PSTUpdate - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 09:38 PSTUpdate - The work continues on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 09:16 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 08:53 PSTUpdate - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 09:16 PSTUpdate - We are continuing to work on solving the root cause of our page load issue. We will update again in 20 minutes.Nov 16, 08:53 PSTUpdate - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 08:53 PSTUpdate - We have resolved the page load problem and are actively working on fixing the root cause. We will update again in 20 minutes.Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 08:31 PSTIdentified - We have deployed a fix to the current page load problem. We will update again in 20 minutes.Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Slow page loads and intermittent page load failures

Nov 16, 08:03 PSTInvestigating - We are investigating slow page loads and intermittent page load failures.

Last Update: A few months ago

Interruption to macOS 2.0 builds

Nov 9, 01:48 PSTResolved - Additional capacity is being brought online to prevent a recurrence of this problem. Thank you for your patience.Nov 9, 01:27 PSTMonitoring - Backlogged macOS 2.0 builds have drained. Next update in 20 minutes.Nov 9, 01:12 PSTUpdate - Backlogged macOS 2.0 builds are beginning to drain. Next update in 20 minutes.Nov 9, 00:50 PSTIdentified - We have identified the cause for the interruption to macOS 2.0 builds. Next update in 20 minutes.Nov 9, 00:24 PSTInvestigating - Engineers are investigating an interruption to macOS 2.0 builds.

Last Update: A few months ago

Interruption to macOS 2.0 builds

Nov 9, 01:27 PSTMonitoring - Backlogged macOS 2.0 builds have drained. Next update in 20 minutes.Nov 9, 01:12 PSTUpdate - Backlogged macOS 2.0 builds are beginning to drain. Next update in 20 minutes.Nov 9, 00:50 PSTIdentified - We have identified the cause for the interruption to macOS 2.0 builds. Next update in 20 minutes.Nov 9, 00:24 PSTInvestigating - Engineers are investigating an interruption to macOS 2.0 builds.

Last Update: A few months ago

Interruption to macOS 2.0 builds

Nov 9, 01:12 PSTUpdate - Backlogged macOS 2.0 builds are beginning to drain. Next update in 20 minutes.Nov 9, 00:50 PSTIdentified - We have identified the cause for the interruption to macOS 2.0 builds. Next update in 20 minutes.Nov 9, 00:24 PSTInvestigating - Engineers are investigating an interruption to macOS 2.0 builds.

Last Update: A few months ago

Interruption to macOS 2.0 builds

Nov 9, 00:50 PSTIdentified - We have identified the cause for the interruption to macOS 2.0 builds. Next update in 20 minutes.Nov 9, 00:24 PSTInvestigating - Engineers are investigating an interruption to macOS 2.0 builds.

Last Update: A few months ago

Interruption to macOS 2.0 builds

Nov 9, 00:24 PSTInvestigating - Engineers are investigating an interruption to macOS 2.0 builds.

Last Update: A few months ago

Scheduled Jobs are not running

Nov 8, 15:56 PSTResolved - We have not observed any new issues with the Job Scheduler. Thank you for your patience as we worked through this.Nov 8, 15:33 PSTMonitoring - We have deployed a fix for the Scheduled Jobs service and are now monitoring. We will update again in 20 minutes.Nov 8, 15:17 PSTIdentified - We have identified why Scheduled Jobs are not running and are working to fix them. We will update in 20 minutes.Nov 8, 15:16 PSTInvestigating - We are currently investigating this issue.

Last Update: A few months ago

Scheduled Jobs are not running

Nov 8, 15:33 PSTMonitoring - We have deployed a fix for the Scheduled Jobs service and are now monitoring. We will update again in 20 minutes.Nov 8, 15:17 PSTIdentified - We have identified why Scheduled Jobs are not running and are working to fix them. We will update in 20 minutes.Nov 8, 15:16 PSTInvestigating - We are currently investigating this issue.

Last Update: A few months ago

Scheduled Jobs are not running

Nov 8, 15:17 PSTIdentified - We have identified why Scheduled Jobs are not running and are working to fix them. We will update in 20 minutes.Nov 8, 15:16 PSTInvestigating - We are currently investigating this issue.

Last Update: A few months ago

Scheduled Jobs are not running

Nov 8, 15:16 PSTInvestigating - We are currently investigating this issue.

Last Update: A few months ago

CircleCI Project Settings page broken

Nov 6, 16:26 PSTResolved - This incident has been resolved.Nov 6, 15:51 PSTMonitoring - We have updated our UI code to fix the Project Settings page. We will monitor the situation and update again in 20 minutes.Nov 6, 15:05 PSTIdentified - We are aware that the Project Settings page is broken and are deploying a fix

Last Update: A few months ago

CircleCI Project Settings page broken

Nov 6, 15:51 PSTMonitoring - We have updated our UI code to fix the Project Settings page. We will monitor the situation and update again in 20 minutes.Nov 6, 15:05 PSTIdentified - We are aware that the Project Settings page is broken and are deploying a fix

Last Update: A few months ago

CircleCI Project Settings page broken

Nov 6, 15:05 PSTIdentified - We are aware that the Project Settings page is broken and are deploying a fix

Last Update: A few months ago

CircleCI 2.0 queuing

Nov 1, 09:47 PDT Resolved - No further queuing delays have been observed. Thank you for your patience.Nov 1, 09:26 PDT Monitoring - All faulty build executors have been withdrawn from service. New builds are executing without delay. We will continue to monitor for 20 minutes.Nov 1, 09:00 PDT Update - Remediation work continues. Faulty build executors are being withdrawn from the fleet. Next update in 20 minutes.Nov 1, 08:41 PDT Update - Remediation work continues. Next update in 20 minutes.Nov 1, 08:17 PDT Update - Remediation work continues. Next update in 20 minutes.Nov 1, 07:54 PDT Identified - Builds are experiencing queue times of approximately 2.5 minutes on average. The issue is identified and engineers are working to eliminate wait time

Last Update: A few months ago

CircleCI 2.0 queuing

Nov 1, 09:26 PDT Monitoring - All faulty build executors have been withdrawn from service. New builds are executing without delay. We will continue to monitor for 20 minutes.Nov 1, 09:00 PDT Update - Remediation work continues. Faulty build executors are being withdrawn from the fleet. Next update in 20 minutes.Nov 1, 08:41 PDT Update - Remediation work continues. Next update in 20 minutes.Nov 1, 08:17 PDT Update - Remediation work continues. Next update in 20 minutes.Nov 1, 07:54 PDT Identified - Builds are experiencing queue times of approximately 2.5 minutes on average. The issue is identified and engineers are working to eliminate wait time

Last Update: A few months ago

CircleCI 2.0 queuing

Nov 1, 09:00 PDT Update - Remediation work continues. Faulty build executors are being withdrawn from the fleet. Next update in 20 minutes.Nov 1, 08:41 PDT Update - Remediation work continues. Next update in 20 minutes.Nov 1, 08:17 PDT Update - Remediation work continues. Next update in 20 minutes.Nov 1, 07:54 PDT Identified - Builds are experiencing queue times of approximately 2.5 minutes on average. The issue is identified and engineers are working to eliminate wait time

Last Update: A few months ago

CircleCI 2.0 queuing

Nov 1, 08:41 PDT Update - Remediation work continues. Next update in 20 minutes.Nov 1, 08:17 PDT Update - Remediation work continues. Next update in 20 minutes.Nov 1, 07:54 PDT Identified - Builds are experiencing queue times of approximately 2.5 minutes on average. The issue is identified and engineers are working to eliminate wait time

Last Update: A few months ago

CircleCI 2.0 queuing

Nov 1, 08:17 PDT Update - Remediation work continues. Next update in 20 minutes.Nov 1, 07:54 PDT Identified - Builds are experiencing queue times of approximately 2.5 minutes on average. The issue is identified and engineers are working to eliminate wait time

Last Update: A few months ago

CircleCI 2.0 queuing

Nov 1, 07:54 PDT Identified - Builds are experiencing queue times of approximately 2.5 minutes on average. The issue is identified and engineers are working to eliminate wait time

Last Update: A few months ago

Interruption to CircleCI 2.0 build jobs

Nov 1, 06:41 PDT Resolved - No further problems have been observed. Thank you for your patience!Nov 1, 06:19 PDT Monitoring - Queued builds have been flushed and should be executing now.Nov 1, 06:14 PDT Update - Builds are now being restored.Nov 1, 05:40 PDT Identified - The processing of queued 2.0 build jobs has been momentarily paused. Next update in 20 minutes.

Last Update: A few months ago

Interruption to CircleCI 2.0 build jobs

Nov 1, 06:19 PDT Monitoring - Queued builds have been flushed and should be executing now.Nov 1, 06:14 PDT Update - Builds are now being restored.Nov 1, 05:40 PDT Identified - The processing of queued 2.0 build jobs has been momentarily paused. Next update in 20 minutes.

Last Update: A few months ago

Interruption to CircleCI 2.0 build jobs

Nov 1, 06:14 PDT Update - Builds are now being restored.Nov 1, 05:40 PDT Identified - The processing of queued 2.0 build jobs has been momentarily paused. Next update in 20 minutes.

Last Update: A few months ago

Interruption to CircleCI 2.0 build jobs

Nov 1, 05:40 PDT Identified - The processing of queued 2.0 build jobs has been momentarily paused. Next update in 20 minutes.

Last Update: A few months ago

2.0 Build Jobs not running

Oct 31, 19:25 PDT Resolved - We have not seen any issues with CircleCI 2.0 jobs starting and are marking this incident as resolved - thank you for your patience!Oct 31, 19:05 PDT Monitoring - We have not seen any CircleCI 2.0 jobs fail to start. We will monitor for any recurrence but feel that this is now resolved.Oct 31, 18:33 PDT Update - The flow of CircleCI 2.0 jobs has resumed. We will update again when we have confirmed that all is back to normal.Oct 31, 18:13 PDT Identified - We have identified the reason CircleCI 2.0 jobs are not starting and are rolling out the solution. We will update again in 20 minutes.Oct 31, 18:00 PDT Update - We are continuing to look into why CircleCI 2.0 jobs are not starting. We will update again in 20 minutes.Oct 31, 17:36 PDT Investigating - We are looking into the cause that is preventing CircleCI 2.0 jobs from starting. We will update again in 20 minutes.

Last Update: A few months ago

2.0 Build Jobs not running

Oct 31, 19:05 PDT Monitoring - We have not seen any CircleCI 2.0 jobs fail to start. We will monitor for any recurrence but feel that this is now resolved.Oct 31, 18:33 PDT Update - The flow of CircleCI 2.0 jobs has resumed. We will update again when we have confirmed that all is back to normal.Oct 31, 18:13 PDT Identified - We have identified the reason CircleCI 2.0 jobs are not starting and are rolling out the solution. We will update again in 20 minutes.Oct 31, 18:00 PDT Update - We are continuing to look into why CircleCI 2.0 jobs are not starting. We will update again in 20 minutes.Oct 31, 17:36 PDT Investigating - We are looking into the cause that is preventing CircleCI 2.0 jobs from starting. We will update again in 20 minutes.

Last Update: A few months ago

2.0 Build Jobs not running

Oct 31, 18:33 PDT Update - The flow of CircleCI 2.0 jobs has resumed. We will update again when we have confirmed that all is back to normal.Oct 31, 18:13 PDT Identified - We have identified the reason CircleCI 2.0 jobs are not starting and are rolling out the solution. We will update again in 20 minutes.Oct 31, 18:00 PDT Update - We are continuing to look into why CircleCI 2.0 jobs are not starting. We will update again in 20 minutes.Oct 31, 17:36 PDT Investigating - We are looking into the cause that is preventing CircleCI 2.0 jobs from starting. We will update again in 20 minutes.

Last Update: A few months ago

2.0 Build Jobs not running

Oct 31, 18:13 PDT Identified - We have identified the reason CircleCI 2.0 jobs are not starting and are rolling out the solution. We will update again in 20 minutes.Oct 31, 18:00 PDT Update - We are continuing to look into why CircleCI 2.0 jobs are not starting. We will update again in 20 minutes.Oct 31, 17:36 PDT Investigating - We are looking into the cause that is preventing CircleCI 2.0 jobs from starting. We will update again in 20 minutes.

Last Update: A few months ago

2.0 Build Jobs not running

Oct 31, 18:00 PDT Update - We are continuing to look into why CircleCI 2.0 jobs are not starting. We will update again in 20 minutes.Oct 31, 17:36 PDT Investigating - We are looking into the cause that is preventing CircleCI 2.0 jobs from starting. We will update again in 20 minutes.

Last Update: A few months ago

2.0 Build Jobs not running

Oct 31, 17:36 PDT Investigating - We are looking into the cause that is preventing CircleCI 2.0 jobs from starting. We will update again in 20 minutes.

Last Update: A few months ago

Slow container image pulls from Docker Hub

Oct 26, 08:57 PDT Resolved - Latencies pulling images from Docker Hub have returned to normal.Oct 26, 05:59 PDT Identified - For incident updates from Docker Hub, see - https://tinyurl.com/ycuv3cz5 Oct 26, 04:33 PDT Investigating - We are investigating reports of slow image pulls from Docker Hub affecting builds on CircleCI 2.0.

Last Update: A few months ago

Slow container image pulls from Docker Hub

Oct 26, 05:59 PDT Identified - For incident updates from Docker Hub, see - https://tinyurl.com/ycuv3cz5 Oct 26, 04:33 PDT Investigating - We are investigating reports of slow image pulls from Docker Hub affecting builds on CircleCI 2.0.

Last Update: A few months ago

Slow container image pulls from Docker Hub

Oct 26, 04:33 PDT Investigating - We are investigating reports of slow image pulls from Docker Hub affecting builds on CircleCI 2.0.

Last Update: A few months ago

CircleCI 1.0 build failures

Oct 18, 09:21 PDT Resolved - This issue has been resolved. Thank you for your patience.Oct 18, 09:04 PDT Monitoring - A fix has been implemented. We will continue to monitor for 15 minutes to ensure the problem has been resolved.Oct 18, 08:44 PDT Identified - We are investigating apt package manager failures on CircleCI 1.0. Next update in 20 minutes.

Last Update: A few months ago

CircleCI 1.0 build failures

Oct 18, 09:04 PDT Monitoring - A fix has been implemented. We will continue to monitor for 15 minutes to ensure the problem has been resolved.Oct 18, 08:44 PDT Identified - We are investigating apt package manager failures on CircleCI 1.0. Next update in 20 minutes.

Last Update: A few months ago

CircleCI 1.0 build failures

Oct 18, 08:44 PDT Identified - We are investigating apt package manager failures on CircleCI 1.0. Next update in 20 minutes.

Last Update: A few months ago

CircleCI 2.0 build start queue

Oct 16, 18:14 PDT Resolved - CircleCI 2.0 jobs are building normally; thank you for your patience as we resolved this issue.Oct 16, 17:50 PDT Monitoring - CircleCI 2.0 jobs are running normally; we will monitor the system to make sure things are back to normal.Oct 16, 17:27 PDT Investigating - We are exploring why some CircleCI 2.0 builds are failing to start.

Last Update: A few months ago

CircleCI 2.0 Builds are not starting

Oct 4, 18:49 PDT Resolved - We are not seeing this issue continue. Thank you for your patience.
Oct 4, 18:28 PDT Monitoring - CircleCI 2.0 builds are flowing; we will monitor for 15 minutes to ensure everything is OK.
Oct 4, 18:24 PDT Identified - The issue has been identified and a fix is being implemented.
Oct 4, 18:24 PDT Update - We are exploring what is preventing 2.0 builds from starting.
Oct 4, 18:18 PDT Investigating - We are currently investigating this issue.

Last Update: A few months ago

Increase in CircleCI web errors

Oct 4, 12:52 PDT Resolved - The CircleCI web error fix has been stable. Thank you for your patience as we worked to fix CircleCI.
Oct 4, 12:36 PDT Monitoring - The fix has been deployed and confirmed; we will monitor for another 15 minutes and update again.
Oct 4, 12:14 PDT Update - We are rolling out the fix to our web servers and will update again when we have confirmed the fix.
Oct 4, 11:52 PDT Identified - We have identified the possible source of the increased errors and will update again when we have confirmed the fix.
Oct 4, 11:51 PDT Investigating - We are seeing an increase in HTTP 500 errors from our UI.

Last Update: A few months ago

Issues with Workflows

Sep 22, 16:58 PDT Resolved - We've identified and fixed the Workflows issue. Please reach out to support@circleci.com if you're still having any issues. Thanks!
Sep 22, 16:32 PDT Monitoring - A fix has been implemented and we are monitoring the results.
Sep 22, 16:15 PDT Investigating - We're currently investigating an issue with Workflows and the Workflows UI. We will update again in 30 minutes.

Last Update: A few months ago

2.0 Build System Degraded

Sep 20, 08:54 PDT Resolved - The 2.0 build system is again fully functional. If you experience any problems please reach out to our support team.
Sep 20, 08:39 PDT Monitoring - We are seeing improvement in DNS resolution and are monitoring the recovery of our systems.
Sep 20, 07:24 PDT Identified - Pulling Docker images from Docker Hub is currently being impacted by DNS lookup failures of docker.io, which we believe to be related to the current global DNS incident. Builds are being automatically retried to work around these DNS failures.
Sep 20, 06:52 PDT Investigating - We're seeing a large number of builds on the 2.0 build system failing on start and are in the process of investigating the cause.

Last Update: A few months ago

CircleCI 2.0 Workflow issue

Sep 15, 16:12 PDT Resolved - We have not seen any issues with Workflows and are marking this as resolved. Thank you for your patience as we worked through the issue.
Sep 15, 15:51 PDT Monitoring - A fix has been implemented and we are monitoring the results.
Sep 15, 15:51 PDT Identified - The CircleCI 2.0 Workflows feature had a brief outage of 10 minutes; any workflow triggered during that time will not have started. We have identified the issue and fixed it. We are monitoring to confirm the fix.

Last Update: A few months ago

Issues with 2.0

Sep 14, 14:12 PDT Resolved - The issue has been resolved. CircleCI Services have returned to normal. Please contact support@circleci.com if you continue to experience any issues.
Sep 14, 13:31 PDT Monitoring - The AWS S3 and Dockerhub issues have been resolved. CircleCI services are returning to normal operation. We will continue to monitor the situation closely. Next update in 30 minutes.
Sep 14, 13:15 PDT Update - We remain impacted by the AWS S3 and Dockerhub issue. AWS indicates they've identified the issue and are working hard on implementing a fix. Our services will remain impacted until S3 and Dockerhub return to normal operations. We will update again in 30 minutes.
Sep 14, 12:43 PDT Identified - The issue has been identified and a fix is being implemented.
Sep 14, 12:38 PDT Update - CircleCI Services are currently impacted as a result of issues with AWS S3 and Dockerhub. We are investigating the issue and doing everything we can to restore Services. Next update in 30 minutes.
Sep 14, 12:29 PDT Investigating - The 2.0 Build System is currently being impacted by issues with AWS S3 and Dockerhub. We will update again in 30 minutes.

Last Update: A few months ago

2.0 Build System Degradation

Sep 14, 07:55 PDT Resolved - The 2.0 build system is functioning properly again. If you continue to experience any problems please contact our support department.
Sep 14, 07:45 PDT Monitoring - A fix has been implemented and we are monitoring for any continuing problems.
Sep 14, 07:32 PDT Identified - The issue has been identified and we are pushing out a fix.
Sep 14, 07:00 PDT Investigating - We are investigating issues that are preventing some 2.0 builds from running correctly and will provide more information as it becomes available.

Last Update: A few months ago

Issues with 2.0 Build System

Sep 12, 13:47 PDT Resolved - We have resolved the incident with the 2.0 Build System. Please don't hesitate to reach out to support@circleci.com if you experience any issues. Thank you for your patience while we resolved the issue.
Sep 12, 13:23 PDT Monitoring - We have implemented a solution and will be closely monitoring the 2.0 Build System to ensure platform stability.
Sep 12, 12:48 PDT Update - Our engineers continue to work on bringing the 2.0 Build System back to full capacity. Next update in 30 minutes.
Sep 12, 12:12 PDT Identified - We have identified an issue with our 2.0 Build System and are working to restore full capacity. We will update again in 30 minutes.

Last Update: A few months ago

2.0 Build System Degraded

Sep 8, 08:08 PDT Resolved - This incident is now resolved. If you continue to see any problems please contact our support department.
Sep 8, 08:01 PDT Monitoring - We have implemented a fix and are monitoring the 2.0 build system to ensure that it is stable.
Sep 8, 07:49 PDT Update - We are still working on deploying a fix and will provide further information as it becomes available.
Sep 8, 07:19 PDT Identified - We've tracked down the issue to the infrastructure controlling VM builds specifically and are working to put a fix in place.
Sep 8, 06:58 PDT Update - We have implemented a fix but there is still an abnormal number of builds that are not processing correctly. We are continuing to investigate the cause behind these build failures.
Sep 8, 06:28 PDT Investigating - We've detected an interruption in builds running on our 2.0 infrastructure and are now investigating the cause. We will provide more information as it becomes available.

Last Update: A few months ago

2.0 Build Cluster is at reduced capacity

Sep 5, 20:42 PDT Resolved - The 2.0 Build Cluster has been running jobs and we see no current issues. Thank you for your patience as we solved this issue.
Sep 5, 19:58 PDT Monitoring - The backlog of jobs has been processed; we are monitoring the cluster and will update again in 20 minutes.
Sep 5, 19:45 PDT Update - The 2.0 Build Cluster is back online and we are processing the backlog of jobs. We will update in 20 minutes.
Sep 5, 19:20 PDT Update - We are still working on getting the 2.0 Build Cluster online. Will update in 20 minutes.
Sep 5, 18:50 PDT Update - We are continuing to work on restoring the 2.0 Build Cluster. Will update in 20 minutes.
Sep 5, 18:24 PDT Update - We are continuing the work to bring the 2.0 Build Cluster online. Will update in 20 minutes.
Sep 5, 18:05 PDT Update - We are still working on bringing the 2.0 Build Cluster online. Will update in 20 minutes.
Sep 5, 17:44 PDT Identified - We have identified the cause of the issue and are working on a solution. We will update again in 20 minutes.
Sep 5, 17:31 PDT Investigating - We are looking into the sudden reduction of capacity for our 2.0 Build Cluster. We will update again in 20 minutes.

Last Update: A few months ago

Missing tabs on build page

Aug 28, 22:27 PDT Resolved - This incident has been resolved.
Aug 28, 22:02 PDT Monitoring - We've updated our frontend assets; tabs should be visible on build pages again. Please reach out to support@circleci.com if you're having any issues. Thanks for bearing with us!
Aug 28, 21:16 PDT Identified - We are continuing to work on reverting frontend assets. Next update in 30 minutes.
Aug 28, 20:43 PDT Investigating - Tabs are currently missing on build pages. We are reverting our frontend assets now. Will update again in 30 minutes.

Last Update: A few months ago

Issues loading web dashboard

Aug 28, 17:05 PDT Resolved - This incident has been resolved.
Aug 28, 15:26 PDT Monitoring - Our engineers have implemented a fix to the dashboard issue. Please refresh your browser or reopen the dashboard in a new tab. Reach out to support@circleci.com if you're still experiencing any issues. Thanks for your patience!
Aug 28, 14:53 PDT Identified - We believe we have identified the issue with the web dashboard and are working on deploying an update. We will update again in 30 minutes with more information.
Aug 28, 14:44 PDT Investigating - We are currently investigating errors when loading the web dashboard. We will update with more details shortly.

Last Update: A few months ago

Some 2.0 builds are queued

Aug 23, 18:42 PDT Resolved - This incident has been resolved.
Aug 23, 17:44 PDT Monitoring - We have identified the issue and deployed a fix. We are monitoring the services and will update again in 20 minutes.

Last Update: A few months ago

GitHub issues impacting CircleCI UI and builds

Aug 21, 08:11 PDT Resolved - The GitHub API has recovered, we are receiving hooks at an expected rate, and our UI and builds are operating normally.
Aug 21, 07:35 PDT Monitoring - We are seeing recovery accessing the GitHub API and have begun receiving push hooks. Our UI is operational and builds are running.
Aug 21, 07:03 PDT Identified - We continue to see connectivity issues with GitHub, which are impacting both our user interface and our ability to run builds for GitHub projects.
Aug 21, 06:24 PDT Investigating - We're seeing issues calling the GitHub API, which is impacting our frontend and our ability to launch builds for GitHub projects. We have also seen a severe drop in incoming webhooks from GitHub notifying us of pushes to repositories.

Last Update: A few months ago

Intermittent site/UI issues

Aug 15, 15:20 PDT Resolved - The issue has been resolved. We will continue to monitor the situation. Please don't hesitate to reach out to support@circleci.com if you experience any further issues. Thank you for your patience!
Aug 15, 14:44 PDT Update - We are continuing to monitor the situation closely and will follow up with another update in 30 minutes.
Aug 15, 14:10 PDT Monitoring - We have identified and fixed an internal issue. Additionally, AWS IAM has resumed service. Builds and the UI are both back to normal and we are monitoring closely. We will update again in 30 minutes.
Aug 15, 13:54 PDT Update - We are still working on isolating the root cause of the issue. We're still seeing increased error rates for AWS IAM and believe there is a strong correlation. Next update in 20 minutes.
Aug 15, 13:12 PDT Update - We're continuing to investigate and watching closely for changes as AWS brings IAM back into service. Will update in 20 minutes.
Aug 15, 12:37 PDT Update - Our engineers continue to work towards identifying the UI and build queue issue. There has been a large increase in errors when making API calls to AWS, which we believe is affecting our services. We will update again in 20 minutes.
Aug 15, 12:20 PDT Update - We are still working on investigating the UI issue and build queues. Our team has narrowed down the issue and is working towards a solution. Next update in 20 minutes.
Aug 15, 11:56 PDT Update - Our engineers are continuing to investigate the UI and build queuing issues. Next update in 20 minutes.
Aug 15, 11:37 PDT Update - We're continuing to investigate issues with the UI and sporadic build queuing. We will update again in 20 minutes.
Aug 15, 11:17 PDT Investigating - We're currently experiencing intermittent issues with both build queues and our UI. We're investigating and will update in 20 minutes.

Last Update: A few months ago

Some 2.0 builds are queuing indefinitely

Aug 15, 07:26 PDT Resolved - This incident has been resolved.
Aug 15, 06:55 PDT Identified - We have identified an issue causing some builds on 2.0 to queue indefinitely. We are currently fixing the issue.

Last Update: A few months ago

Issue Loading Project Dashboard For Logged Out Users

Aug 12, 01:15 PDT Resolved - The fix has been deployed to production and users are able to see the project dashboard when they are not logged in.
Aug 12, 00:14 PDT Update - We have shipped a fix and expect it to be fully rolled out to production soon. Will provide another update in 60 minutes.
Aug 11, 23:54 PDT Identified - We have identified the root cause of this issue and are working on shipping a fix. Will provide an update in 20 minutes.
Aug 11, 23:40 PDT Investigating - We are investigating an issue with loading the project dashboard for a public project when a user is logged out. Will provide a status update in 30 minutes.

Last Update: A few months ago

Discuss Forum Degraded

Aug 11, 12:18 PDT Resolved - The support forum https://discuss.circleci.com/ is back at full operation.
Aug 11, 12:02 PDT Monitoring - Engineers have applied a fix and are monitoring the Discuss forum.
Aug 11, 11:51 PDT Update - Engineers are continuing work on the resolution.
Aug 11, 11:31 PDT Identified - The forum https://discuss.circleci.com/ is experiencing degraded performance. Engineers have identified the issue and are working on a solution.

Last Update: A few months ago

Workflows Service Database Upgrade

Aug 9, 13:21 PDT Scheduled - We will be upgrading the database used by our Workflows feature. The database will be upgraded and restarted; during this time all Workflow jobs will be queued and will start once the upgrade is complete. We expect this upgrade to take 20 minutes.

Last Update: A few months ago

UI failing to add new projects

Aug 9, 13:16 PDT Resolved - At this time the issue appears to have been resolved. If you continue to see problems please reach out to our support department.
Aug 9, 13:04 PDT Monitoring - We've deployed the fix successfully and are monitoring this issue to ensure it has been corrected.
Aug 9, 12:33 PDT Update - We're currently deploying a fix and will provide an update when we have further information.
Aug 9, 12:09 PDT Identified - We've identified the cause of this issue and are developing a fix. We will provide more information as it becomes available.
Aug 9, 11:59 PDT Update - We are continuing to investigate the cause of this issue and will provide an additional update in 20 minutes.
Aug 9, 11:38 PDT Investigating - We have received reports that adding new projects is not currently working correctly and are investigating the cause. We will provide a status update in 20 minutes.

Last Update: A few months ago

High number of failed 2.0 builds

Aug 7, 14:23 PDT Resolved - This incident has been resolved.
Aug 7, 14:02 PDT Monitoring - We have identified the issue and deployed a fix. We are monitoring the services and will update again in 20 minutes.
Aug 7, 13:44 PDT Investigating - We've detected an abnormal number of failures in our 2.0 build fleets and are investigating now. We will provide an additional update in 20 minutes.

Last Update: A few months ago

OS X builds queueing

Aug 4, 03:15 PDT Resolved - This incident has been resolved.
Aug 4, 02:53 PDT Monitoring - We have replaced impacted OS X builders to bypass the network issues and are no longer seeing problems.
Aug 4, 02:22 PDT Identified - We are draining the queue and have identified networking problems which we continue to work on.
Aug 4, 01:34 PDT Update - We are working on restoring OS X build availability while investigating networking issues in the underlying infrastructure. Will update again in 20 minutes.
Aug 4, 01:11 PDT Investigating - Some of the OS X builds are queueing; we are investigating the cause of the issue and will update in 20 minutes.

Last Update: A few months ago

Dispatch Database Upgrade

Jul 28, 22:27 PDT Completed - The scheduled maintenance has been completed.
Jul 28, 22:17 PDT Verifying - Verification is currently underway for the maintenance items.
Jul 28, 22:01 PDT In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jul 28, 12:28 PDT Scheduled - We will be upgrading the database that is involved in our build dispatching - during this time no jobs will be dispatched but they will continue to be queued. No loss of builds is expected and the total time of the outage is anticipated to be 15 minutes.

Last Update: A few months ago

Queued builds not starting

Jul 25, 16:51 PDT Resolved - All of the older queued builds have been processed. Thank you for your patience as we handled this incident.
Jul 25, 16:15 PDT Monitoring - We have processed all older builds and are monitoring the system for any other issues. We will update in 20 minutes.
Jul 25, 15:24 PDT Update - We have started to re-enqueue older builds where possible. We will update in 20 minutes.
Jul 25, 14:32 PDT Update - We are still working on the re-enqueue solution. We will update in 20 minutes.
Jul 25, 14:05 PDT Identified - We are working on re-enqueuing all stuck builds for which there is not a newer build. We will update in 20 minutes.
Jul 25, 13:09 PDT Investigating - We are looking into why queued builds are not starting.

Last Update: A few months ago

Queued builds not starting

Jul 25, 16:15 PDT Monitoring - We have processed all older builds and are monitoring the system for any other issues. We will update in 20 minutes.Jul 25, 15:24 PDT Update - We have started to re-enqueue older builds where possible. We will update in 20 minutes.Jul 25, 14:32 PDT Update - We are still working on the re-enqueue solution. We will update in 20 minutes.Jul 25, 14:05 PDT Identified - We are working on re-enqueuing all stuck builds for which there is not a newer build. We will update in 20 minutes.Jul 25, 13:09 PDT Investigating - We are looking into why queued builds are not starting

Last Update: A few months ago

Queued builds not starting

Jul 25, 15:24 PDT Update - We have started to re-enqueue older builds where possible. We will update in 20 minutes.Jul 25, 14:32 PDT Update - We are still working on the re-enqueue solution. We will update in 20 minutes.Jul 25, 14:05 PDT Identified - We are working on re-enqueuing all stuck builds for which there is not a newer build. We will update in 20 minutes.Jul 25, 13:09 PDT Investigating - We are looking into why queued builds are not starting

Last Update: A few months ago

Queued builds not starting

Jul 25, 14:32 PDT Update - We are still working on the re-enqueue solution. We will update in 20 minutes.Jul 25, 14:05 PDT Identified - We are working on re-enqueuing all stuck builds for which there is not a newer build. We will update in 20 minutes.Jul 25, 13:09 PDT Investigating - We are looking into why queued builds are not starting

Last Update: A few months ago

Queued builds not starting

Jul 25, 14:05 PDT Identified - We are working on re-enqueuing all stuck builds for which there is not a newer build. We will update in 20 minutes.Jul 25, 13:09 PDT Investigating - We are looking into why queued builds are not starting

Last Update: A few months ago

Queued builds not starting

Jul 25, 13:09 PDT Investigating - We are looking into why queued builds are not starting

Last Update: A few months ago

2.0 Build System Degraded

Jul 25, 12:56 PDT Resolved - The 2.0 builds are flowing normally and we have not seen this issue return for 20 minutes; thank you for your patience as we worked through the incident.Jul 25, 12:33 PDT Monitoring - We have fixed the underlying issue and are monitoring the services. We will update in 20 minutes.Jul 25, 12:15 PDT Update - We are working on restoring the 2.0 builds. We will update in 20 minutes.Jul 25, 11:48 PDT Identified - We have discovered the issue that is preventing new 2.0 jobs from starting and are working on a fix. We will update in 20 minutes.Jul 25, 11:23 PDT Investigating - The 2.0 build system has entered a degraded state. We are investigating the cause and will provide an update in 20 minutes.

Last Update: A few months ago

2.0 Build System Degraded

Jul 25, 12:33 PDT Monitoring - We have fixed the underlying issue and are monitoring the services. We will update in 20 minutes.Jul 25, 12:15 PDT Update - We are working on restoring the 2.0 builds. We will update in 20 minutes.Jul 25, 11:48 PDT Identified - We have discovered the issue that is preventing new 2.0 jobs from starting and are working on a fix. We will update in 20 minutes.Jul 25, 11:23 PDT Investigating - The 2.0 build system has entered a degraded state. We are investigating the cause and will provide an update in 20 minutes.

Last Update: A few months ago

2.0 Build System Degraded

Jul 25, 12:15 PDT Update - We are working on restoring the 2.0 builds. We will update in 20 minutes.Jul 25, 11:48 PDT Identified - We have discovered the issue that is preventing new 2.0 jobs from starting and are working on a fix. We will update in 20 minutes.Jul 25, 11:23 PDT Investigating - The 2.0 build system has entered a degraded state. We are investigating the cause and will provide an update in 20 minutes.

Last Update: A few months ago

2.0 Build System Degraded

Jul 25, 11:48 PDT Identified - We have discovered the issue that is preventing new 2.0 jobs from starting and are working on a fix. We will update in 20 minutes.Jul 25, 11:23 PDT Investigating - The 2.0 build system has entered a degraded state. We are investigating the cause and will provide an update in 20 minutes.

Last Update: A few months ago

2.0 Build System Degraded

Jul 25, 11:23 PDT Investigating - The 2.0 build system has entered a degraded state. We are investigating the cause and will provide an update in 20 minutes.

Last Update: A few months ago

Build system outage

Jul 24, 13:21 PDT Resolved - Our build system is fully recovered at this time. If you continue to see errors please reach out to our support team.Jul 24, 13:05 PDT Update - The build system has recovered and new builds are being processed as we receive them. We are continuing to monitor the recovery.Jul 24, 12:40 PDT Monitoring - We've identified the root cause and are monitoring the recovery.Jul 24, 12:30 PDT Investigating - We are seeing errors with new and running builds. We are investigating the cause and will update in 20 minutes.

Last Update: A few months ago

2.0 Build System Queueing

Jul 24, 13:21 PDT Resolved - Our build system is fully recovered at this time. If you continue to see errors please reach out to our support team.Jul 24, 12:40 PDT Monitoring - We've identified the root cause and are monitoring the recovery.Jul 24, 12:25 PDT Investigating - We are responding to alerts from our monitoring system indicating abnormal levels of queueing on our 2.0 build system.

Last Update: A few months ago

Build system outage

Jul 24, 13:05 PDT Update - The build system has recovered and new builds are being processed as we receive them. We are continuing to monitor the recovery.Jul 24, 12:40 PDT Monitoring - We've identified the root cause and are monitoring the recovery.Jul 24, 12:30 PDT Investigating - We are seeing errors with new and running builds. We are investigating the cause and will update in 20 minutes.

Last Update: A few months ago

2.0 Build System Queueing

Jul 24, 12:40 PDT Monitoring - We've identified the root cause and are monitoring the recovery.Jul 24, 12:25 PDT Investigating - We are responding to alerts from our monitoring system indicating abnormal levels of queueing on our 2.0 build system.

Last Update: A few months ago

Build system outage

Jul 24, 12:40 PDT Monitoring - We've identified the root cause and are monitoring the recovery.Jul 24, 12:30 PDT Investigating - We are seeing errors with new and running builds. We are investigating the cause and will update in 20 minutes.

Last Update: A few months ago

Build system outage

Jul 24, 12:30 PDT Investigating - We are seeing errors with new and running builds. We are investigating the cause and will update in 20 minutes.

Last Update: A few months ago

2.0 Build System Queueing

Jul 24, 12:25 PDT Investigating - We are responding to alerts from our monitoring system indicating abnormal levels of queueing on our 2.0 build system.

Last Update: A few months ago

Issue with Workflows UI

Jul 18, 15:58 PDT Resolved - The UI issue has been resolved. Thanks for your patience. Please reach out to support@circleci.com if you experience any further issues.Jul 18, 15:33 PDT Monitoring - The CircleCI UI is now working. We will monitor the solution and update again in 20 minutes.Jul 18, 15:08 PDT Update - We have determined the root cause and are deploying a solution. We will update again in 20 minutes.Jul 18, 14:37 PDT Identified - We're experiencing an issue with Workflows UI presently. Our engineers have identified the cause and are working on a fix. Next update in 20 minutes.

Last Update: A few months ago

Issue with Workflows UI

Jul 18, 15:33 PDT Monitoring - The CircleCI UI is now working. We will monitor the solution and update again in 20 minutes.Jul 18, 15:08 PDT Update - We have determined the root cause and are deploying a solution. We will update again in 20 minutes.Jul 18, 14:37 PDT Identified - We're experiencing an issue with Workflows UI presently. Our engineers have identified the cause and are working on a fix. Next update in 20 minutes.

Last Update: A few months ago

Issue with Workflows UI

Jul 18, 15:08 PDT Update - We have determined the root cause and are deploying a solution. We will update again in 20 minutes.Jul 18, 14:37 PDT Identified - We're experiencing an issue with Workflows UI presently. Our engineers have identified the cause and are working on a fix. Next update in 20 minutes.

Last Update: A few months ago

Issue with Workflows UI

Jul 18, 14:37 PDT Identified - We're experiencing an issue with Workflows UI presently. Our engineers have identified the cause and are working on a fix. Next update in 20 minutes.

Last Update: A few months ago

MacOS Build System Degradation

Jul 7, 09:50 PDT Resolved - At this time we have completed the rollback and confirmed that builds are being processed correctly. If you continue to see any problems please contact our support department at support@circleci.com and let us know.Jul 7, 09:17 PDT Identified - During a routine upgrade of our build system we discovered the upgraded infrastructure was not working correctly. We are rolling back the upgrade now but in the meantime some MacOS builds may not run correctly.

Last Update: A few months ago

MacOS Build System Degradation

Jul 7, 09:17 PDT Identified - During a routine upgrade of our build system we discovered the upgraded infrastructure was not working correctly. We are rolling back the upgrade now but in the meantime some MacOS builds may not run correctly.

Last Update: A few months ago

MacOS build system outage

Jun 30, 17:38 PDT Resolved - At this time the MacOS build system is functioning again and all builds that were queued have been processed. If you continue to see any issues please reach out to our support department.Jun 30, 16:59 PDT Identified - Our mobile build system infrastructure is currently experiencing an outage on systems running our Xcode 8.3.2 build image. We are working to ensure that this is resolved but in the meantime builds using Xcode 8.3.2 may be severely delayed.

Last Update: A few months ago

MacOS build system outage

Jun 30, 16:59 PDT Identified - Our mobile build system infrastructure is currently experiencing an outage on systems running our Xcode 8.3.2 build image. We are working to ensure that this is resolved but in the meantime builds using Xcode 8.3.2 may be severely delayed.

Last Update: A few months ago

Docker is not usable on CircleCI 1.0

Jun 23, 10:40 PDT Resolved - We have marked this as resolved; thank you for your patience during this event.Jun 23, 08:32 PDT Monitoring - At this time all new builds should be able to use Docker. We will continue to monitor this issue to ensure the incident is properly resolved.Jun 23, 07:56 PDT Update - An update with a fix is in the process of rolling out.Jun 23, 06:58 PDT Update - We have started rolling out a fix.Jun 23, 06:09 PDT Identified - We've identified an issue with Docker builds in CircleCI 1.0. An update is being rolled out to correct the issue.

Last Update: A few months ago

Docker is not usable on CircleCI 1.0

Jun 23, 08:32 PDT Monitoring - At this time all new builds should be able to use Docker. We will continue to monitor this issue to ensure the incident is properly resolved.Jun 23, 07:56 PDT Update - An update with a fix is in the process of rolling out.Jun 23, 06:58 PDT Update - We have started rolling out a fix.Jun 23, 06:09 PDT Identified - We've identified an issue with Docker builds in CircleCI 1.0. An update is being rolled out to correct the issue.

Last Update: A few months ago

Docker is not usable on CircleCI 1.0

Jun 23, 07:56 PDT Update - An update with a fix is in the process of rolling out.Jun 23, 06:58 PDT Update - We have started rolling out a fix.Jun 23, 06:09 PDT Identified - We've identified an issue with Docker builds in CircleCI 1.0. An update is being rolled out to correct the issue.

Last Update: A few months ago

Docker is not usable on CircleCI 1.0

Jun 23, 06:58 PDT Update - We have started rolling out a fix.Jun 23, 06:09 PDT Identified - We've identified an issue with Docker builds in CircleCI 1.0. An update is being rolled out to correct the issue.

Last Update: A few months ago

Docker is not usable on CircleCI 1.0

Jun 23, 06:09 PDT Identified - We've identified an issue with Docker builds in CircleCI 1.0. An update is being rolled out to correct the issue

Last Update: A few months ago

2.0 Build System Degraded

Jun 22, 17:31 PDT Resolved - We have seen no further build failures at this time. Please reach out to support@circleci.com if you continue to see any unusual build failures. Thanks for your patience!Jun 22, 17:22 PDT Monitoring - We have identified a potential cause of this issue and put in place a fix. We will monitor this problem to ensure it is resolved properly.Jun 22, 17:18 PDT Investigating - We're seeing a large number of build failures in our 2.0 build system and are currently investigating the root cause. We will update this outage as more information becomes available.

Last Update: A few months ago

2.0 Build System Degraded

Jun 22, 17:22 PDT Monitoring - We have identified a potential cause of this issue and put in place a fix. We will monitor this problem to ensure it is resolved properly.Jun 22, 17:18 PDT Investigating - We're seeing a large number of build failures in our 2.0 build system and are currently investigating the root cause. We will update this outage as more information becomes available.

Last Update: A few months ago

2.0 Build System Degraded

Jun 22, 17:18 PDT Investigating - We're seeing a large number of build failures in our 2.0 build system and are currently investigating the root cause. We will update this outage as more information becomes available.

Last Update: A few months ago

2.0 builds are queueing

Jun 19, 16:45 PDT Resolved - We have not seen any recurring 2.0 Build Queues during our monitoring period. Thank you for your patience during the event.Jun 19, 16:16 PDT Monitoring - A fix has been implemented and we are monitoring the results.Jun 19, 16:16 PDT Update - We have deployed a fix for the 2.0 Build Queue issue and are monitoring. We will update again in 20 minutes.Jun 19, 15:32 PDT Identified - We have located the issue causing 2.0 Build queuing and are working on a fix. We will update again in 20 minutes.Jun 19, 15:10 PDT Update - We are still exploring why 2.0 Builds are queuing. We will update again in 20 minutes.Jun 19, 14:05 PDT Investigating - We're investigating an issue that is making 2.0 builds queue.

Last Update: A few months ago

2.0 builds are queueing

Jun 19, 16:16 PDT Monitoring - A fix has been implemented and we are monitoring the results.Jun 19, 16:16 PDT Update - We have deployed a fix for the 2.0 Build Queue issue and are monitoring. We will update again in 20 minutes.Jun 19, 15:32 PDT Identified - We have located the issue causing 2.0 Build queuing and are working on a fix. We will update again in 20 minutes.Jun 19, 15:10 PDT Update - We are still exploring why 2.0 Builds are queuing. We will update again in 20 minutes.Jun 19, 14:05 PDT Investigating - We're investigating an issue that is making 2.0 builds queue.

Last Update: A few months ago

2.0 builds are queueing

Jun 19, 15:32 PDT Identified - We have located the issue causing 2.0 Build queuing and are working on a fix. We will update again in 20 minutes.Jun 19, 15:10 PDT Update - We are still exploring why 2.0 Builds are queuing. We will update again in 20 minutes.Jun 19, 14:05 PDT Investigating - We're investigating an issue that is making 2.0 builds queue.

Last Update: A few months ago

2.0 builds are queueing

Jun 19, 15:10 PDT Update - We are still exploring why 2.0 Builds are queuing. We will update again in 20 minutes.Jun 19, 14:05 PDT Investigating - We're investigating an issue that is making 2.0 builds queue.

Last Update: A few months ago

2.0 builds are queueing

Jun 19, 14:05 PDT Investigating - We're investigating an issue that is making 2.0 builds queue.

Last Update: A few months ago

2.0 Build System Performance Degradation

Jun 19, 10:31 PDT Resolved - We are no longer seeing any problems starting 2.0 builds and the system appears to be working as expected. If you see any issues though please reach out to our support department.Jun 19, 09:54 PDT Monitoring - We are no longer seeing errors starting up 2.0 builds and are monitoring the build system for any regressions.Jun 19, 09:37 PDT Identified - We are experiencing failures running some builds with machine executor and builds requiring remote docker engine. We're investigating solutions and will provide more information as it becomes available.

Last Update: A few months ago

2.0 Build System Performance Degradation

Jun 19, 09:54 PDT Monitoring - We are no longer seeing errors starting up 2.0 builds and are monitoring the build system for any regressions.Jun 19, 09:37 PDT Identified - We are experiencing failures running some builds with machine executor and builds requiring remote docker engine. We're investigating solutions and will provide more information as it becomes available.

Last Update: A few months ago

2.0 Build System Performance Degradation

Jun 19, 09:37 PDT Identified - We are experiencing failures running some builds with machine executor and builds requiring remote docker engine. We're investigating solutions and will provide more information as it becomes available.

Last Update: A few months ago

Build System Degradation

Jun 14, 07:59 PDT Resolved - We've identified and corrected the problem with our build system. Builds are processing correctly again but if you continue to see any issues please contact our support team.Jun 14, 07:52 PDT Investigating - We're seeing builds beginning to queue and are investigating the cause. Builds may take some time before they start running but we will release more information as it becomes available.

Last Update: A few months ago

Build System Degradation

Jun 14, 07:52 PDT Investigating - We're seeing builds beginning to queue and are investigating the cause. Builds may take some time before they start running but we will release more information as it becomes available.

Last Update: A few months ago

macOS build queue

Jun 12, 21:09 PDT Resolved - The queue has been processed and builds are running normally. If you continue to see any delays please open a ticket with our support department.Jun 12, 15:10 PDT Identified - Builds on Xcode 8.3 are currently queuing due to a processing backlog.

Last Update: A few months ago

macOS build queue

Jun 12, 15:10 PDT Identified - Builds on Xcode 8.3 are currently queuing due to a processing backlog.

Last Update: A few months ago

Environment Variables Leaked on Open-Source Projects Allowing Pull Request Builds

Jun 5, 15:01 PDT Resolved - CircleCI engineers identified a bug late Friday afternoon affecting open-source projects that allow building pull requests. By default, fork pull request builds were passed any environment variables the parent repo is configured with, despite the project settings indicating otherwise. An engineer patched the issue Friday for the SaaS product and released an Enterprise version Saturday. The issue affected a small subset of users, each of whom was personally notified Saturday so project administrators had time to rotate credentials ahead of this public disclosure. If you didn't receive an email, this disclosure does not affect you. You can also double-check fork pull request behavior under the "Advanced Settings" section in projects. There is no evidence of lost data or known malicious behavior as a result of the issue.

Last Update: A few months ago

Major GitHub outage affecting builds and the CircleCI webapp

May 31, 07:59 PDT Resolved - This incident has been resolved.May 31, 07:57 PDT Update - API responses from GitHub are now performing normally.May 31, 06:08 PDT Update - GitHub have declared the outage resolved and we are starting to see incoming GitHub hooks. Builds are being triggered again. However we are still seeing failures with the GitHub API. This continues to prevent our webapp from fetching data from GitHub. We are monitoring the situation and will ensure sufficient capacity for when their service resumes normal operations.May 31, 05:45 PDT Monitoring - GitHub have declared a major outage. This is currently preventing many builds from starting and also prevents our webapp from fetching data from GitHub. We are monitoring the situation and will ensure sufficient capacity for when their service resumes normal operations.

Last Update: A few months ago

Major GitHub outage affecting builds and the CircleCI webapp

May 31, 07:57 PDT Update - API responses from GitHub are now performing normally.May 31, 06:08 PDT Update - GitHub have declared the outage resolved and we are starting to see incoming GitHub hooks. Builds are being triggered again. However we are still seeing failures with the GitHub API. This continues to prevent our webapp from fetching data from GitHub. We are monitoring the situation and will ensure sufficient capacity for when their service resumes normal operations.May 31, 05:45 PDT Monitoring - GitHub have declared a major outage. This is currently preventing many builds from starting and also prevents our webapp from fetching data from GitHub. We are monitoring the situation and will ensure sufficient capacity for when their service resumes normal operations.

Last Update: A few months ago

Major GitHub outage affecting builds and the CircleCI webapp

May 31, 06:08 PDT Update - GitHub have declared the outage resolved and we are starting to see incoming GitHub hooks. Builds are being triggered again. However we are still seeing failures with the GitHub API. This continues to prevent our webapp from fetching data from GitHub. We are monitoring the situation and will ensure sufficient capacity for when their service resumes normal operations.May 31, 05:45 PDT Monitoring - GitHub have declared a major outage. This is currently preventing many builds from starting and also prevents our webapp from fetching data from GitHub. We are monitoring the situation and will ensure sufficient capacity for when their service resumes normal operations.

Last Update: A few months ago

Major GitHub outage affecting builds and the CircleCI webapp

May 31, 05:45 PDT Monitoring - GitHub have declared a major outage. This is currently preventing many builds from starting and also prevents our webapp from fetching data from GitHub. We are monitoring the situation and will ensure sufficient capacity for when their service resumes normal operations.

Last Update: A few months ago

2.0 Build System Queue Event

May 30, 10:33 PDT Resolved - The queue has been processed and all builds should begin as soon as they are received now.May 30, 10:18 PDT Identified - We are experiencing large volume queuing on our 2.0 build system and are currently working to process incoming builds. Builds are still running, it just may take some time for new builds to start running.

Last Update: A few months ago

2.0 Build System Queue Event

May 30, 10:18 PDT Identified - We are experiencing large volume queuing on our 2.0 build system and are currently working to process incoming builds. Builds are still running, it just may take some time for new builds to start running.

Last Update: A few months ago

MacOS build system outage

May 26, 12:58 PDT Resolved - The MacOS build queue is drained, thanks for your patience.May 26, 09:12 PDT Monitoring - Builds are running correctly again and we are monitoring the situation. New MacOS builds may continue to queue as we process the existing build queue.May 26, 09:03 PDT Investigating - We are seeing a drop in the number of available builders for our MacOS build fleet and are investigating the cause. Builds may queue in the meantime.

Last Update: A few months ago

MacOS build system outage

May 26, 09:12 PDT Monitoring - Builds are running correctly again and we are monitoring the situation. New MacOS builds may continue to queue as we process the existing build queue.May 26, 09:03 PDT Investigating - We are seeing a drop in the number of available builders for our MacOS build fleet and are investigating the cause. Builds may queue in the meantime.

Last Update: A few months ago

MacOS build system outage

May 26, 09:03 PDT Investigating - We are seeing a drop in the number of available builders for our MacOS build fleet and are investigating the cause. Builds may queue in the meantime.

Last Update: A few months ago

Trusty fleet build queue

May 21, 12:40 PDT Resolved - During the weekend some Trusty build jobs were starved of resources and could not be queued.

Last Update: A few months ago

Real-time Build Update Outage

May 10, 03:21 PDT Resolved - This incident has been resolved.May 10, 02:13 PDT Monitoring - The fix is fully rolled out; we are monitoring to ensure real-time build updates are working as expected.May 10, 00:58 PDT Update - We identified a possible cause for the real-time build updates issue and are currently working on the fix.May 9, 22:50 PDT Investigating - Our real-time build updates are currently not processing correctly so individual builds are not pushing their output to our UI. Builds are still processing normally but you may need to refresh the build page to view updated output.

Last Update: A few months ago

Real-time Build Update Outage

May 10, 02:13 PDT Monitoring - The fix is fully rolled out; we are monitoring to ensure real-time build updates are working as expected.May 10, 00:58 PDT Update - We identified a possible cause for the real-time build updates issue and are currently working on the fix.May 9, 22:50 PDT Investigating - Our real-time build updates are currently not processing correctly so individual builds are not pushing their output to our UI. Builds are still processing normally but you may need to refresh the build page to view updated output.

Last Update: A few months ago

Real-time Build Update Outage

May 10, 00:58 PDT Update - We identified a possible cause for the real-time build updates issue and are currently working on the fix.May 9, 22:50 PDT Investigating - Our real-time build updates are currently not processing correctly so individual builds are not pushing their output to our UI. Builds are still processing normally but you may need to refresh the build page to view updated output.

Last Update: A few months ago

Real-time Build Update Outage

May 9, 22:50 PDT Investigating - Our real-time build updates are currently not processing correctly so individual builds are not pushing their output to our UI. Builds are still processing normally but you may need to refresh the build page to view updated output.

Last Update: A few months ago

Docker/VM-creation failure

Apr 26, 13:26 PDT Resolved - We're no longer seeing any Docker/VM issues on 2.0. Please reach out to support@circleci.com if you have any issues. Thanks for your patience!Apr 26, 13:11 PDT Monitoring - Our engineers have identified the issue and implemented a fix. We'll continue to monitor the situation.Apr 26, 12:58 PDT Investigating - We are currently investigating issues with Docker/VM-creation on 2.0; next update in 20 minutes.

Last Update: A few months ago

Docker/VM-creation failure

Apr 26, 13:11 PDT Monitoring - Our engineers have identified the issue and implemented a fix. We'll continue to monitor the situation.Apr 26, 12:58 PDT Investigating - We are currently investigating issues with Docker/VM-creation on 2.0; next update in 20 minutes.

Last Update: A few months ago

Docker/VM-creation failure

Apr 26, 12:58 PDT Investigating - We are currently investigating issues with Docker/VM-creation on 2.0; next update in 20 minutes.

Last Update: A few months ago

Temporary AWS Keys Appeared In Artifact Logs

Apr 21, 10:42 PDT Resolved - AWS credentials valid for 30 minutes were revealed in build output, during their validity window, to users who already had access to the build.

Last Update: A few months ago

Build queuing on 2.0

Apr 20, 10:02 PDT Resolved - We have identified the issue and implemented a solution; builds should be running on 2.0 without issue. Please contact our awesome support team at support@circleci.com if you experience any issues. Thanks for your patience!Apr 20, 09:48 PDT Investigating - We are currently investigating queued builds on 2.0; we will update again in 20 minutes.

Last Update: A few months ago

Build queuing on 2.0

Apr 20, 09:48 PDT Investigating - We are currently investigating queued builds on 2.0; we will update again in 20 minutes.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 19:53 PDT Resolved - The CircleCI 2.0 build environment is fully operational. Please contact support if you have any remaining issues.Apr 8, 19:18 PDT Monitoring - We are processing builds with the CircleCI 2.0 fleet again, we will continue to monitor the recovery and will update in 20 minutes.Apr 8, 18:56 PDT Update - We are doing a controlled restart of the services and checking the node environment before spinning up additional resources. We'll update again in 30 minutes.Apr 8, 18:13 PDT Update - We're continuing to validate our reconfiguration and working on getting the cluster's nodes to converge. Next update in 30 minutes. Thank you for your patience.Apr 8, 17:37 PDT Update - We have restored data on the failed system and are validating reconfiguration before bringing the cluster back online. Will update again within 30 minutes.Apr 8, 16:59 PDT Update - "Our engineers are working on restoring data to a failed system on 2.0. We'll update again in 30 minutes. Thanks for your patience."Apr 8, 15:25 PDT Update - Nothing new to report; we're still working hard on getting 2.0 fully operational. We'll update again in one hour.Apr 8, 14:52 PDT Update - Our engineers continue to work on resolving issues with our 2.0 build system. We'll update again in 30 minutes. Thank you for your patience.Apr 8, 14:08 PDT Update - We are continuing to work on implementing a fix and will update this status again in 30 minutes.Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 19:18 PDT Monitoring - We are processing builds with the CircleCI 2.0 fleet again, we will continue to monitor the recovery and will update in 20 minutes.Apr 8, 18:56 PDT Update - We are doing a controlled restart of the services and checking the node environment before spinning up additional resources. We'll update again in 30 minutes.Apr 8, 18:13 PDT Update - We're continuing to validate our reconfiguration and working on getting the cluster's nodes to converge. Next update in 30 minutes. Thank you for your patience.Apr 8, 17:37 PDT Update - We have restored data on the failed system and are validating reconfiguration before bringing the cluster back online. Will update again within 30 minutes.Apr 8, 16:59 PDT Update - "Our engineers are working on restoring data to a failed system on 2.0. We'll update again in 30 minutes. Thanks for your patience."Apr 8, 15:25 PDT Update - Nothing new to report; we're still working hard on getting 2.0 fully operational. We'll update again in one hour.Apr 8, 14:52 PDT Update - Our engineers continue to work on resolving issues with our 2.0 build system. We'll update again in 30 minutes. Thank you for your patience.Apr 8, 14:08 PDT Update - We are continuing to work on implementing a fix and will update this status again in 30 minutes.Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 18:56 PDT Update - We are doing a controlled restart of the services and checking the node environment before spinning up additional resources. We'll update again in 30 minutes.Apr 8, 18:13 PDT Update - We're continuing to validate our reconfiguration and working on getting the cluster's nodes to converge. Next update in 30 minutes. Thank you for your patience.Apr 8, 17:37 PDT Update - We have restored data on the failed system and are validating reconfiguration before bringing the cluster back online. Will update again within 30 minutes.Apr 8, 16:59 PDT Update - "Our engineers are working on restoring data to a failed system on 2.0. We'll update again in 30 minutes. Thanks for your patience."Apr 8, 15:25 PDT Update - Nothing new to report; we're still working hard on getting 2.0 fully operational. We'll update again in one hour.Apr 8, 14:52 PDT Update - Our engineers continue to work on resolving issues with our 2.0 build system. We'll update again in 30 minutes. Thank you for your patience.Apr 8, 14:08 PDT Update - We are continuing to work on implementing a fix and will update this status again in 30 minutes.Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 18:13 PDT Update - We're continuing to validate our reconfiguration and working on getting the cluster's nodes to converge. Next update in 30 minutes. Thank you for your patience.Apr 8, 17:37 PDT Update - We have restored data on the failed system and are validating reconfiguration before bringing the cluster back online. Will update again within 30 minutes.Apr 8, 16:59 PDT Update - "Our engineers are working on restoring data to a failed system on 2.0. We'll update again in 30 minutes. Thanks for your patience."Apr 8, 15:25 PDT Update - Nothing new to report; we're still working hard on getting 2.0 fully operational. We'll update again in one hour.Apr 8, 14:52 PDT Update - Our engineers continue to work on resolving issues with our 2.0 build system. We'll update again in 30 minutes. Thank you for your patience.Apr 8, 14:08 PDT Update - We are continuing to work on implementing a fix and will update this status again in 30 minutes.Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 17:37 PDT Update - We have restored data on the failed system and are validating reconfiguration before bringing the cluster back online. Will update again within 30 minutes.Apr 8, 16:59 PDT Update - "Our engineers are working on restoring data to a failed system on 2.0. We'll update again in 30 minutes. Thanks for your patience."Apr 8, 15:25 PDT Update - Nothing new to report; we're still working hard on getting 2.0 fully operational. We'll update again in one hour.Apr 8, 14:52 PDT Update - Our engineers continue to work on resolving issues with our 2.0 build system. We'll update again in 30 minutes. Thank you for your patience.Apr 8, 14:08 PDT Update - We are continuing to work on implementing a fix and will update this status again in 30 minutes.Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 16:59 PDT Update - "Our engineers are working on restoring data to a failed system on 2.0. We'll update again in 30 minutes. Thanks for your patience."Apr 8, 15:25 PDT Update - Nothing new to report; we're still working hard on getting 2.0 fully operational. We'll update again in one hour.Apr 8, 14:52 PDT Update - Our engineers continue to work on resolving issues with our 2.0 build system. We'll update again in 30 minutes. Thank you for your patience.Apr 8, 14:08 PDT Update - We are continuing to work on implementing a fix and will update this status again in 30 minutes.Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 15:25 PDT Update - Nothing new to report; we're still working hard on getting 2.0 fully operational. We'll update again in one hour.Apr 8, 14:52 PDT Update - Our engineers continue to work on resolving issues with our 2.0 build system. We'll update again in 30 minutes. Thank you for your patience.Apr 8, 14:08 PDT Update - We are continuing to work on implementing a fix and will update this status again in 30 minutes.Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 14:52 PDT Update - Our engineers continue to work on resolving issues with our 2.0 build system. We'll update again in 30 minutes. Thank you for your patience.Apr 8, 14:08 PDT Update - We are continuing to work on implementing a fix and will update this status again in 30 minutes.Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 14:08 PDT Update - We are continuing to work on implementing a fix and will update this status again in 30 minutes.Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 13:27 PDT Update - We continue to work on resolving this incident and will provide updates as they become available.Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 12:53 PDT Update - We are still working on implementing a solution to this outage and will continue to keep you updated on the status of this incident.Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 12:24 PDT Identified - We've identified the issue affecting our 2.0 build environment and are now working on a fix.Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

CircleCI 2.0 Build Environment Outage

Apr 8, 11:57 PDT Investigating - We've been alerted to an outage on our 2.0 build system and are investigating the cause. We will keep you informed when we have further information.

Last Update: A few months ago

Mobile/OS X Build Fleet run-queue

Mar 26, 21:25 PDT Resolved - The Mobile/OS X build system has recovered at this time. If you see any further issues, please contact our support department.Mar 26, 21:03 PDT Monitoring - We have identified an issue with our datacenter provider and a fix has been implemented. We are currently monitoring and builds should begin to run soon.Mar 26, 20:53 PDT Investigating - Our OS X fleet is currently experiencing issues and causing builds to queue. We're investigating and will update as we get more information.

Last Update: A few months ago

OS X Fleet Power Maintenance

Mar 27, 21:00 PDT Completed - The scheduled maintenance has been completed.Mar 27, 20:00 PDT In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Mar 27, 15:08 PDT Scheduled - Our data center vendor is performing a power supply maintenance that will impact our OS X build fleet.

Last Update: A few months ago

OS X Build Environment Inference Outage

Mar 28, 08:18 PDT Resolved - Builds are running successfully and we have recovered from the build queues. If you see any further issues please reach out to support.Mar 28, 07:42 PDT Monitoring - We have deployed a fix and are monitoring new builds. There is currently a build queue so it may take a few minutes before your builds run.Mar 28, 07:11 PDT Identified - We have found a problem with our inference system in our OS X build environment and are working to correct the problem. You may expect interrupted or retried builds while we fix the issue.

Last Update: A few months ago

Trusty Queuing

Mar 28, 20:32 PDT Resolved - Ubuntu-14.04 "Trusty" fleet is operational again. Thank you for your patience.Mar 28, 20:26 PDT Monitoring - We have deployed a fix and the queue has drained. We will continue to monitor.Mar 28, 20:18 PDT Investigating - Ubuntu-14.04 "Trusty" fleet is queueing. We are investigating the cause.

Last Update: A few months ago

Dashboard UI and Builds Degraded

Mar 22, 03:16 PDT Resolved - This incident has been corrected and our system has fully recovered. If you have any further issues please do contact support.Mar 22, 02:53 PDT Monitoring - Builds are now dispatching normally and the UI is active again. We will monitor the system for further disruptions.Mar 22, 02:37 PDT Identified - We have identified the cause of this incident and have begun to run new builds.Mar 22, 02:21 PDT Update - The investigation into this outage is ongoing. We will provide status updates as this situation progresses.Mar 22, 01:51 PDT Update - We are continuing to investigate the cause of this issue.Mar 22, 01:18 PDT Investigating - We are detecting increases in the Circle UI and build system error rates and are investigating the cause now.

Last Update: A few months ago

Dashboard UI and Builds Degraded

Mar 22, 02:53 PDT Monitoring - Builds are now dispatching normally and the UI is active again. We will monitor the system for further disruptions.Mar 22, 02:37 PDT Identified - We have identified the cause of this incident and have begun to run new builds.Mar 22, 02:21 PDT Update - The investigation into this outage is ongoing. We will provide status updates as this situation progresses.Mar 22, 01:51 PDT Update - We are continuing to investigate the cause of this issue.Mar 22, 01:18 PDT Investigating - We are detecting increases in the Circle UI and build system error rates and are investigating the cause now.

Last Update: A few months ago

Dashboard UI and Builds Degraded

Mar 22, 02:37 PDT Identified - We have identified the cause of this incident and have begun to run new builds.Mar 22, 02:21 PDT Update - The investigation into this outage is ongoing. We will provide status updates as this situation progresses.Mar 22, 01:51 PDT Update - We are continuing to investigate the cause of this issue.Mar 22, 01:18 PDT Investigating - We are detecting increases in the Circle UI and build system error rates and are investigating the cause now.

Last Update: A few months ago

Dashboard UI and Builds Degraded

Mar 22, 02:21 PDT Update - The investigation into this outage is ongoing. We will provide status updates as this situation progresses.Mar 22, 01:51 PDT Update - We are continuing to investigate the cause of this issue.Mar 22, 01:18 PDT Investigating - We are detecting increases in the Circle UI and build system error rates and are investigating the cause now.

Last Update: A few months ago

Dashboard UI and Builds Degraded

Mar 22, 01:51 PDT Update - We are continuing to investigate the cause of this issue.Mar 22, 01:18 PDT Investigating - We are detecting increases in the Circle UI and build system error rates and are investigating the cause now.

Last Update: A few months ago

Dashboard UI and Builds Degraded

Mar 22, 01:18 PDT Investigating - We are detecting increases in the Circle UI and build system error rates and are investigating the cause now.

Last Update: A few months ago

New User Signups

Mar 17, 11:21 PDT Resolved - The issue has been resolved; thank you for your patience with us and your support.Mar 17, 10:49 PDT Monitoring - A fix for this issue has been deployed; we are confirming that it has solved the issue and monitoring.Mar 17, 10:17 PDT Update - We are working on deploying a fix and will update when that is live.Mar 17, 09:54 PDT Identified - The current workaround is for new users to navigate to https://circleci.com/add-projects until our fix is deployed.Mar 17, 09:52 PDT Investigating - We are currently seeing an issue with new user signup and are looking into the root cause.

Last Update: A few months ago

New User Signups

Mar 17, 10:49 PDT Monitoring - A fix for this issue has been deployed; we are confirming that it has solved the issue and monitoring.Mar 17, 10:17 PDT Update - We are working on deploying a fix and will update when that is live.Mar 17, 09:54 PDT Identified - The current workaround is for new users to navigate to https://circleci.com/add-projects until our fix is deployed.Mar 17, 09:52 PDT Investigating - We are currently seeing an issue with new user signup and are looking into the root cause.

Last Update: A few months ago

New User Signups

Mar 17, 10:17 PDT Update - We are working on deploying a fix and will update when that is live.Mar 17, 09:54 PDT Identified - The current workaround is for new users to navigate to https://circleci.com/add-projects until our fix is deployed.Mar 17, 09:52 PDT Investigating - We are currently seeing an issue with new user signup and are looking into the root cause.

Last Update: A few months ago

New User Signups

Mar 17, 09:54 PDT Identified - The current workaround is for new users to navigate to https://circleci.com/add-projects until our fix is deployed.Mar 17, 09:52 PDT Investigating - We are currently seeing an issue with new user signup and are looking into the root cause.

Last Update: A few months ago

New User Signups

Mar 17, 09:52 PDT Investigating - We are currently seeing an issue with new user signup and are looking into the root cause.

Last Update: A few months ago

Trusty build fleet queues

Mar 16, 08:59 PDT Resolved - Trusty fleet build queue

Last Update: A few months ago

NVM timeouts impacting builds

Mar 16, 08:55 PDT Postmortem - We received reports of builds timing out when trying to set the Node.js version. This appears to have been caused by nvm not being able to obtain the complete list of Node.js versions from the Node servers. The issue self-resolved while we were investigating where the connectivity issues were coming from.Mar 16, 08:55 PDT Resolved - We received reports of builds timing out when trying to set the Node.js version.

Last Update: A few months ago

OS X builds are queueing

Mar 7, 08:49 PST Resolved - This incident has been resolved.Mar 7, 08:12 PST Monitoring - The queue is now empty and builds are being dispatched correctly. We're continuing to monitor the situation.Mar 7, 07:30 PST Identified - Builds are being dispatched again, but we're still investigating the underlying cause.Mar 7, 06:55 PST Investigating - OS X builds are queueing. We are investigating the reason for that and will update in 30 mins.

Last Update: A few months ago

OS X builds are queueing

Mar 7, 08:12 PST Monitoring - The queue is now empty and builds are being dispatched correctly. We're continuing to monitor the situation.Mar 7, 07:30 PST Identified - Builds are being dispatched again, but we're still investigating the underlying cause.Mar 7, 06:55 PST Investigating - OS X builds are queueing. We are investigating the reason for that and will update in 30 mins.

Last Update: A few months ago

OS X builds are queueing

Mar 7, 07:30 PST Identified - Builds are being dispatched again, but we're still investigating the underlying cause.Mar 7, 06:55 PST Investigating - OS X builds are queueing. We are investigating the reason for that and will update in 30 mins.

Last Update: A few months ago

OS X builds are queueing

Mar 7, 06:55 PST Investigating - OS X builds are queueing. We are investigating the reason for that and will update in 30 mins.

Last Update: A few months ago

Slow loading times for dashboard and build page

Mar 1, 14:16 PST Resolved - Our engineers have implemented a fix and the issue is resolved. Please don't hesitate to reach out to support@circleci.com if you have any issues. Thank you for your patience and continued support.Mar 1, 13:53 PST Monitoring - A fix has been implemented and we are monitoring the results.Mar 1, 13:46 PST Identified - Our engineers have identified the issue and are implementing a fix. Next update in 30 minutes.Mar 1, 13:15 PST Investigating - We're currently investigating reports of slow loading times for the dashboard and build pages. We'll update again in 30 minutes.

Last Update: A few months ago

Slow loading times for dashboard and build page

Mar 1, 13:53 PST Monitoring - A fix has been implemented and we are monitoring the results.Mar 1, 13:46 PST Identified - Our engineers have identified the issue and are implementing a fix. Next update in 30 minutes.Mar 1, 13:15 PST Investigating - We're currently investigating reports of slow loading times for the dashboard and build pages. We'll update again in 30 minutes.

Last Update: A few months ago

Slow loading times for dashboard and build page

Mar 1, 13:46 PST Identified - Our engineers have identified the issue and are implementing a fix. Next update in 30 minutes.Mar 1, 13:15 PST Investigating - We're currently investigating reports of slow loading times for the dashboard and build pages. We'll update again in 30 minutes.

Last Update: A few months ago

Slow loading times for dashboard and build page

Mar 1, 13:15 PST Investigating - We're currently investigating reports of slow loading times for the dashboard and build pages. We'll update again in 30 minutes.

Last Update: A few months ago

Issues with S3 and Job Starts

Feb 28, 17:04 PST Resolved - We have seen no remaining issues and as such we are marking this incident as resolved. Thank you for your patience and continued support.Feb 28, 16:44 PST Update - The backlog has been processed and we are going to monitor the situation for another 20 minutes to ensure that we are in the clear.Feb 28, 16:28 PST Update - The backlog of queued builds continues as we maintain a higher-than-normal level of resources. We will update again in 30 minutes.Feb 28, 15:50 PST Update - The backlog of queued builds is being processed. We've brought additional resources online to meet demand and will continue to monitor. We anticipate service to return to normal soon. Next (and hopefully last) update in 30 minutes.Feb 28, 15:16 PST Monitoring - AWS S3 is operating normally again. We will continue to bring additional resources online to process the backlog of builds and continue to monitor the situation closely. We'll update again in 30 minutes.Feb 28, 14:15 PST Update - We are continuing to work on the backlog of builds while monitoring the AWS S3 status. We will update again in 30 minutes.Feb 28, 13:42 PST Update - We're continuing to see improvement; however, our systems are still impacted as a result of the S3 issues. Next update in 30 minutes. Thanks for your patience as we continue to monitor the situation.Feb 28, 13:02 PST Update - We are starting to see signs of improvement. AWS expects to see lower error rates within the hour. Will update again in 30 minutes.Feb 28, 12:13 PST Update - AWS believes they have identified the cause of the S3 issue and are working hard on implementing a fix. We'll update again in 30 minutes.Feb 28, 11:31 PST Update - The AWS S3 availability issue persists. We'll continue to monitor and update again in 30 minutes.Feb 28, 11:14 PST Update - We're continuing to experience issues with AWS S3 and are monitoring the situation closely. We'll update again in 20 minutes. Thank you for your patience.Feb 28, 10:39 PST Update - The issue with AWS S3 is still ongoing; we are working on keeping our fleet ready to respond when the event is over.Feb 28, 10:19 PST Identified - We have identified the issue with our upstream providers and are monitoring the situation.Feb 28, 09:53 PST Update - We are seeing widespread issues with AWS and GitHub which are impacting our ability to handle builds.Feb 28, 09:51 PST Investigating - We are currently investigating this issue.

Last Update: A few months ago

Dashboard UI degraded

Feb 10, 02:40 PST Resolved - We have seen no other issues with the dashboard. Thank you for your patience with this.
Feb 10, 02:16 PST Monitoring - Our downstream API responses have returned to normal and the UI is responsive. We will monitor for a bit to be sure.
Feb 10, 01:50 PST Investigating - The dashboard and builds list are slow to load. We are investigating the issue and will update shortly.

Last Update: A few months ago

GitHub Hooks are delayed

Feb 2, 12:15 PST Resolved - GitHub has reported that the webhook delay issue is resolved. Thank you for your patience.
Feb 2, 11:59 PST Monitoring - GitHub is processing the hook delivery backlog and we are scaling to match the surge.
Feb 2, 11:52 PST Identified - GitHub Status is reporting a delay in the delivery of webhooks, which will cause builds to be missed or slow to start.

Last Update: A few months ago

Slow Github Webhooks

Jan 30, 17:54 PST Resolved - We're no longer experiencing issues with GitHub's API. Please reach out to support@circleci.com if you have any issues.
Jan 30, 17:42 PST Update - We're still seeing slow response times from GitHub's API; however, there are signs of improvement. We'll continue to monitor and update again in 20 minutes.
Jan 30, 17:13 PST Monitoring - We're continuing to monitor slow response times from GitHub's API; we'll update again in 20 minutes.
Jan 30, 16:53 PST Investigating - We are noticing slow response times from GitHub webhooks. We are monitoring and will update again in 20 minutes.

Last Update: A few months ago

Website is slow to load

Jan 27, 11:12 PST Resolved - This incident has been resolved.
Jan 27, 10:01 PST Monitoring - Website response times are back to normal; we will continue to monitor to verify. Thank you.
Jan 27, 09:42 PST Investigating - We are looking into reports of the site being slow to load.

Last Update: A few months ago

Drop in incoming push notifications from GitHub

Jan 26, 08:51 PST Resolved - We are no longer seeing any issues related to GitHub. Please contact support if you have any further issues.
Jan 26, 08:44 PST Update - GitHub hooks and events are coming in again. We will be monitoring the situation.
Jan 26, 08:25 PST Monitoring - We're seeing a drop in incoming notifications from GitHub. We're monitoring the situation and making sure we have capacity for when it is resolved.

Last Update: A few months ago

API and non-CircleCI account builds broken

Jan 19, 17:38 PST Resolved - This incident has been resolved.
Jan 19, 17:13 PST Monitoring - We've resolved the issue with broken builds from API and non-CircleCI accounts, and we'll continue to monitor closely. If you have any builds marked as Not Running, hitting the rebuild button will fix the issue. If you see anything amiss, please reach out to support@circleci.com.
Jan 19, 16:41 PST Investigating - Builds triggered via the API, or pushed by users who don't have a CircleCI account, are currently broken. We have determined the cause and are pushing a fix.
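
For readers unfamiliar with what "builds triggered via API" refers to, the sketch below shows one way a branch build can be triggered programmatically against CircleCI's v1.1 REST API. Treat it as an illustrative assumption rather than part of the incident report: the trigger endpoint shape, the CIRCLE_TOKEN environment variable, and the org/repo/branch names are placeholders to check against the API documentation.

# Hedged sketch: trigger a branch build via CircleCI's v1.1 REST API.
# Assumes POST /project/:vcs-type/:org/:repo/tree/:branch and a personal
# API token in the CIRCLE_TOKEN environment variable (placeholders, not
# taken from the incident report above).
import os

import requests

VCS_TYPE, ORG, REPO, BRANCH = "github", "example-org", "example-repo", "master"

resp = requests.post(
    f"https://circleci.com/api/v1.1/project/{VCS_TYPE}/{ORG}/{REPO}/tree/{BRANCH}",
    params={"circle-token": os.environ["CIRCLE_TOKEN"]},
)
resp.raise_for_status()
print(resp.json().get("build_url"))  # URL of the newly queued build, if returned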

Last Update: A few months ago

Github Outage

Jan 13, 09:27 PST Resolved - We are no longer seeing any issues related to GitHub. Please do contact support if you have any further issues.
Jan 13, 08:43 PST Monitoring - GitHub hooks and events are flowing. We will be monitoring the situation.
Jan 13, 08:27 PST Identified - GitHub is reporting a major outage. We will be unable to process many requests, and our ability to check permissions is severely limited.

Last Update: A few months ago

Users currently unable to follow projects

Jan 11, 17:56 PST Resolved - The follow projects issue is resolved. We're continuing to monitor closely. If you're seeing anything that looks awry, please reach out to support@circleci.com and our support engineers will dig in.
Jan 11, 17:40 PST Monitoring - We've rolled out the fix to the follow projects issue and are closely monitoring. Thanks for your patience, and if you see anything that looks amiss, please reach out to our support engineers at support@circleci.com.
Jan 11, 17:09 PST Identified - We've identified the root issue as an update to the library used to communicate with GitHub. We've identified a fix and are in the process of rolling it out. Thanks for your patience.
Jan 11, 16:57 PST Investigating - We're seeing an issue with users' ability to follow projects. This will primarily affect starting projects that have never built on CircleCI before. We are currently working on a fix and will update shortly.

Last Update: A few months ago

TLS Cert Update Failed

Jan 11, 15:08 PST Resolved - The TLS certificate has been reverted and we have had no reports for 20 minutes, so we are marking this as resolved.
Jan 11, 14:45 PST Identified - The TLS certificate update failed and has been reverted; we are monitoring to ensure the rollback worked.

Last Update: A few months ago

Minor queuing due to AWS API Errors

Jan 10, 14:07 PST Resolved - We're no longer seeing any queued builds or issues with the AWS API. Please don't hesitate to reach out to support@circleci.com if you experience any issues.
Jan 10, 13:22 PST Monitoring - We are seeing infrequent build queueing due to errors from the AWS API. We are only seeing short, intermittent delays for now, but we are monitoring closely.

Last Update: A few months ago

Artifacts temporarily unavailable for download

Dec 25, 09:47 PST Resolved - Everything continues to operate normally.
Dec 25, 09:10 PST Monitoring - Fix is deployed and artifacts are downloading again. Monitoring closely.
Dec 25, 08:51 PST Update - We are currently deploying an expected fix. Will update again within 30 mins.
Dec 25, 08:26 PST Identified - Download of artifacts is currently not working. The issue has been identified and we are currently working on a fix. Will update in 30 mins.

Last Update: A few months ago

Slack slow to respond

Dec 22, 13:30 PST Resolved - This incident has been resolved.
Dec 22, 08:57 PST Monitoring - Slack notifications for build messages may be delayed.

Last Update: A few months ago

Cryptocurrency mining in PRs from forks of open source projects

Between Sunday, December 4th, and Tuesday, December 7th, we saw an increased number of builds that were using CircleCI to mine cryptocurrencies. These builds were triggered by PRs on a multitude of open source repos and were invisible in GitHub's UI, which raised the question of a potential security breach of CircleCI or GitHub, since the commits were visible in the users' repositories when pulling the PR refspecs. After reaching out to GitHub's Security team, we learned that the PRs were invisible in the UI because they had been marked as spammy, but they could still be pushed to and would still generate webhooks.
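
The mechanism behind this is that GitHub exposes every pull request head at the read-only ref refs/pull/<number>/head, which stays fetchable even when the PR is hidden from the UI. The snippet below is a minimal sketch of how such commits can be inspected by fetching that refspec into a scratch repository; the owner, repo, and PR number are illustrative placeholders, not values from this incident.

# Hedged sketch: fetch a pull request's head ref into a scratch repo and
# list its commits. OWNER, REPO, and PR_NUMBER are illustrative placeholders.
import subprocess
import tempfile

OWNER, REPO, PR_NUMBER = "example-owner", "example-repo", 123
refspec = f"refs/pull/{PR_NUMBER}/head:inspect-pr-{PR_NUMBER}"

with tempfile.TemporaryDirectory() as workdir:
    # Create an empty repository to fetch into.
    subprocess.run(["git", "init", "-q", workdir], check=True)
    # Fetch the (possibly UI-hidden) PR head as a local branch.
    subprocess.run(
        ["git", "-C", workdir, "fetch",
         f"https://github.com/{OWNER}/{REPO}.git", refspec],
        check=True,
    )
    # Show the commits that arrived with the PR.
    subprocess.run(
        ["git", "-C", workdir, "log", "--oneline", f"inspect-pr-{PR_NUMBER}"],
        check=True,
    )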

Last Update: A few months ago

NodeJS slow/unable to download

Dec 9, 09:24 PST Resolved - The NodeJS issue has been resolved by their infra team. We appreciate everyone's patience, and please do reach out to support if needed.
Dec 9, 09:00 PST Monitoring - The NodeJS infra team has reported the issue as solved. We will monitor and update in 20 minutes - thank you for your patience in this matter.
Dec 9, 08:36 PST Update - The NodeJS fix is ongoing; we will continue to monitor and update as needed.
Dec 9, 08:12 PST Update - The NodeJS team has reported that a fix for the issue is being deployed; we will continue to monitor and update as needed.
Dec 9, 08:02 PST Update - We are monitoring the NodeJS situation (see https://github.com/nodejs/build/issues/562) and will update when the upstream status changes.
Dec 9, 07:33 PST Identified - The NodeJS team has identified the issue and is working with DO to resolve it (see more here: https://github.com/nodejs/build/issues/562#issuecomment-26604062).
Dec 9, 07:15 PST Investigating - We are investigating an issue where NodeJS is very slow to download or unable to be downloaded. We'll update again as soon as we have more information.

Last Update: A few months ago

Linux builds queueing

Nov 23, 14:33 PST Resolved - This incident has been resolved.
Nov 23, 14:22 PST Update - We're continuing to monitor the build queues closely to ensure they drain. We'll update again in ~10 minutes. If you see anything that looks out of the ordinary, please reach out to us at support@circleci.com.
Nov 23, 13:49 PST Monitoring - We've deployed a fix and are watching closely to ensure the situation is resolved. Thanks for your patience.
Nov 23, 13:40 PST Identified - We believe we've identified the issue and are working on a fix. We've deployed more resources to help drain the queue and will update again in ~20 minutes.
Nov 23, 13:14 PST Investigating - We are currently investigating intermittent, higher-than-average build queues and will update as soon as we have more information.

Last Update: A few months ago

UI slow to respond

Nov 21, 10:17 PST Resolved - This incident has been resolved.
Nov 21, 09:50 PST Monitoring - We've resolved the issue with the slow API request responses. We'll continue to monitor closely.
Nov 21, 09:25 PST Identified - We've identified the root cause and have deployed a change. We're monitoring to ensure the change is appropriate and will update again in ~20 minutes.
Nov 21, 08:53 PST Investigating - API request responses are slow. We're looking into the cause and will update as soon as we know more.

Last Update: A few months ago

OS X builds queueing

Nov 15, 15:40 PST Resolved - The OS X run queue has been fully drained and services are running normally. Thank you for your patience as we dealt with this incident.
Nov 15, 15:18 PST Update - The OS X queue is continuing to drain; we will continue to monitor until it reaches zero. We will update again in about 20 minutes.
Nov 15, 14:55 PST Monitoring - We have fixed the underlying issue and are monitoring the OS X queue until it drains. We will update in 20 minutes.
Nov 15, 14:37 PST Identified - We have identified a possible reason for the OS X run queue and will update again in 20 minutes, or sooner once we have confirmed.
Nov 15, 14:14 PST Update - We are still exploring the cause of the increase in the run queue. We will update again in 20 minutes.
Nov 15, 13:45 PST Investigating - We're seeing a spike in the OS X run queue and are investigating.

Last Update: A few months ago

CircleCI website showing 503 errors

Nov 3, 00:22 PDT Resolved - This incident has been resolved.
Nov 2, 23:43 PDT Update - We're continuing to monitor closely, and will update again in ~20 minutes or as more info is available.
Nov 2, 23:05 PDT Monitoring - We think we've identified the issue and have implemented a fix. We're continuing to monitor closely, and will update again in ~20 minutes or as more info is available.
Nov 2, 22:39 PDT Update - We're continuing to dig into the issues we're seeing with our site load balancer. CircleCI.com is back up, and we're investigating to determine the root cause of the issue.
Nov 2, 22:11 PDT Update - We're continuing to dig into the site loading issues. We'll update again in ~20 minutes or as soon as new information becomes available.
Nov 2, 21:46 PDT Investigating - We're investigating why the CircleCI website is failing to load. We're looking into the issue and will update with more information as soon as we have it.

Last Update: A few months ago

Lengthy iOS queue

Oct 26, 21:37 PDT Resolved - This incident has been resolved.
Oct 26, 21:37 PDT Update - The backlog has been drained and builds are flowing normally. We truly appreciate everyone's patience and support.
Oct 26, 18:52 PDT Update - We continue to work to clear the backlog from today's earlier outage. We appreciate your patience, and will update again in ~25 min.
Oct 26, 18:24 PDT Update - We're continuing to work to clear the backlog from today's earlier iOS outage. We'll update again in ~25 min.
Oct 26, 17:41 PDT Update - We continue to work to clear the backlog from today's earlier iOS outage. We appreciate your patience, and will update again in ~25 min.
Oct 26, 17:16 PDT Update - We're continuing to work to clear the backlog from today's earlier iOS outage. We'll update again in ~20 min.
Oct 26, 16:58 PDT Update - We continue to work to clear the backlog from today's earlier outage. We appreciate your patience, and will update again in ~20 min.
Oct 26, 16:17 PDT Update - We're continuing to work to clear the backlog from today's earlier outage. We'll update again in ~20 min.
Oct 26, 15:51 PDT Update - We're continuing to work to clear the large backlog from today's earlier outage, which, coupled with our normal queue, is causing the queue time to be lengthier than usual. We'll update again in ~20 min.
Oct 26, 15:25 PDT Update - We continue to work to clear the large backlog from today's earlier outage, which, combined with our normal load, is causing the queue time to be lengthier than usual. We'll update again in ~20 min.
Oct 26, 15:04 PDT Update - We're continuing to work to clear the rather large backlog from today's earlier outage, which, combined with our normal load, is causing the queue time to be lengthier than usual. We'll update again in ~20 min.
Oct 26, 14:33 PDT Monitoring - Today's earlier outage generated a rather large backlog which, combined with our normal load, is causing our queue time to be lengthier than usual. We're working to clear the queue as quickly as possible.

Last Update: A few months ago

Lengthy iOS queue

Oct 26, 18:52 PDT Update - We continue to work to clear the backlog from today's earlier outage. We appreciate your patience, and will update again in ~25 min.Oct 26, 18:24 PDT Update - We're continuing to work to clear the backlog from today's earlier iOS outage. We'll update again in ~25 min.Oct 26, 17:41 PDT Update - We continue to work to clear the backlog from today's earlier iOS outage. We appreciate your patience, and will update again in ~25 min.Oct 26, 17:16 PDT Update - We're continuing to work to clear the backlog from today's earlier iOS outage. We'll update again in ~20 min.Oct 26, 16:58 PDT Update - We continue to work to clear the backlog from today's earlier outage. We appreciate your patience, and will update again in ~20 min.Oct 26, 16:17 PDT Update - We're continuing to work to clear the backlog from today's earlier outage. We'll update again in ~20 min.Oct 26, 15:51 PDT Update - We're continuing to work to clear the large backlog from today's earlier outage, which, coupled with our normal queue, is causing the queue time to be lengthier than usual. We'll update again in ~20 min.Oct 26, 15:25 PDT Update - We continue to work to clear the large backlog from today's earlier outage, which, combined with our normal load, is causing the queue time to be lengthier than usual. We'll update again in ~20 min.Oct 26, 15:04 PDT Update - We're continuing to work to clear the rather large backlog from today's earlier outage, which combined with our normal load is causing the queue time to be lengthier than usual. We'll update again in ~20 min.Oct 26, 14:33 PDT Monitoring - Today’s earlier outage generated a rather large backlog which combined with our normal load is causing our queue time to be lengthier than usual. We’re working to clear the queue as quickly as possible.

Last Update: A few months ago

Lengthy iOS queue

Oct 26, 18:24 PDT Update - We're continuing to work to clear the backlog from today's earlier iOS outage. We'll update again in ~25 min.Oct 26, 17:41 PDT Update - We continue to work to clear the backlog from today's earlier iOS outage. We appreciate your patience, and will update again in ~25 min.Oct 26, 17:16 PDT Update - We're continuing to work to clear the backlog from today's earlier iOS outage. We'll update again in ~20 min.Oct 26, 16:58 PDT Update - We continue to work to clear the backlog from today's earlier outage. We appreciate your patience, and will update again in ~20 min.Oct 26, 16:17 PDT Update - We're continuing to work to clear the backlog from today's earlier outage. We'll update again in ~20 min.Oct 26, 15:51 PDT Update - We're continuing to work to clear the large backlog from today's earlier outage, which, coupled with our normal queue, is causing the queue time to be lengthier than usual. We'll update again in ~20 min.Oct 26, 15:25 PDT Update - We continue to work to clear the large backlog from today's earlier outage, which, combined with our normal load, is causing the queue time to be lengthier than usual. We'll update again in ~20 min.Oct 26, 15:04 PDT Update - We're continuing to work to clear the rather large backlog from today's earlier outage, which combined with our normal load is causing the queue time to be lengthier than usual. We'll update again in ~20 min.Oct 26, 14:33 PDT Monitoring - Today’s earlier outage generated a rather large backlog which combined with our normal load is causing our queue time to be lengthier than usual. We’re working to clear the queue as quickly as possible.

Last Update: A few months ago

Lengthy iOS queue

Oct 26, 17:41 PDT Update - We continue to work to clear the backlog from today's earlier iOS outage. We appreciate your patience, and will update again in ~25 min.Oct 26, 17:16 PDT Update - We're continuing to work to clear the backlog from today's earlier iOS outage. We'll update again in ~20 min.Oct 26, 16:58 PDT Update - We continue to work to clear the backlog from today's earlier outage. We appreciate your patience, and will update again in ~20 min.Oct 26, 16:17 PDT Update - We're continuing to work to clear the backlog from today's earlier outage. We'll update again in ~20 min.Oct 26, 15:51 PDT Update - We're continuing to work to clear the large backlog from today's earlier outage, which, coupled with our normal queue, is causing the queue time to be lengthier than usual. We'll update again in ~20 min.Oct 26, 15:25 PDT Update - We continue to work to clear the large backlog from today's earlier outage, which, combined with our normal load, is causing the queue time to be lengthier than usual. We'll update again in ~20 min.Oct 26, 15:04 PDT Update - We're continuing to work to clear the rather large backlog from today's earlier outage, which combined with our normal load is causing the queue time to be lengthier than usual. We'll update again in ~20 min.Oct 26, 14:33 PDT Monitoring - Today’s earlier outage generated a rather large backlog which combined with our normal load is causing our queue time to be lengthier than usual. We’re working to clear the queue as quickly as possible.

Last Update: A few months ago

Failing iOS builds due to network connectivity/latency issues

Oct 26, 14:32 PDT Resolved - This incident has been resolved.
Oct 26, 14:18 PDT Update - We're continuing to monitor the queue closely as it drains. iOS users can expect delays as we clear the backlog. Will update again in ~20 min.
Oct 26, 13:47 PDT Update - We continue to monitor the queue closely as it drains. iOS users can expect delays as we clear the backlog. Will update again in ~20 min.
Oct 26, 13:25 PDT Update - We have cleared the root incident in both the OSX outage and our fleet scaling. We're monitoring the queue closely, and iOS users can expect delays as we clear the backlog. We're working to get it drained as quickly as possible, and will update again in ~20 min.
Oct 26, 12:53 PDT Update - We're continuing to monitor closely as we clear the queue. Will update again in ~20 min.
Oct 26, 12:28 PDT Update - We're continuing to closely monitor the results of the fix to our fleet as well as the queue. We'll update again in ~20 min.
Oct 26, 11:54 PDT Monitoring - A fix has been implemented and we are monitoring the results.
Oct 26, 11:50 PDT Update - We're closely monitoring the fix we deployed to our fleet. We will continue to monitor and will update again in ~20 min.
Oct 26, 11:24 PDT Update - We have identified the issue that was causing our fleet to not process builds at its proper rate and have deployed a fix. We will be monitoring this change and will update in ~20 min.
Oct 26, 10:55 PDT Update - We are working to get our fleet restored to its normal level and to clear the extreme queue buildup that occurred during the earlier outage. We appreciate your patience, and will update again in ~20 min.
Oct 26, 10:35 PDT Update - We're continuing to work to clear the extreme queue buildup that occurred during the earlier outage. We appreciate your patience, and will update again in ~20 min.
Oct 26, 10:12 PDT Update - We're continuing to work to clear the queue buildup that occurred during the earlier outage. We're monitoring it closely, and will update again in ~20 min.
Oct 26, 09:37 PDT Identified - Our OSX infrastructure provider has resolved our network outage. We are currently working to clear the queue and are monitoring it closely as it drains. We'll update again in ~20 min.
Oct 26, 09:14 PDT Update - We continue to work with our OSX infrastructure provider to discover the root cause of the iOS build failures. We'll update again in ~20 min or as soon as we have new information.
Oct 26, 08:34 PDT Update - We're continuing to work with our OSX infrastructure provider on the network issues causing iOS build failures. Thanks for your continued patience; we'll update again in ~20 min.
Oct 26, 08:08 PDT Update - We're still digging into the connectivity and latency issues in our Mac datacenters with our OSX infrastructure provider. We appreciate your patience, and will update again in ~20 min.
Oct 26, 07:41 PDT Update - We're continuing to work closely with our OSX infrastructure provider to investigate the network issues causing iOS build failures. Will update again in ~20 min.
Oct 26, 07:16 PDT Update - We continue to work with our OSX infrastructure provider to dig into the network connectivity and latency issues in our Mac datacenters. Will update again in ~20 min or as new information becomes available.
Oct 26, 06:51 PDT Update - We're continuing to work with our OSX infrastructure provider to investigate the network issues causing iOS build failures. Will update again as soon as we have more information.
Oct 26, 05:55 PDT Investigating - We are experiencing ongoing network connectivity and latency issues in our Mac datacenters. All network requests are impacted by this issue, causing network-related iOS build failures.

Last Update: A few months ago

Upstream DNS errors

Oct 21, 12:47 PDT Resolved - We're seeing builds flow through our system as our upstream vendors finish deploying their solutions. We'll monitor closely; please reach out if you're seeing anything that looks amiss.
Oct 21, 11:29 PDT Update - We continue to monitor the status of the DDoS attack on our upstream DNS provider. We'll update if new information becomes available.
Oct 21, 10:48 PDT Update - We're continuing to monitor our upstream DNS provider, which is experiencing a DDoS attack, and thank you for your patience. We'll update if new information becomes available.
Oct 21, 09:53 PDT Update - We're continuing to monitor our upstream DNS provider, which is experiencing a DDoS attack. The DDoS is preventing us from resolving names managed by that provider and is impacting our build containers and our backend services, causing website errors and preventing us from running builds. We'll update as more information becomes available.
Oct 21, 09:19 PDT Identified - One of our upstream DNS providers is experiencing a DDoS attack, preventing us from resolving names managed by that provider. This is impacting our build containers and our backend services, causing website errors and preventing us from running builds. We'll update as more information becomes available.

Last Update: A few months ago
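
The updates above describe name-resolution failures caused by a DDoS against an upstream DNS provider. As a purely illustrative sketch (not part of CircleCI's tooling or this incident's response), the snippet below shows how a build step might verify that a dependency's hostname resolves before proceeding, retrying briefly to ride out transient resolver failures; the hostname, attempt count, and delay are hypothetical.

    import socket
    import time

    def resolves(hostname, attempts=3, delay=2.0):
        """Return True if `hostname` resolves within `attempts` tries."""
        for attempt in range(attempts):
            try:
                socket.getaddrinfo(hostname, 443)  # any port works for a lookup
                return True
            except socket.gaierror:
                if attempt < attempts - 1:
                    time.sleep(delay * (attempt + 1))  # simple linear backoff
        return False

    if __name__ == "__main__":
        # "example.com" stands in for any host a build needs to reach.
        print("ok" if resolves("example.com") else "DNS lookup failed")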

Upstream DNS errors

Oct 21, 07:04 PDT Resolved - This incident has been resolved.
Oct 21, 06:18 PDT Update - Affected DNS names are resolving again and builds are running. We are monitoring and preparing to scale as systems recover and load increases.
Oct 21, 05:46 PDT Monitoring - One of our upstream DNS providers is experiencing a DDoS attack preventing us from resolving names managed by that provider. This is impacting our build containers and our backend services, causing website errors and preventing us from running builds. We are looking into actions we can take to mitigate.
Oct 21, 04:48 PDT Investigating - We're experiencing errors resolving DNS names on our backend that are causing errors on our website.

Last Update: A few months ago

Unable to Login

Aug 24, 20:06 PDT Resolved - We are marking this incident as resolved; if any questions remain, please contact our Support Team.
Aug 24, 19:22 PDT Monitoring - We've rolled out a solution to the login issue and are monitoring the situation. Please see https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details; will update again in 30 minutes.
Aug 24, 18:32 PDT Update - Our engineers are continuing to fix the issue and implement a solution. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details; will update again in 20 minutes.
Aug 24, 17:04 PDT Update - We have discovered what is causing the issue and are working on how to fix it. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.
Aug 24, 15:38 PDT Update - We are continuing to explore a solution to this issue and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.
Aug 24, 14:42 PDT Update - We are still working on a solution for this event and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.
Aug 24, 14:19 PDT Update - We have identified the cause and are working on a solution. We will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.
Aug 24, 13:39 PDT Update - We have identified a potential location that is causing the GitHub login issue and are working on it. Will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.
Aug 24, 13:07 PDT Update - We are still working on resolving this issue and will update again in 20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.
Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Unable to Login

Aug 24, 19:22 PDT Monitoring - We've rolled out a solution to the login issue and are monitoring the situation. Please see https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details ~ will update again in 30 minutes.Aug 24, 18:32 PDT Update - Our engineers are continuing to fix the issue and implement a solution. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details ~ will update again in 20 minutes.Aug 24, 17:04 PDT Update - We have discovered what is causing the issue and are working on how to fix it. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 15:38 PDT Update - We are continuing to explore a solution to this issue and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for detailsAug 24, 14:42 PDT Update - We are still working on a solution for this event and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 14:19 PDT Update - We have identified the cause and are working on a solution. We will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:39 PDT Update - We have identified a potential location that is causing the GitHub Login issue and are working on it. Will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:07 PDT Update - We are still working on resolving this issue and will update again in 20 minutes - https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Unable to Login

Aug 24, 18:32 PDT Update - Our engineers are continuing to fix the issue and implement a solution. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details ~ will update again in 20 minutes.Aug 24, 17:04 PDT Update - We have discovered what is causing the issue and are working on how to fix it. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 15:38 PDT Update - We are continuing to explore a solution to this issue and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for detailsAug 24, 14:42 PDT Update - We are still working on a solution for this event and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 14:19 PDT Update - We have identified the cause and are working on a solution. We will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:39 PDT Update - We have identified a potential location that is causing the GitHub Login issue and are working on it. Will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:07 PDT Update - We are still working on resolving this issue and will update again in 20 minutes - https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Unable to Login

Aug 24, 17:04 PDT Update - We have discovered what is causing the issue and are working on how to fix it. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 15:38 PDT Update - We are continuing to explore a solution to this issue and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for detailsAug 24, 14:42 PDT Update - We are still working on a solution for this event and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 14:19 PDT Update - We have identified the cause and are working on a solution. We will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:39 PDT Update - We have identified a potential location that is causing the GitHub Login issue and are working on it. Will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:07 PDT Update - We are still working on resolving this issue and will update again in 20 minutes - https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Unable to Login

Aug 24, 15:38 PDT Update - We are continuing to explore a solution to this issue and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for detailsAug 24, 14:42 PDT Update - We are still working on a solution for this event and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 14:19 PDT Update - We have identified the cause and are working on a solution. We will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:39 PDT Update - We have identified a potential location that is causing the GitHub Login issue and are working on it. Will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:07 PDT Update - We are still working on resolving this issue and will update again in 20 minutes - https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Unable to Login

Aug 24, 14:42 PDT Update - We are still working on a solution for this event and will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 14:19 PDT Update - We have identified the cause and are working on a solution. We will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:39 PDT Update - We have identified a potential location that is causing the GitHub Login issue and are working on it. Will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:07 PDT Update - We are still working on resolving this issue and will update again in 20 minutes - https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Unable to Login

Aug 24, 14:19 PDT Update - We have identified the cause and are working on a solution. We will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:39 PDT Update - We have identified a potential location that is causing the GitHub Login issue and are working on it. Will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:07 PDT Update - We are still working on resolving this issue and will update again in 20 minutes - https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Unable to Login

Aug 24, 13:39 PDT Update - We have identified a potential location that is causing the GitHub Login issue and are working on it. Will update again in ~20 minutes. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2 for details.Aug 24, 13:07 PDT Update - We are still working on resolving this issue and will update again in 20 minutes - https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Unable to Login

Aug 24, 13:07 PDT Update - We are still working on resolving this issue and will update again in 20 minutes - https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/2Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Unable to Login

Aug 24, 12:12 PDT Identified - We are seeing an increase in login issues from expired or revoked GitHub tokens. See https://discuss.circleci.com/t/incident-unable-to-login-due-to-revoked-github-token/5856/1 for details.

Last Update: A few months ago

Missing GitHub webhooks

Aug 18, 18:10 PDT Resolved - GitHub webhooks are flowing. No further issues anticipated. We will continue to closely monitor but please reach out to support@circleci.com if you're having any trouble.Aug 18, 17:48 PDT Monitoring - We're receiving GitHub webhooks and will continue to monitor closely. We'll update again in 20 minutes.Aug 18, 17:40 PDT Investigating - We are currently investigating an issue with missing GitHub webhooks ~ we will update in 20 minutes.

Last Update: A few months ago

Missing GitHub webhooks

Aug 18, 17:48 PDT Monitoring - We're receiving GitHub webhooks and will continue to monitor closely. We'll update again in 20 minutes.Aug 18, 17:40 PDT Investigating - We are currently investigating an issue with missing GitHub webhooks ~ we will update in 20 minutes.

Last Update: A few months ago

Missing GitHub webhooks

Aug 18, 17:40 PDT Investigating - We are currently investigating an issue with missing GitHub webhooks ~ we will update in 20 minutes.

Last Update: A few months ago

Queued builds

Jul 29, 12:35 PDT Resolved - We have fixed the issue and will continue to closely monitor the build system. Please don't hesitate to contact support at support@circleci.com if you experience any further issues.Jul 29, 12:14 PDT Monitoring - The queuing has returned to normal, we will continue to monitor the situation.Jul 29, 12:03 PDT Investigating - We are currently investigating higher than average build queues, will update as soon as we have more information.

Last Update: A few months ago

Queued builds

Jul 29, 12:14 PDT Monitoring - The queuing has returned to normal, we will continue to monitor the situation.Jul 29, 12:03 PDT Investigating - We are currently investigating higher than average build queues, will update as soon as we have more information.

Last Update: A few months ago

Queued builds

Jul 29, 12:03 PDT Investigating - We are currently investigating higher than average build queues, will update as soon as we have more information.

Last Update: A few months ago

OSX Builds Queueing

Jul 21, 16:48 PDT Resolved - The OSX build system is back to normal, please reach out to support at support@circleci.com if you experience any further issuesJul 21, 16:22 PDT Monitoring - The OS X queue has drained. We're monitoring closely. If you're continuing to see anything out of the ordinary, please reach out to our support team at support@circleci.com and they will take a look.Jul 21, 16:04 PDT Update - We're still experiencing a higher OS X load than usual. We've added some queue balancing to help share the resources across builds, and we're continuing to search for root cause. We'll update again in ~20 minutes.Jul 21, 15:42 PDT Investigating - We're currently experiencing longer queue times than usual with OSX builds due to a large spike in build traffic. We're investigating possible solutions, and will update in ~20 minutes.

Last Update: A few months ago

OSX Builds Queueing

Jul 21, 16:22 PDT Monitoring - The OS X queue has drained. We're monitoring closely. If you're continuing to see anything out of the ordinary, please reach out to our support team at support@circleci.com and they will take a look.Jul 21, 16:04 PDT Update - We're still experiencing a higher OS X load than usual. We've added some queue balancing to help share the resources across builds, and we're continuing to search for root cause. We'll update again in ~20 minutes.Jul 21, 15:42 PDT Investigating - We're currently experiencing longer queue times than usual with OSX builds due to a large spike in build traffic. We're investigating possible solutions, and will update in ~20 minutes.

Last Update: A few months ago

OSX Builds Queueing

Jul 21, 16:04 PDT Update - We're still experiencing a higher OS X load than usual. We've added some queue balancing to help share the resources across builds, and we're continuing to search for root cause. We'll update again in ~20 minutes.Jul 21, 15:42 PDT Investigating - We're currently experiencing longer queue times than usual with OSX builds due to a large spike in build traffic. We're investigating possible solutions, and will update in ~20 minutes.

Last Update: A few months ago

OSX Builds Queueing

Jul 21, 15:42 PDT Investigating - We're currently experiencing longer queue times than usual with OSX builds due to a large spike in build traffic. We're investigating possible solutions, and will update in ~20 minutes.

Last Update: A few months ago

CircleCI Website is offline

Jul 20, 11:49 PDT Resolved - This incident has been resolved.Jul 20, 11:32 PDT Monitoring - The update we deployed is working and we are monitoring closely for any further issues. If you're seeing anything amiss, please reach out to our support engineers at support@circleci.com.Jul 20, 11:14 PDT Update - We have identified the possible cause and have deployed a change to address. We're monitoring that change, and will update again in ~20 minutes or when we have more information.Jul 20, 10:42 PDT Investigating - We’re investigating why our CircleCI website is failing to load.

Last Update: A few months ago

CircleCI Website is offline

Jul 20, 11:32 PDT Monitoring - The update we deployed is working and we are monitoring closely for any further issues. If you're seeing anything amiss, please reach out to our support engineers at support@circleci.com.Jul 20, 11:14 PDT Update - We have identified the possible cause and have deployed a change to address. We're monitoring that change, and will update again in ~20 minutes or when we have more information.Jul 20, 10:42 PDT Investigating - We’re investigating why our CircleCI website is failing to load.

Last Update: A few months ago

CircleCI Website is offline

Jul 20, 11:14 PDT Update - We have identified the possible cause and have deployed a change to address. We're monitoring that change, and will update again in ~20 minutes or when we have more information.Jul 20, 10:42 PDT Investigating - We’re investigating why our CircleCI website is failing to load.

Last Update: A few months ago

CircleCI Website is offline

Jul 20, 10:42 PDT Investigating - We’re investigating why our CircleCI website is failing to load.

Last Update: A few months ago

Builds are queueing

Jul 19, 14:11 PDT Resolved - We're continuing to monitor closely for any further hiccups. If you're seeing anything that looks amiss, please email our support team at support@circleci.com and we'll dig in.Jul 19, 13:42 PDT Update - We're continuing to monitor the queue closely and have allocated extra resources to work through it as quickly as possible following the AWS API connectivity issues. We'll update again in ~20 minutes.Jul 19, 13:19 PDT Monitoring - The AWS API degraded service has cleared. We’re continuing to monitor closely as the queue drains, and will update again in ~20 minutes.Jul 19, 12:50 PDT Update - We’re continuing to monitor AWS’ API connectivity issues as we work through builds. We’ll update again in ~20 minutes or as new information is available.Jul 19, 12:30 PDT Update - We’re continuing to work through builds as we monitor AWS’ API connectivity issues. We’ll update again in ~20 minutes or as new information is available.Jul 19, 12:06 PDT Update - We are continuing to monitor AWS’ API connectivity issues and are scaling to meet queue demand. We’ll update again in ~20 minutes or as new information is available.Jul 19, 11:44 PDT Identified - We have identified the issue with AWS' API. We're scaling up available resources, and are monitoring closely. We'll update again in ~20 minutes.Jul 19, 11:27 PDT Investigating - We are seeing issues within AWS which is causing our run queue to grow.

Last Update: A few months ago

Builds are queueing

Jul 19, 13:42 PDT Update - We're continuing to monitor the queue closely and have allocated extra resources to work through it as quickly as possible following the AWS API connectivity issues. We'll update again in ~20 minutes.Jul 19, 13:19 PDT Monitoring - The AWS API degraded service has cleared. We’re continuing to monitor closely as the queue drains, and will update again in ~20 minutes.Jul 19, 12:50 PDT Update - We’re continuing to monitor AWS’ API connectivity issues as we work through builds. We’ll update again in ~20 minutes or as new information is available.Jul 19, 12:30 PDT Update - We’re continuing to work through builds as we monitor AWS’ API connectivity issues. We’ll update again in ~20 minutes or as new information is available.Jul 19, 12:06 PDT Update - We are continuing to monitor AWS’ API connectivity issues and are scaling to meet queue demand. We’ll update again in ~20 minutes or as new information is available.Jul 19, 11:44 PDT Identified - We have identified the issue with AWS' API. We're scaling up available resources, and are monitoring closely. We'll update again in ~20 minutes.Jul 19, 11:27 PDT Investigating - We are seeing issues within AWS which is causing our run queue to grow.

Last Update: A few months ago

Builds are queueing

Jul 19, 13:19 PDT Monitoring - The AWS API degraded service has cleared. We’re continuing to monitor closely as the queue drains, and will update again in ~20 minutes.Jul 19, 12:50 PDT Update - We’re continuing to monitor AWS’ API connectivity issues as we work through builds. We’ll update again in ~20 minutes or as new information is available.Jul 19, 12:30 PDT Update - We’re continuing to work through builds as we monitor AWS’ API connectivity issues. We’ll update again in ~20 minutes or as new information is available.Jul 19, 12:06 PDT Update - We are continuing to monitor AWS’ API connectivity issues and are scaling to meet queue demand. We’ll update again in ~20 minutes or as new information is available.Jul 19, 11:44 PDT Identified - We have identified the issue with AWS' API. We're scaling up available resources, and are monitoring closely. We'll update again in ~20 minutes.Jul 19, 11:27 PDT Investigating - We are seeing issues within AWS which is causing our run queue to grow.

Last Update: A few months ago

Builds are queueing

Jul 19, 12:50 PDT Update - We’re continuing to monitor AWS’ API connectivity issues as we work through builds. We’ll update again in ~20 minutes or as new information is available.Jul 19, 12:30 PDT Update - We’re continuing to work through builds as we monitor AWS’ API connectivity issues. We’ll update again in ~20 minutes or as new information is available.Jul 19, 12:06 PDT Update - We are continuing to monitor AWS’ API connectivity issues and are scaling to meet queue demand. We’ll update again in ~20 minutes or as new information is available.Jul 19, 11:44 PDT Identified - We have identified the issue with AWS' API. We're scaling up available resources, and are monitoring closely. We'll update again in ~20 minutes.Jul 19, 11:27 PDT Investigating - We are seeing issues within AWS which is causing our run queue to grow.

Last Update: A few months ago

Builds are queueing

Jul 19, 12:30 PDT Update - We’re continuing to work through builds as we monitor AWS’ API connectivity issues. We’ll update again in ~20 minutes or as new information is available.Jul 19, 12:06 PDT Update - We are continuing to monitor AWS’ API connectivity issues and are scaling to meet queue demand. We’ll update again in ~20 minutes or as new information is available.Jul 19, 11:44 PDT Identified - We have identified the issue with AWS' API. We're scaling up available resources, and are monitoring closely. We'll update again in ~20 minutes.Jul 19, 11:27 PDT Investigating - We are seeing issues within AWS which is causing our run queue to grow.

Last Update: A few months ago

Builds are queueing

Jul 19, 12:06 PDT Update - We are continuing to monitor AWS’ API connectivity issues and are scaling to meet queue demand. We’ll update again in ~20 minutes or as new information is available.Jul 19, 11:44 PDT Identified - We have identified the issue with AWS' API. We're scaling up available resources, and are monitoring closely. We'll update again in ~20 minutes.Jul 19, 11:27 PDT Investigating - We are seeing issues within AWS which is causing our run queue to grow.

Last Update: A few months ago

Builds are queueing

Jul 19, 11:44 PDT Identified - We have identified the issue with AWS' API. We're scaling up available resources, and are monitoring closely. We'll update again in ~20 minutes.Jul 19, 11:27 PDT Investigating - We are seeing issues within AWS which is causing our run queue to grow.

Last Update: A few months ago

Builds are queueing

Jul 19, 11:27 PDT Investigating - We are seeing issues within AWS which is causing our run queue to grow.

Last Update: A few months ago

CircleCI web site is offline

Jul 15, 18:24 PDT Resolved - This incident has been resolved.Jul 15, 17:53 PDT Monitoring - We have deployed a fix and the site is back online. We will continue monitoring for any issues.Jul 15, 17:45 PDT Identified - We have identified the core issue and are working to fix it. Will update again in ~20 minutes.Jul 15, 17:41 PDT Investigating - We are exploring why our CircleCI website is offline.

Last Update: A few months ago

CircleCI web site is offline

Jul 15, 17:53 PDT Monitoring - We have deployed a fix and the site is back online. We will continue monitoring for any issues.Jul 15, 17:45 PDT Identified - We have identified the core issue and are working to fix it. Will update again in ~20 minutes.Jul 15, 17:41 PDT Investigating - We are exploring why our CircleCI website is offline.

Last Update: A few months ago

CircleCI web site is offline

Jul 15, 17:45 PDT Identified - We have identified the core issue and are working to fix it. Will update again in ~20 minutes.Jul 15, 17:41 PDT Investigating - We are exploring why our CircleCI website is offline.

Last Update: A few months ago

CircleCI web site is offline

Jul 15, 17:41 PDT Investigating - We are exploring why our CircleCI website is offline.

Last Update: A few months ago

GitHub Outage

Jul 12, 10:36 PDT Resolved - GitHub events have been flowing and their status page shows them as Operational. We will continue to monitor things closely and please do reach out to our Support team if you have any questions.Jul 12, 10:04 PDT Update - We are continuing to closely monitor GitHub's status and recovery, and we are scaling to meet the expected influx of builds. Another update in 20(ish) minutes.Jul 12, 09:43 PDT Monitoring - GitHub is actively working to restore service and while you may be able to log in we are still not receiving build notifications. Will update in 20 minutes or when status changes.Jul 12, 09:25 PDT Identified - GitHub is currently showing an outage which impacts our ability to handle logins and receive build events, we are monitoring their status closely.

Last Update: A few months ago

GitHub Outage

Jul 12, 10:04 PDT Update - We are continuing to closely monitor GitHub's status and recovery, and we are scaling to meet the expected influx of builds. Another update in 20(ish) minutes.Jul 12, 09:43 PDT Monitoring - GitHub is actively working to restore service and while you may be able to log in we are still not receiving build notifications. Will update in 20 minutes or when status changes.Jul 12, 09:25 PDT Identified - GitHub is currently showing an outage which impacts our ability to handle logins and receive build events, we are monitoring their status closely.

Last Update: A few months ago

GitHub Outage

Jul 12, 09:43 PDT Monitoring - GitHub is actively working to restore service and while you may be able to log in we are still not receiving build notifications. Will update in 20 minutes or when status changes.Jul 12, 09:25 PDT Identified - GitHub is currently showing an outage which impacts our ability to handle logins and receive build events, we are monitoring their status closely.

Last Update: A few months ago

GitHub Outage

Jul 12, 09:25 PDT Identified - GitHub is currently showing an outage which impacts our ability to handle logins and receive build events, we are monitoring their status closely.

Last Update: A few months ago

Increased OS X queue times

Jul 1, 00:09 PDT Resolved - We've fully recovered.Jun 30, 23:18 PDT Monitoring - We've recovered and will continue to monitor. Please let us know if you see unexpectedly high queue times.Jun 30, 22:54 PDT Investigating - We lost several of our OS X builder instances and so we're running at reduced capacity. We anticipate increased queue times until we recover.

Last Update: A few months ago

Increased OS X queue times

Jun 30, 23:18 PDT Monitoring - We've recovered and will continue to monitor. Please let us know if you see unexpectedly high queue times.Jun 30, 22:54 PDT Investigating - We lost several of our OS X builder instances and so we're running at reduced capacity. We anticipate increased queue times until we recover.

Last Update: A few months ago

Increased OS X queue times

Jun 30, 22:54 PDT Investigating - We lost several of our OS X builder instances and so we're running at reduced capacity. We anticipate increased queue times until we recover.

Last Update: A few months ago

Delayed Builds

Jun 29, 23:07 PDT Resolved - Builds continue to be processed normally. Please let us know in support if you notice any issues.Jun 29, 22:32 PDT Monitoring - We've resolved the issues caused by AWS network incident and builds are back on track. Continuing to monitor.Jun 29, 22:08 PDT Identified - A recent network incident in AWS has left us with some stuck builds. We're cleaning up now. Update in 20 mins.

Last Update: A few months ago

Delayed Builds

Jun 29, 22:32 PDT Monitoring - We've resolved the issues caused by AWS network incident and builds are back on track. Continuing to monitor.Jun 29, 22:08 PDT Identified - A recent network incident in AWS has left us with some stuck builds. We're cleaning up now. Update in 20 mins.

Last Update: A few months ago

Delayed Builds

Jun 29, 22:08 PDT Identified - A recent network incident in AWS has left us with some stuck builds. We're cleaning up now. Update in 20 mins.

Last Update: A few months ago

CircleCI website unavailable

Jun 27, 03:33 PDT Resolved - We’re continuing to monitor the queue closely. If you see anything that looks unusual, please reach out to support@circleci.com. Thanks for your patience.Jun 27, 03:08 PDT Monitoring - Service has been restored and we’re seeing builds starting again. We’ve added capacity to work through any queue and we’re monitoring closely.Jun 27, 02:46 PDT Identified - We’ve identified the issue and are working to bring the service back online. We're scaling to meet demand as soon as we've fully returned. We'll update again in ~20 minutes or as soon as we know more. Thanks for your patience.Jun 27, 02:31 PDT Update - We’re investigating the database issue we’re seeing, which is causing CircleCI to be unreachable and builds not to run. We’ll update in ~20 minutes or as soon as we know more details.Jun 27, 02:08 PDT Investigating - Our website is currently offline, we are working on restoring its availability. Thank you for your patience. We'll update once we know more.

Last Update: A few months ago

CircleCI website unavailable

Jun 27, 03:08 PDT Monitoring - Service has been restored and we’re seeing builds starting again. We’ve added capacity to work through any queue and we’re monitoring closely.Jun 27, 02:46 PDT Identified - We’ve identified the issue and are working to bring the service back online. We're scaling to meet demand as soon as we've fully returned. We'll update again in ~20 minutes or as soon as we know more. Thanks for your patience.Jun 27, 02:31 PDT Update - We’re investigating the database issue we’re seeing, which is causing CircleCI to be unreachable and builds not to run. We’ll update in ~20 minutes or as soon as we know more details.Jun 27, 02:08 PDT Investigating - Our website is currently offline, we are working on restoring its availability. Thank you for your patience. We'll update once we know more.

Last Update: A few months ago

CircleCI website unavailable

Jun 27, 02:46 PDT Identified - We’ve identified the issue and are working to bring the service back online. We're scaling to meet demand as soon as we've fully returned. We'll update again in ~20 minutes or as soon as we know more. Thanks for your patience.Jun 27, 02:31 PDT Update - We’re investigating the database issue we’re seeing, which is causing CircleCI to be unreachable and builds not to run. We’ll update in ~20 minutes or as soon as we know more details.Jun 27, 02:08 PDT Investigating - Our website is currently offline, we are working on restoring its availability. Thank you for your patience. We'll update once we know more.

Last Update: A few months ago

CircleCI website unavailable

Jun 27, 02:31 PDT Update - We’re investigating the database issue we’re seeing, which is causing CircleCI to be unreachable and builds not to run. We’ll update in ~20 minutes or as soon as we know more details.Jun 27, 02:08 PDT Investigating - Our website is currently offline, we are working on restoring its availability. Thank you for your patience. We'll update once we know more.

Last Update: A few months ago

CircleCI website unavailable

Jun 27, 02:08 PDT Investigating - Our website is currently offline, we are working on restoring its availability. Thank you for your patience. We'll update once we know more.

Last Update: A few months ago

Requests to circle-artifacts.com timing out

Jun 15, 13:28 PDT Resolved - We've rolled back. Please notify us if you see further issues.Jun 15, 13:16 PDT Investigating - A bug in our artifacts service is causing requests to time out. We're rolling back.

Last Update: A few months ago

Requests to circle-artifacts.com timing out

Jun 15, 13:16 PDT Investigating - A bug in our artifacts service is causing requests to time out. We're rolling back.

Last Update: A few months ago

AWS network issues + backed up builds

Jun 12, 22:50 PDT Resolved - Networking appears to have recovered and builds continue to run smoothly.Jun 12, 22:35 PDT Monitoring - We've cleared the backlog of builds and are monitoring the network situation closely.Jun 12, 22:20 PDT Identified - We're witnessing significant networking issues in AWS, despite nothing identified by AWS at this time. The resulting build failures and retries have caused a backlog of builds. We're adding capacity to get them through, but have our own challenges deploying amidst the network issues.

Last Update: A few months ago

AWS network issues + backed up builds

Jun 12, 22:35 PDT Monitoring - We've cleared the backlog of builds and are monitoring the network situation closely.Jun 12, 22:20 PDT Identified - We're witnessing significant networking issues in AWS, despite nothing identified by AWS at this time. The resulting build failures and retries have caused a backlog of builds. We're adding capacity to get them through, but have our own challenges deploying amidst the network issues.

Last Update: A few months ago

AWS network issues + backed up builds

Jun 12, 22:20 PDT Identified - We're witnessing significant networking issues in AWS, despite nothing identified by AWS at this time. The resulting build failures and retries have caused a backlog of builds. We're adding capacity to get them through, but have our own challenges deploying amidst the network issues.

Last Update: A few months ago

Site Offline

Jun 3, 09:52 PDT Resolved - This incident has been resolved.Jun 3, 09:22 PDT Monitoring - We had a database issue that was recovered with a failover. All services have recovered but monitoring closely.Jun 3, 09:12 PDT Investigating - We are looking into why our site is offline

Last Update: A few months ago

Site Offline

Jun 3, 09:22 PDT Monitoring - We had a database issue that was recovered with a failover. All services have recovered but monitoring closely.Jun 3, 09:12 PDT Investigating - We are looking into why our site is offline

Last Update: A few months ago

Site Offline

Jun 3, 09:12 PDT Investigating - We are looking into why our site is offline

Last Update: A few months ago

Site intermittently failing to load

Jun 1, 19:38 PDT Resolved - We have not seen any site loading errors and are marking this as resolved.Jun 1, 18:54 PDT Monitoring - We have made changes to fix the issue and are monitoring the site. We will update again in ~25 minJun 1, 18:19 PDT Update - We continue to investigate the root issue of the site sporadically being slow or failing to load. We appreciate your understanding, and we'll update again in ~25 min.Jun 1, 17:32 PDT Update - We're continuing to investigate the root cause of the issue causing the site to intermittently fail to load. We'll update again in ~25 minutes.Jun 1, 17:08 PDT Update - We continue to investigate the root cause of the site sporadically failing to load. We appreciate your understanding, and we'll update again in ~25 min.Jun 1, 16:36 PDT Update - We're continuing to investigate the issues causing the site to intermittently fail to load. We'll update again in ~25 minutes. Thank you for your patience.Jun 1, 16:10 PDT Investigating - We're again seeing issues with the site failing to load. We're investigating and will update in ~20 minutes.

Last Update: A few months ago

Site intermittently failing to load

Jun 1, 18:54 PDT Monitoring - We have made changes to fix the issue and are monitoring the site. We will update again in ~25 minJun 1, 18:19 PDT Update - We continue to investigate the root issue of the site sporadically being slow or failing to load. We appreciate your understanding, and we'll update again in ~25 min.Jun 1, 17:32 PDT Update - We're continuing to investigate the root cause of the issue causing the site to intermittently fail to load. We'll update again in ~25 minutes.Jun 1, 17:08 PDT Update - We continue to investigate the root cause of the site sporadically failing to load. We appreciate your understanding, and we'll update again in ~25 min.Jun 1, 16:36 PDT Update - We're continuing to investigate the issues causing the site to intermittently fail to load. We'll update again in ~25 minutes. Thank you for your patience.Jun 1, 16:10 PDT Investigating - We're again seeing issues with the site failing to load. We're investigating and will update in ~20 minutes.

Last Update: A few months ago

Site intermittently failing to load

Jun 1, 18:19 PDT Update - We continue to investigate the root issue of the site sporadically being slow or failing to load. We appreciate your understanding, and we'll update again in ~25 min.Jun 1, 17:32 PDT Update - We're continuing to investigate the root cause of the issue causing the site to intermittently fail to load. We'll update again in ~25 minutes.Jun 1, 17:08 PDT Update - We continue to investigate the root cause of the site sporadically failing to load. We appreciate your understanding, and we'll update again in ~25 min.Jun 1, 16:36 PDT Update - We're continuing to investigate the issues causing the site to intermittently fail to load. We'll update again in ~25 minutes. Thank you for your patience.Jun 1, 16:10 PDT Investigating - We're again seeing issues with the site failing to load. We're investigating and will update in ~20 minutes.

Last Update: A few months ago

Site intermittently failing to load

Jun 1, 17:32 PDT Update - We're continuing to investigate the root cause of the issue causing the site to intermittently fail to load. We'll update again in ~25 minutes.Jun 1, 17:08 PDT Update - We continue to investigate the root cause of the site sporadically failing to load. We appreciate your understanding, and we'll update again in ~25 min.Jun 1, 16:36 PDT Update - We're continuing to investigate the issues causing the site to intermittently fail to load. We'll update again in ~25 minutes. Thank you for your patience.Jun 1, 16:10 PDT Investigating - We're again seeing issues with the site failing to load. We're investigating and will update in ~20 minutes.

Last Update: A few months ago

Site intermittently failing to load

Jun 1, 17:08 PDT Update - We continue to investigate the root cause of the site sporadically failing to load. We appreciate your understanding, and we'll update again in ~25 min.Jun 1, 16:36 PDT Update - We're continuing to investigate the issues causing the site to intermittently fail to load. We'll update again in ~25 minutes. Thank you for your patience.Jun 1, 16:10 PDT Investigating - We're again seeing issues with the site failing to load. We're investigating and will update in ~20 minutes.

Last Update: A few months ago

Site intermittently failing to load

Jun 1, 16:36 PDT Update - We're continuing to investigate the issues causing the site to intermittently fail to load. We'll update again in ~25 minutes. Thank you for your patience.Jun 1, 16:10 PDT Investigating - We're again seeing issues with the site failing to load. We're investigating and will update in ~20 minutes.

Last Update: A few months ago

Site intermittently failing to load

Jun 1, 16:10 PDT Investigating - We're again seeing issues with the site failing to load. We're investigating and will update in ~20 minutes.

Last Update: A few months ago

Site failing to load

Jun 1, 15:17 PDT Resolved - We'll continue to keep monitoring closely. If you see any further issues, please reach out to support@circleci.com.Jun 1, 14:53 PDT Monitoring - We're continuing to see the situation resolve and will be monitoring closely. We'll update again in ~20 min.Jun 1, 14:32 PDT Identified - We’ve implemented a solution and are beginning to see the situation resolve. We’ll continue to monitor to ensure it resolves fully. We’ll update again in ~20 min. Thanks for your patience.Jun 1, 13:59 PDT Update - We believe we’ve identified the issue with the site failing to load and are working toward a solution. Thanks for your patience, we’ll update again in ~20 minutes.Jun 1, 13:30 PDT Investigating - We're seeing further issues with the site failing to load. We're investigating.

Last Update: A few months ago

Site failing to load

Jun 1, 14:53 PDT Monitoring - We're continuing to see the situation resolve and will be monitoring closely. We'll update again in ~20 min.Jun 1, 14:32 PDT Identified - We’ve implemented a solution and are beginning to see the situation resolve. We’ll continue to monitor to ensure it resolves fully. We’ll update again in ~20 min. Thanks for your patience.Jun 1, 13:59 PDT Update - We believe we’ve identified the issue with the site failing to load and are working toward a solution. Thanks for your patience, we’ll update again in ~20 minutes.Jun 1, 13:30 PDT Investigating - We're seeing further issues with the site failing to load. We're investigating.

Last Update: A few months ago

Site failing to load

Jun 1, 14:32 PDT Identified - We’ve implemented a solution and are beginning to see the situation resolve. We’ll continue to monitor to ensure it resolves fully. We’ll update again in ~20 min. Thanks for your patience.Jun 1, 13:59 PDT Update - We believe we’ve identified the issue with the site failing to load and are working toward a solution. Thanks for your patience, we’ll update again in ~20 minutes.Jun 1, 13:30 PDT Investigating - We're seeing further issues with the site failing to load. We're investigating.

Last Update: A few months ago

Site failing to load

Jun 1, 13:59 PDT Update - We believe we’ve identified the issue with the site failing to load and are working toward a solution. Thanks for your patience, we’ll update again in ~20 minutes.Jun 1, 13:30 PDT Investigating - We're seeing further issues with the site failing to load. We're investigating.

Last Update: A few months ago

Site failing to load

Jun 1, 13:30 PDT Investigating - We're seeing further issues with the site failing to load. We're investigating.

Last Update: A few months ago

Website not loading intermittently

Jun 1, 13:04 PDT Resolved - This incident has been resolved.Jun 1, 12:29 PDT Update - The website is now loading cleanly but we are continuing to monitor as AWS is now degraded. If you're still seeing anything amiss, please email support@Jun 1, 12:27 PDT Monitoring - The website is now loading cleanly but we are continuing to monitor as AWS is now degradedJun 1, 11:54 PDT Investigating - We're seeing a series of issues on the website manifesting as "Something doesn't look right...." and "Server is overloaded, please try later". We're investigating.

Last Update: A few months ago

Website not loading intermittently

Jun 1, 12:29 PDT Update - The website is now loading cleanly but we are continuing to monitor as AWS is now degraded. If you're still seeing anything amiss, please email support@Jun 1, 12:27 PDT Monitoring - The website is now loading cleanly but we are continuing to monitor as AWS is now degradedJun 1, 11:54 PDT Investigating - We're seeing a series of issues on the website manifesting as "Something doesn't look right...." and "Server is overloaded, please try later". We're investigating.

Last Update: A few months ago

Website not loading intermittently

Jun 1, 12:27 PDT Monitoring - The website is now loading cleanly but we are continuing to monitor as AWS is now degradedJun 1, 11:54 PDT Investigating - We're seeing a series of issues on the website manifesting as "Something doesn't look right...." and "Server is overloaded, please try later". We're investigating.

Last Update: A few months ago

Website not loading intermittently

Jun 1, 11:54 PDT Investigating - We're seeing a series of issues on the website manifesting as "Something doesn't look right...." and "Server is overloaded, please try later". We're investigating.

Last Update: A few months ago

Artifacts "invalid request!"

May 27, 13:53 PDT Resolved - Artifacts URLs are now working again. Thanks for your patience!May 27, 13:42 PDT Identified - Artifacts URLS are erroneously responding with "invalid request!". We're rolling back.

Last Update: A few months ago

Artifacts "invalid request!"

May 27, 13:42 PDT Identified - Artifacts URLS are erroneously responding with "invalid request!". We're rolling back.

Last Update: A few months ago

GitHub hooks

May 23, 11:30 PDT Resolved - The GitHub notifications appear to be back to normal, we will continue to monitor if the status changes again. Thanks for your patience.May 23, 10:37 PDT Update - Github notifications are flowing again and we are keeping pace with demand. We are actively monitoring Github's status and will sound the all clear when they do.May 23, 08:55 PDT Update - Incoming notification volume from GitHub continues to fluctuate. We're making sure we have capacity to handle the upticks. More information at https://status.github.com/May 23, 08:44 PDT Update - We're seeing a steady increase in incoming notifications from GitHub as they effect their repairs. Up to date status available at https://status.github.com/May 23, 08:11 PDT Monitoring - GitHub is having an issue sending notifications of pushes. We're making sure to have capacity to quickly clear the backlog once it has been restored, see https://status.github.com/ for up to date information.

Last Update: A few months ago

GitHub hooks

May 23, 10:37 PDT Update - Github notifications are flowing again and we are keeping pace with demand. We are actively monitoring Github's status and will sound the all clear when they do.May 23, 08:55 PDT Update - Incoming notification volume from GitHub continues to fluctuate. We're making sure we have capacity to handle the upticks. More information at https://status.github.com/May 23, 08:44 PDT Update - We're seeing a steady increase in incoming notifications from GitHub as they effect their repairs. Up to date status available at https://status.github.com/May 23, 08:11 PDT Monitoring - GitHub is having an issue sending notifications of pushes. We're making sure to have capacity to quickly clear the backlog once it has been restored, see https://status.github.com/ for up to date information.

Last Update: A few months ago

GitHub hooks

May 23, 08:55 PDT Update - Incoming notification volume from GitHub continues to fluctuate. We're making sure we have capacity to handle the upticks. More information at https://status.github.com/May 23, 08:44 PDT Update - We're seeing a steady increase in incoming notifications from GitHub as they effect their repairs. Up to date status available at https://status.github.com/May 23, 08:11 PDT Monitoring - GitHub is having an issue sending notifications of pushes. We're making sure to have capacity to quickly clear the backlog once it has been restored, see https://status.github.com/ for up to date information.

Last Update: A few months ago

GitHub hooks

May 23, 08:44 PDT Update - We're seeing a steady increase in incoming notifications from GitHub as they effect their repairs. Up to date status available at https://status.github.com/May 23, 08:11 PDT Monitoring - GitHub is having an issue sending notifications of pushes. We're making sure to have capacity to quickly clear the backlog once it has been restored, see https://status.github.com/ for up to date information.

Last Update: A few months ago

GitHub hooks

May 23, 08:11 PDT Monitoring - GitHub is having an issue sending notifications of pushes. We're making sure to have capacity to quickly clear the backlog once it has been restored, see https://status.github.com/ for up to date information.

Last Update: A few months ago

Artifacts service is down

May 18, 22:24 PDT Resolved - We've rolled back and don't anticipate further issues.May 18, 22:08 PDT Identified - A deploy error caused the artifacts service to go down. Requests to circle-artifacts.com will 503. We're rolling back.

Last Update: A few months ago

Artifacts service is down

May 18, 22:08 PDT Identified - A deploy error caused the artifacts service to go down. Requests to circle-artifacts.com will 503. We're rolling back.

Last Update: A few months ago

Run Queue Increase

May 18, 12:12 PDT Resolved - Build dispatching is happening as usual, thanks for your patience while we dealt with the event.May 18, 11:48 PDT Monitoring - Build dispatching has resumed. We are monitoring the situation.May 18, 11:45 PDT Investigating - We are seeing an increase in the run queue and are looking into why

Last Update: A few months ago

Run Queue Increase

May 18, 11:48 PDT Monitoring - Build dispatching has resumed. We are monitoring the situation.May 18, 11:45 PDT Investigating - We are seeing an increase in the run queue and are looking into why

Last Update: A few months ago

Run Queue Increase

May 18, 11:45 PDT Investigating - We are seeing an increase in the run queue and are looking into why

Last Update: A few months ago

Run Queue Increase

May 17, 19:42 PDT Resolved - We have not seen any run queueing during the past hour. Thank you for your patience during this event.May 17, 18:16 PDT Monitoring - The queue has been cleared, and we’re continuing to monitor closely. If you’re seeing something that doesn’t look right, please let us know in-app or at support@ and we’ll take a look. Thanks for your patienceMay 17, 18:03 PDT Update - We’re continuing to scale up containers to meet demand and drain the queue. Will update again in ~25 min.May 17, 17:37 PDT Update - We’ve deployed new instances to work through the queue, which is draining. We'll update again in ~20 min.May 17, 17:06 PDT Identified - An AWS network event has caused our builders to drop capacity, will update again in 20 minutes.May 17, 16:51 PDT Investigating - The run queue has increased dramatically and we are exploring what the cause is. We will update in 20 minutes.

Last Update: A few months ago

Run Queue Increase

May 17, 18:16 PDT Monitoring - The queue has been cleared, and we’re continuing to monitor closely. If you’re seeing something that doesn’t look right, please let us know in-app or at support@ and we’ll take a look. Thanks for your patience.May 17, 18:03 PDT Update - We’re continuing to scale up containers to meet demand and drain the queue. Will update again in ~25 min.May 17, 17:37 PDT Update - We’ve deployed new instances to work through the queue, which is draining. We'll update again in ~20 min.May 17, 17:06 PDT Identified - An AWS network event has caused our builders to drop capacity, will update again in 20 minutes.May 17, 16:51 PDT Investigating - The run queue has increased dramatically and we are exploring what the cause is. We will update in 20 minutes.

Last Update: A few months ago

Run Queue Increase

May 17, 18:03 PDT Update - We’re continuing to scale up containers to meet demand and drain the queue. Will update again in ~25 min.May 17, 17:37 PDT Update - We’ve deployed new instances to work through the queue, which is draining. We'll update again in ~20 min.May 17, 17:06 PDT Identified - An AWS network event has caused our builders to drop capacity, will update again in 20 minutes.May 17, 16:51 PDT Investigating - The run queue has increased dramatically and we are exploring what the cause is. We will update in 20 minutes.

Last Update: A few months ago

Run Queue Increase

May 17, 17:37 PDT Update - We’ve deployed new instances to work through the queue, which is draining. We'll update again in ~20 min.May 17, 17:06 PDT Identified - An AWS network event has caused our builders to drop capacity, will update again in 20 minutes.May 17, 16:51 PDT Investigating - The run queue has increased dramatically and we are exploring what the cause is. We will update in 20 minutes.

Last Update: A few months ago

Run Queue Increase

May 17, 17:06 PDT Identified - An AWS network event has caused our builders to drop capacity, will update again in 20 minutes.May 17, 16:51 PDT Investigating - The run queue has increased dramatically and we are exploring what the cause is. We will update in 20 minutes.

Last Update: A few months ago

Run Queue Increase

May 17, 16:51 PDT Investigating - The run queue has increased dramatically and we are exploring what the cause is. We will update in 20 minutes.

Last Update: A few months ago

Plan Configuration

May 5, 14:46 PDT Resolved - Further investigation has uncovered that this was an internal incident only, and no customers were affected. If you see anything that looks amiss, please reach out to us at support@. Thanks for your patience.May 5, 14:26 PDT Update - The source we identified as the issue appears not to be the root cause. We’re continuing to actively debug. We’ll update again in ~20 min. Thanks for your patience.May 5, 13:34 PDT Identified - The source of the issue has been identified and we are deploying the fix.May 5, 13:24 PDT Investigating - We are exploring an issue affecting customers' ability to configure plans and settings

Last Update: A few months ago

Plan Configuration

May 5, 14:26 PDT Update - The source we identified as the issue appears not to be the root cause. We’re continuing to actively debug. We’ll update again in ~20 min. Thanks for your patience.May 5, 13:34 PDT Identified - The source of the issue has been identified and we are deploying the fix.May 5, 13:24 PDT Investigating - We are exploring an issue affecting customers' ability to configure plans and settings

Last Update: A few months ago

Plan Configuration

May 5, 13:34 PDT Identified - The source of the issue has been identified and we are deploying the fix.May 5, 13:24 PDT Investigating - We are exploring an issue affecting customers' ability to configure plans and settings

Last Update: A few months ago

Plan Configuration

May 5, 13:24 PDT Investigating - We are exploring an issue affecting customers' ability to configure plans and settings

Last Update: A few months ago

Builds are queuing

Apr 21, 17:36 PDT Resolved - The queue is back to normal. Thank you for your patience as we dealt with the issues from the AWS outage.Apr 21, 17:10 PDT Update - We are continuing to push builds and are monitoring the system. Will update in ~20 minutes.Apr 21, 16:19 PDT Update - We are continuing to push blocked builds while still monitoring the system. Will update in ~20 minutes.Apr 21, 15:55 PDT Update - The initial build queue has been cleared, and we’re working on pushing any builds that are blocked on plan caps. We’re continuing to monitor closely and will update again in ~20 minutes.Apr 21, 15:28 PDT Monitoring - The backlog is cleared, we're continuing to monitor closely.Apr 21, 15:20 PDT Update - AWS us-east-1 experienced a network outage which impacted our system. We have scaled up and are currently still working through the backlog this created. We will update again in ~20 min.Apr 21, 15:02 PDT Identified - We believe we've fixed the issue but we have a large backlog of builds. We're scaling up to meet the new demand but it will take some time. We'll update again in ~20 minutes.Apr 21, 15:00 PDT Investigating - Our system is having trouble dequeueing builds, despite available capacity. We're working on it and will update once we have more info.

Last Update: A few months ago

Builds are queuing

Apr 21, 17:10 PDT Update - We are continuing to push builds and are monitoring the system. Will update in ~20 minutes.Apr 21, 16:19 PDT Update - We are continuing to push blocked builds while still monitoring the system. Will update in ~20 minutes.Apr 21, 15:55 PDT Update - The initial build queue has been cleared, and we’re working on pushing any builds that are blocked on plan caps. We’re continuing to monitor closely and will update again in ~20 minutes.Apr 21, 15:28 PDT Monitoring - The backlog is cleared, we're continuing to monitor closely.Apr 21, 15:20 PDT Update - AWS us-east-1 experienced a network outage which impacted our system. We have scaled up and are currently still working through the backlog this created. We will update again in ~20 min.Apr 21, 15:02 PDT Identified - We believe we've fixed the issue but we have a large backlog of builds. We're scaling up to meet the new demand but it will take some time. We'll update again in ~20 minutes.Apr 21, 15:00 PDT Investigating - Our system is having trouble dequeueing builds, despite available capacity. We're working on it and will update once we have more info.

Last Update: A few months ago

Builds are queuing

Apr 21, 16:19 PDT Update - We are continuing to push blocked builds while still monitoring the system. Will update in ~20 minutes.Apr 21, 15:55 PDT Update - The initial build queue has been cleared, and we’re working on pushing any builds that are blocked on plan caps. We’re continuing to monitor closely and will update again in ~20 minutes.Apr 21, 15:28 PDT Monitoring - The backlog is cleared, we're continuing to monitor closely.Apr 21, 15:20 PDT Update - AWS us-east-1 experienced a network outage which impacted our system. We have scaled up and are currently still working through the backlog this created. We will update again in ~20 min.Apr 21, 15:02 PDT Identified - We believe we've fixed the issue but we have a large backlog of builds. We're scaling up to meet the new demand but it will take some time. We'll update again in ~20 minutes.Apr 21, 15:00 PDT Investigating - Our system is having trouble dequeueing builds, despite available capacity. We're working on it and will update once we have more info.

Last Update: A few months ago

Builds are queuing

Apr 21, 15:55 PDT Update - The initial build queue has been cleared, and we’re working on pushing any builds that are blocked on plan caps. We’re continuing to monitor closely and will update again in ~20 minutes.Apr 21, 15:28 PDT Monitoring - The backlog is cleared, we're continuing to monitor closely.Apr 21, 15:20 PDT Update - AWS us-east-1 experienced a network outage which impacted our system. We have scaled up and are currently still working through the backlog this created. We will update again in ~20 min.Apr 21, 15:02 PDT Identified - We believe we've fixed the issue but we have a large backlog of builds. We're scaling up to meet the new demand but it will take some time. We'll update again in ~20 minutes.Apr 21, 15:00 PDT Investigating - Our system is having trouble dequeueing builds, despite available capacity. We're working on it and will update once we have more info.

Last Update: A few months ago

Builds are queuing

Apr 21, 15:28 PDT Monitoring - The backlog is cleared, we're continuing to monitor closely.Apr 21, 15:20 PDT Update - AWS us-east-1 experienced a network outage which impacted our system. We have scaled up and are currently still working through the backlog this created. We will update again in ~20 min.Apr 21, 15:02 PDT Identified - We believe we've fixed the issue but we have a large backlog of builds. We're scaling up to meet the new demand but it will take some time. We'll update again in ~20 minutes.Apr 21, 15:00 PDT Investigating - Our system is having trouble dequeueing builds, despite available capacity. We're working on it and will update once we have more info.

Last Update: A few months ago

Builds are queuing

Apr 21, 15:20 PDT Update - AWS us-east-1 experienced a network outage which impacted our system. We have scaled up and are currently still working through the backlog this created. We will update again in ~20 min.Apr 21, 15:02 PDT Identified - We believe we've fixed the issue but we have a large backlog of builds. We're scaling up to meet the new demand but it will take some time. We'll update again in ~20 minutes.Apr 21, 15:00 PDT Investigating - Our system is having trouble dequeueing builds, despite available capacity. We're working on it and will update once we have more info.

Last Update: A few months ago

Builds are queuing

Apr 21, 15:02 PDT Identified - We believe we've fixed the issue but we have a large backlog of builds. We're scaling up to meet the new demand but it will take some time. We'll update again in ~20 minutes.Apr 21, 15:00 PDT Investigating - Our system is having trouble dequeueing builds, despite available capacity. We're working on it and will update once we have more info.

Last Update: A few months ago

Builds are queuing

Apr 21, 15:00 PDT Investigating - Our system is having trouble dequeueing builds, despite available capacity. We're working on it and will update once we have more info.

Last Update: A few months ago

Artifacts outage

Apr 14, 23:24 PDT Resolved - New artifacts servers are up and fully functional. We'll continue to monitor but we don't anticipate any further issues.Apr 14, 23:20 PDT Identified - A bug in our routine deploy process brought the artifacts http servers down. We're redeploying manually and will be back to normal shortly.

Last Update: A few months ago

Artifacts outage

Apr 14, 23:20 PDT Identified - A bug in our routine deploy process brought the artifacts http servers down. We're redeploying manually and will be back to normal shortly.

Last Update: A few months ago

Post GitHub Outage backlog recovery

Apr 5, 15:48 PDT Resolved - Queue is back to normal levels.Apr 5, 15:17 PDT Monitoring - We are working on clearing the backlog now that GitHub is sending commits and PRs

Last Update: A few months ago

Post GitHub Outage backlog recovery

Apr 5, 15:17 PDT Monitoring - We are working on clearing the backlog now that GitHub is sending commits and PRs

Last Update: A few months ago

OS X Fleet network maintenance

Mar 25, 23:56 PDT Completed - The scheduled maintenance has been completed.Mar 25, 20:00 PDT In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Mar 25, 19:18 PDT Scheduled - One of our two OS X Fleet data centers is undergoing network maintenance - for details visit

Last Update: A few months ago

OS X Fleet network maintenance

Mar 25, 20:00 PDT In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Mar 25, 19:18 PDT Scheduled - One of our two OS X Fleet data centers is undergoing network maintenance - for details visit

Last Update: A few months ago

OS X Fleet network maintenance

Mar 25, 19:18 PDT Scheduled - One of our two OS X Fleet data centers is undergoing network maintenance - for details visit

Last Update: A few months ago

Adding environment variables in project settings unavailable

Mar 22, 13:30 PDT Resolved - The fix in place seems to be sufficient; let us know if you see any further issues!Mar 22, 11:41 PDT Monitoring - We have resolved the issue with env vars. We will continue to monitor closely. Thank you for your patience. If you see anything that doesn’t look right, please email us at support@ and we’ll follow up.Mar 22, 11:16 PDT Identified - We’ve identified the cause behind the env vars issue and are working on a fix. Thanks for your patience. Will update again in ~20 minutes.Mar 22, 10:57 PDT Update - We have possibly identified the cause behind the env vars issue and are validating. We will update again in ~20 minutes.Mar 22, 10:23 PDT Investigating - Adding env vars in project settings currently does not work for all projects. We are investigating the issue and will update as soon as we know more.

Last Update: A few months ago

Build output failing to be written to the console

Mar 18, 09:41 PDT Resolved - This incident has been resolved.Mar 18, 08:57 PDT Monitoring - We had an issue with the queue where we store output for processing. The issue is resolved but some build output is lost. We've added extra capacity for you to rerun any builds as necessary.Mar 18, 08:22 PDT Investigating - Build output is failing to be written to the console. We are investigating the cause and will update as soon as we know more.

Last Update: A few months ago

Project Permissions

Mar 7, 14:13 PDT Resolved - We have not seen any new occurrences of this issue. Please do reach out to our support team if this happens again.Mar 7, 11:51 PDT Monitoring - We have implemented a solution and are adding more logging to track new occurrences. Please do reach out to support if this happens again.Mar 7, 11:28 PDT Update - We have identified a possible source of the permission issue and are working to confirm it. We will provide an update in the next 20 minutes.Mar 7, 11:07 PDT Investigating - We are exploring some projects not being able to see their repositories from the CircleCI.com UI - please contact us if this is happening to you.

Last Update: A few months ago

Priority DB Maintenance

Mar 3, 21:33 PDT Completed - The event is over and was performed as planned. Thank you for your patience.Mar 3, 20:00 PDT In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Mar 3, 17:25 PDT Scheduled - Between 0400-0600 UTC on 2016-03-04, we will be replacing a failing database host. The cutover should be transparent, but may impact some in-flight db operations. We will have extra engineers as well as extra capacity on hand in case of any issues.

Last Update: A few months ago

Some users unable to open support tickets

Mar 3, 15:58 PDT Resolved - We have resolved the issue and the in-app support button should be fully functional. Thank you for your patience. If you see anything that doesn’t look right, please email us at sayhi@ and we will follow up.Mar 3, 15:02 PDT Update - We're still in active investigation with our support ticket providers. Continue using support@circleci.com to open support tickets. We will update again when we know more.Mar 3, 13:58 PDT Update - We're continuing to speak with the provider of our support tickets. Please use support@circleci.com to submit support tickets. We're working to resolve this issue as quickly as we can. Thank you for your patience.Mar 3, 12:57 PDT Update - We're continuing to work with the provider of our support tickets. Please continue to reach out to support@circleci.com to open tickets. We will update as soon as we have more information.Mar 3, 12:17 PDT Update - We're continuing to work with the service that is responsible for our support tickets to identify the issue with opening tickets in-app. Please continue to use support@circleci.com to open a support ticket. Apologies for the inconvenience. We'll update again in ~30 min.Mar 3, 11:46 PDT Investigating - We are currently experiencing an issue with our support service, and some users cannot open a support ticket in-app. We apologize for any inconvenience. Please send your requests to support@circleci.com to open a ticket. We will update again in ~30 min.

Last Update: A few months ago

Builds are queueing

Feb 29, 12:23 PDT Resolved - The build queue has returned to normal. The incident is closed and our performance is back to fully operational.Feb 29, 12:17 PDT Monitoring - Build queue issue resolved, we are monitoring the recovery.Feb 29, 12:13 PDT Identified - The issue has been identified and a fix is being implemented.Feb 29, 12:11 PDT Investigating - We’re seeing builds queuing for almost all customers. We’re working to remedy the issue. We will update again in ~20 min.

Last Update: A few months ago

Delayed test results processing

Feb 28, 15:37 PDT Resolved - Things are stable and processing as expected.Feb 28, 15:23 PDT Monitoring - The issue has been resolved and only a small subset of builds were affected. Monitoring to ensure it stays stable.Feb 28, 14:09 PDT Investigating - We're noticing increased latency in processing test results. Test results may take longer than normal to appear in the "Test Results" tab of the build page. We're investigating the underlying cause.

Last Update: A few months ago

Real-time page updates unavailable

Feb 25, 04:55 PDT Resolved - This incident has been resolved.Feb 25, 04:25 PDT Monitoring - We've rolled a fix out across our fleet. Real-time updates are streaming correctly again.Feb 25, 04:00 PDT Update - The fleet has been partially re-deployed with the fix, we are continuing to phase out the faulty instances and replace them with functional ones.Feb 25, 03:31 PDT Identified - We've identified an issue pushing real-time command output and build status updates to our frontend. The output and updates are still saved and are fully available after the build finishes. Refreshing the page will load the latest data. We are working towards a resolution and will update again in ~30 minutes.

Last Update: A few months ago

Unable to open support tickets

Feb 20, 19:30 PDT Resolved - Issues with our support service have been resolved. Support requests can now be submitted normally.Feb 20, 11:53 PDT Update - We are currently experiencing issues with our support services. We apologize for any inconvenience. Please send your requests to support@circleci.com.Feb 19, 23:21 PDT Investigating - We are having an issue with the service that is responsible for support tickets. You can still send email to support@circleci.com to open support tickets.

Last Update: A few months ago

High build queue

Feb 18, 23:24 PDT Resolved - Queue remains clear and builds are running normally.Feb 18, 22:44 PDT Monitoring - Queue has drained. Keeping capacity high and monitoring to ensure it stays that way. Update in ~30 mins.Feb 18, 22:25 PDT Update - We're continuing to drain the queue and ramp up capacity. Next update in ~20 mins.Feb 18, 22:10 PDT Update - Issue is resolved and queue is draining. We are bringing up more capacity to accelerate recovery. Next update in ~20 mins.Feb 18, 21:58 PDT Identified - We've identified the issue and are working to remedy it as fast as we can. Will update in ~20 mins.Feb 18, 20:36 PDT Investigating - The build queue is very high. We're working on recovering.

Last Update: A few months ago

GitHub outage

Feb 17, 14:01 PDT Resolved - GitHub is available again, but is experiencing a DDoS attack. We'll update if we notice any impact on CircleCI service.Feb 17, 13:53 PDT Monitoring - GitHub is experiencing an outage. Our API calls to GitHub may fail, and we may miss build hooks. We're monitoring and we'll update once we have specifics.

Last Update: A few months ago

Backlog processing test results

Feb 12, 07:37 PDT Resolved - We have fully recovered – all queues are clear.Feb 12, 07:32 PDT Identified - Our processing of JUnit-style test results is back-logged. Builds are not affected, but rendering of the test-failures tab for recent builds will be delayed until we can clear the queue. We have fixed the issue and we will be up-to-date shortly.

Last Update: A few months ago

Builds are queueing

Feb 3, 15:59 PDT Resolved - The queue remains clear. We will continue to monitor closely. If you see anything amiss, please reach out to sayhi@circleci.com and we'll help get you sorted.Feb 3, 15:22 PDT Monitoring - The queue has drained. We will continue to monitor very closely, and will update with any new information. Thanks for your patience.Feb 3, 15:07 PDT Identified - We've identified a slowdown in queue processing, have corrected, and are adding capacity to help clear the backlog. We'll update with more information in ~20 minutes.Feb 3, 14:59 PDT Investigating - Builds are queueing. We are looking into it and will have more information as soon as we can.

Last Update: A few months ago

Github experiencing network disruption

Jan 27, 20:48 PDT Resolved - We’re continuing to see things moving smoothly as GitHub resolves. Thank you for your patience!Jan 27, 20:17 PDT Update - We’re seeing things moving smoothly. Some webhooks may be delayed as GitHub continues to recover, but we’re continuing to monitor GitHub’s status closely and will update again in ~30 min unless something of note occurs.Jan 27, 19:49 PDT Update - We're continuing to receive some webhooks from GitHub, but still expect to see more coming through as GitHub continues to recover. We’re continuing to monitor closely and will update again in ~20 min.Jan 27, 19:28 PDT Update - We’ve received some webhooks from GitHub, but expect to see more coming through as GitHub continues to recover. We’re continuing to monitor and will update again in ~20 min.Jan 27, 19:03 PDT Monitoring - We’re continuing to see webhooks as GitHub continues to recover. We’re monitoring extremely closely. We'll update again in ~20 min.Jan 27, 18:39 PDT Update - We’re starting to see webhooks again following GitHub’s change in status to ‘recovering’. We’re monitoring closely as they come back online. We'll update again in ~20 min.Jan 27, 18:18 PDT Update - We are still unable to serve pages that require GitHub project-level authorization. We’re continuing to monitor the GitHub outage and are ready to respond once it has cleared. We will update again in ~20 min.Jan 27, 17:58 PDT Update - We are currently unable to serve pages that require GitHub project-level authorization. We are monitoring and ready to respond once the GitHub outage has passed.Jan 27, 17:34 PDT Update - We are currently unable to serve pages that require GitHub project-level authorization. We will update again in ~20 min.Jan 27, 17:13 PDT Identified - GitHub is experiencing an outage. We're scaling up to clear the load as fast as we can as soon as they're back up. Will update again once we've heard more from GitHub.

Last Update: A few months ago

High queue times.

Jan 25, 17:22 PDT Resolved - Latency remains low and the build queue is empty. We'll continue to monitor closely.Jan 25, 16:38 PDT Update - Latency is still low. We're monitoring very closely and will update again in ~30 minutes.Jan 25, 16:11 PDT Monitoring - The queue is continuing to drain. We’re focusing on maintaining the low latency while digging further into root cause.Jan 25, 15:44 PDT Update - We've reduced the queue, but we're still not back to full speed. We're continuing to explore solutions while also investigating root cause. We'll update again in ~20 min.Jan 25, 15:14 PDT Update - We're still seeing builds queueing and working diligently to discover the root cause. We will update again in ~20 min.Jan 25, 14:46 PDT Identified - We're seeing high queue times. We've scaled up to address the increased load and queue times are shortening. We're investigating how to shorten them more quickly.

Last Update: A few months ago

Artifacts unavailability

Jan 25, 05:12 PDT Resolved - Artifacts are back up to full availability.Jan 25, 04:56 PDT Identified - We've located the source of artifacts unavailability and are deploying a fix now.Jan 25, 03:40 PDT Investigating - We are investigating an availability issue with accessing artifacts.

Last Update: A few months ago

Build output not showing.

Jan 22, 13:38 PDT Resolved - This incident has been resolved.Jan 22, 12:25 PDT Monitoring - Code has been rolled back and output is displaying. We're monitoring for any further issues.Jan 22, 11:54 PDT Identified - We've noticed an issue where builds don't display their output. We're rolling back.

Last Update: A few months ago

Investigating build failures

Jan 18, 12:08 PDT Resolved - We have resolved the issue and builds are running normally at this time.Jan 18, 11:13 PDT Update - We're continuing to scale to meet demand, and closely monitoring. Thanks for your patience. We'll update if anything changes.Jan 18, 10:47 PDT Monitoring - We have identified and fixed the build failures issue. We are scaling up extra boxes to help drain the queue, and closely monitoring.Jan 18, 10:24 PDT Update - We're still investigating the cause of the build failures. Getting this resolved is our top priority, we'll update further as we know more.Jan 18, 09:58 PDT Investigating - We're currently seeing build failures. We're investigating the cause and will update as soon as we know more.

Last Update: A few months ago

Builds queueing

Jan 17, 20:47 PDT Resolved - Builds are running normally again.Jan 17, 19:50 PDT Monitoring - A fix has been implemented and we are monitoring the results.Jan 17, 19:11 PDT Identified - We have identified the issue that was preventing builds from being queued and are monitoring.Jan 17, 18:16 PDT Investigating - Builds are queueing. We are looking into it and will have more information as soon as we can.

Last Update: A few months ago

OSX Builds Queueing

Jan 14, 17:27 PDT Resolved - Queue remains empty. We'll continue to watch things closely. Please reach out to support if you see anything that's not quite right.Jan 14, 16:52 PDT Monitoring - The queue is cleared, we’ll continue to monitor closely.Jan 14, 16:32 PDT Identified - We’ve identified the sub-system responsible for the issue. We have a possible solution implemented, and are currently monitoring. Will update again in ~30 min.Jan 14, 16:06 PDT Investigating - OSX builds are queueing, we are looking into it and will have more information as soon as we can.

Last Update: A few months ago

Intermittent Issues with Artifacts

Jan 12, 20:43 PDT Resolved - We've discovered the underlying issue and fixed it. We will continue to monitor closely. If you see any issues with artifacts please ping support at sayhi@ and we'll dig in.Jan 12, 17:07 PDT Update - We're monitoring very closely and will continue to do so as we near a permanent solution. Again, thanks for your patience and if you see lengthy issues with artifacts please ping support at sayhi@ and we'll help get things sorted.Jan 12, 14:49 PDT Monitoring - We’ve rolled out a longer-term fix which has significantly reduced incidences of build artifacts being unavailable. We are continuing to monitor and work toward a permanent fix. If you experience lengthy issues with artifacts, please don’t hesitate to reach out to us at sayhi@ and we’ll help get you sorted.Jan 12, 14:05 PDT Update - Still seeing some artifacts issues, but incidence is down and we're continuing to monitor and try solutions to find a permanent fix. Will update as we know more.Jan 12, 13:15 PDT Update - We're continuing to work toward a permanent fix for the artifacts issue we're seeing, and we're making progress. We appreciate your patience and we'll update again in ~1 hour.Jan 12, 11:59 PDT Update - We’re continuing to work on a permanent fix for artifacts but seeing intermittent issues. Will update again in ~1 hour.Jan 12, 10:19 PDT Identified - We're still seeing some intermittent issues with artifacts. We have a temporary fix in place, but service may be unreliable until we have a permanent fix, which we're working on now.

Last Update: A few months ago

Artifacts are not accessible

Jan 12, 01:59 PDT Resolved - This incident has been resolved.Jan 12, 01:37 PDT Monitoring - The issue has been identified and resolved, but we're keeping a close eye on it.Jan 12, 00:54 PDT Investigating - We are having some issues with the artifacts service, and currently you cannot view or download artifacts. Uploading to artifacts in builds is working fine.

Last Update: A few months ago

Builds queueing

Dec 17, 20:13 PDT Resolved - Builds are running normally again.Dec 17, 19:44 PDT Monitoring - We have implemented a fix and builds should no longer be queueing. We will continue to monitor to ensure that everything remains fully operational. Next update in 30 minutes.Dec 17, 19:32 PDT Identified - The issue has been identified, we are working on implementing a fix. Next update in 15 minutes.Dec 17, 19:23 PDT Investigating - Builds are queueing. We are now investigating.

Last Update: A few months ago

OSX System Restart

Dec 1, 08:27 PDT Resolved - We have completed the reboot. All systems are fully operational again.Dec 1, 08:23 PDT Monitoring - We are restarting a database that controls OSX builds. Some OSX builds might queue for a few minutes.

Last Update: A few months ago

iOS builds currently queueing

Nov 25, 15:53 PDT Resolved - This incident has been resolved.Nov 25, 15:22 PDT Update - The queue is empty, and we're continuing to monitor for any further issues.Nov 25, 15:09 PDT Monitoring - We’ve identified the issue and the queue is draining. We’ll continue to monitor closely, but expect this to be completely resolved shortly.Nov 25, 14:42 PDT Investigating - Currently investigating a backlog of iOS builds. Update in 30 mins.

Last Update: A few months ago
