Travis CI Status

Builds not starting on travis-ci.org (open source)

Oct 28, 12:08 UTC Resolved - Builds are processing normally. The queue for container-based builds has cleared. We will continue to monitor the situation. We apologize for the delays in picking up your builds, and thank you for your patience.
Oct 28, 11:56 UTC Update - Builds are running again on travis-ci.org. One of the nodes in our RabbitMQ cluster failed. We performed a failover and restarted the node, which recovered service for that RabbitMQ cluster. We have scaled up additional resources for build processing, and are working through a backlog of container-based builds. We are continuing to monitor the situation.
Oct 28, 10:35 UTC Investigating - Builds do not appear to be starting on travis-ci.org (open source). We are investigating the issue.

Last Update: About 20 days ago

Connection issues with travis-ci.com

Oct 24, 12:25 UTC Resolved - This issue has been resolved.
Oct 24, 09:43 UTC Monitoring - The intermittent DNS issues that were causing problems while accessing travis-ci.com seem to be resolved now. We have escalated the issue to an upstream provider to investigate further and we continue monitoring the situation.
Oct 24, 09:34 UTC Investigating - We're investigating possible DNS problems accessing travis-ci.com.

Last Update: About 24 days ago

Delayed build status updates

Oct 23, 22:29 UTC Resolved - This incident has been resolved.
Oct 23, 21:52 UTC Investigating - We are investigating delays in build status updates for repositories on travis-ci.com.

Last Update: About 24 days ago

Missing/delayed builds and git clone errors

Oct 22, 23:35 UTC Resolved - This incident has been resolved.
Oct 22, 20:16 UTC Update - We have caught up with the container build backlog.
Oct 22, 19:23 UTC Update - We are seeing delays on all infrastructures due to build request backlogs.
Oct 22, 18:29 UTC Update - We are processing build requests. Container builds (`sudo: false`) may experience some additional delay.
Oct 22, 17:37 UTC Update - We are still waiting for GitHub to resume webhook delivery. Thanks for hanging in there with us.
Oct 22, 15:19 UTC Update - As stated on GitHub's status page (https://status.github.com/messages), webhook delivery is still paused and, as such, we cannot trigger new builds yet.
Oct 22, 13:42 UTC Update - We are still waiting for GitHub to restart sending webhooks for repository/commit events. We will post an update as soon as this happens. Thank you for your patience.
Oct 22, 12:12 UTC Monitoring - GitHub is currently experiencing a service outage (https://status.github.com/messages) and we are not receiving repository/commit events. Users might experience missing or delayed Travis CI builds and GitHub authentication errors while trying to log in to our application or while cloning repositories during their jobs.

Last Update: About 25 days ago

Build delays for sudo-enabled builds on travis-ci.com

Jun 14, 16:59 UTC Resolved - This incident has been resolved.
Jun 14, 15:55 UTC Update - We are continuing to investigate this issue.
Jun 14, 15:55 UTC Investigating - We are seeing build delays for some sudo-enabled builds on travis-ci.com. We are looking into it.

Last Update: About 26 days ago

Delays on sudo-enabled infrastructure for open-source builds

Jun 25, 18:15 UTC Resolved - Most of the issues have been resolved. We are aware that users will still experience occasional delays when starting builds and will be continuing to work on this.
Jun 25, 16:40 UTC Update - We have addressed some of the issues contributing to the delays and are continuing to investigate the remaining symptoms.
Jun 25, 16:09 UTC Investigating - We are currently aware of issues leading to build delays on our sudo-enabled infrastructure for https://travis-ci.org (i.e. for open-source builds), and are investigating.

Last Update: About 26 days ago

Errors activating/deactivating repositories on Travis CI

Jun 29, 09:35 UTC Resolved - We have released a fix and activating/deactivating repositories should be possible again. Please get in contact with us at support@travis-ci.com if you're still experiencing any issues.
Jun 29, 09:10 UTC Identified - The issue has been identified and we're releasing a fix. Thank you for your patience.
Jun 29, 08:33 UTC Investigating - Users are currently unable to activate/deactivate GitHub repositories on Travis CI through the Legacy Services Integration. We're aware of the issue and are working to identify the reason. Managing repositories via GitHub Apps is not affected by this problem.

Last Update: About 26 days ago

xcode6.4 images unavailable for macOS builds

Jul 15, 14:58 UTC Resolved - The xcode6.4 image for macOS builds has been restored. Thank you for your patience.
Jul 14, 15:36 UTC Update - The malfunctioning xcode6.4 image issue for macOS builds has been escalated to our infrastructure provider. macOS jobs specifying xcode6.4 in the .travis.yml will fail over to xcode8.3 (the default Xcode image). If your testing needs require xcode6.4 specifically, please let us know at support@travis-ci.org. We'll continue to update the status.
Jul 13, 23:38 UTC Identified - Due to errors in configuration, osx_image: xcode6.4 in macOS builds is temporarily unavailable. Please use another Xcode image in your .travis.yml: https://docs.travis-ci.com/user/reference/osx/#OS-X-Version. As this is an older image, if you must use xcode6.4 because you are targeting OS X 10.10, please contact support@travis-ci.com so we are aware of the scope of the issue. Thank you.
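
For reference, switching to another Xcode image is a one-line change in the project's .travis.yml. Below is a minimal sketch assuming xcode8.3, the default image mentioned in the update above; any other image from the linked OS X reference page works the same way.

```yaml
# Minimal sketch: pin a macOS job to a specific Xcode image.
# xcode8.3 is the default referenced above; substitute any image from
# https://docs.travis-ci.com/user/reference/osx/ as needed.
os: osx
osx_image: xcode8.3
script:
  - xcodebuild -version
```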

Last Update: About 26 days ago

Build delays on .org

Jul 22, 20:03 UTC Resolved - Builds are processing normally on .org.
Jul 22, 19:39 UTC Investigating - We are currently investigating an issue where builds are delayed on travis-ci.org.

Last Update: About 26 days ago

Builds failing due to an error in the .travis.yml file

Jul 30, 14:46 UTC Resolved - This incident has been resolved.
Jul 30, 13:43 UTC Update - We have observed no further occurrences of this issue, and are now considering it resolved.
Jul 30, 12:14 UTC Monitoring - An issue in the communication with an upstream provider has been addressed. We are monitoring the health of the system. Thanks for your patience.
Jul 30, 11:21 UTC Update - From our investigation, these build errors seem to be related to problems retrieving the .travis.yml file of the affected repositories and are not caused by an issue with the contents of the .travis.yml itself. Some of our users have reported that restarting the affected build helps in some cases. Our team continues investigating and working to resolve this issue.
Jul 30, 10:27 UTC Investigating - We’re investigating reports of builds failing with the error: “There was an error in the .travis.yml file from which we could not recover”. Our team is working to find the root of this issue and we’ll post updates as soon as we have more information. Thank you for your patience.

Last Update: About 26 days ago

Sudo-required builds aren't starting on both travis-ci.com and travis-ci.org

Aug 4, 16:45 UTC Resolved - This incident has been resolved. We sincerely thank you for your patience.
Aug 4, 16:21 UTC Monitoring - The backlog has cleared. We are monitoring the situation.
Aug 4, 15:02 UTC Identified - We've identified the problem and are currently rolling out a fix.
Aug 4, 12:06 UTC Investigating - We are seeing both private and open source builds routed to our sudo-enabled infrastructure (i.e. with `sudo: required`) that aren't starting at the moment. We are looking into it and we will provide an update as soon as possible. Thank you for your patience.
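
For readers unfamiliar with the routing mentioned above: which infrastructure a job runs on is selected by the `sudo` key in .travis.yml. The fragment below is illustrative only (the language and script lines are not from the incident) and simply shows the `sudo: required` setting that identifies the affected builds.

```yaml
# Illustrative only: `sudo: required` routes this job to the sudo-enabled
# (full VM) infrastructure affected by this incident; `sudo: false` would
# route it to the container-based infrastructure instead.
language: generic
sudo: required
dist: trusty
script:
  - sudo apt-get update -qq
```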

Last Update: About 26 days ago

Builds with `sudo: required` intermittently failing on travis-ci.com and travis-ci.org

Aug 10, 20:34 UTC Resolved - We're seeing normal rates of processing and job state finishes for our sudo-required builds and are marking this as resolved. If you see problems with your sudo builds, please contact support@travis-ci.com.
Aug 10, 18:21 UTC Monitoring - We have identified the reason the network packets were getting dropped and causing intermittent network issues. We've proceeded with fixing the situation. We are monitoring to see if things stay stable. Thanks for hanging in there with us.
Aug 10, 17:10 UTC Identified - These issues seem to stem from network packets getting dropped in some parts of our sudo-enabled infrastructure. We are communicating with our infrastructure provider to get more insights.
Aug 10, 15:19 UTC Investigating - We are currently receiving reports of private builds with `sudo: required` intermittently failing on travis-ci.com. The reported behaviors are builds stalling while downloading something (e.g. the cache, a Docker image or Chrome) or failing because of SSL/TLS errors. Other behaviors are also possible. We are looking into it and we will post an update ASAP. Thank you for your patience!

Last Update: About 26 days ago

Builds with `sudo: required` intermittently failing on travis-ci.com and travis-ci.org

Aug 13, 13:51 UTC Resolved - Thanks for bearing with us! If you encounter any more issues, please reach out to support@travis-ci.com.
Aug 13, 13:01 UTC Monitoring - We have mitigated the issue and are seeing error rates drop. We continue to monitor the situation.
Aug 13, 11:02 UTC Update - We are continuing to work on a fix, but have no mentionable updates yet. Thank you for bearing with us.
Aug 13, 09:11 UTC Identified - We have identified the cause of the networking issues and are working on bringing the systems back to normal.
Aug 13, 07:47 UTC Update - We have taken measures to mitigate the slowness in build processing and start times and are continuing to investigate the networking errors.
Aug 13, 03:07 UTC Update - We are continuing to investigate this issue.
Aug 13, 01:39 UTC Investigating - We are unfortunately still seeing builds failing intermittently because of network-related problems. Our team is looking into it. Apologies again for the inconvenience and we will post an update as soon as we know more.

Last Update: About 26 days ago

Builds with `sudo: required` intermittently failing on travis-ci.com and travis-ci.org

Aug 13, 23:55 UTC Resolved - We have mitigated the issue and are seeing error rates drop. We continue to monitor the situation.
Aug 13, 21:17 UTC Investigating - After stabilizing network issues by re-provisioning some of the underlying infrastructure, we are seeing repeating symptoms of network instability. We are re-opening this and investigating further.

Last Update: About 26 days ago

Builds with `sudo: required` intermittently failing on travis-ci.com and travis-ci.org

Aug 15, 16:08 UTC Resolved - We are currently not seeing anything that impacts build processing, but will continue to monitor things closely. Please report any issues to support@travis-ci.com.
Aug 15, 09:37 UTC Investigating - We are continuing to experience recurring networking issues for builds running in Google Compute Engine. We are working hard to find the root cause of this and are deeply sorry for the inconvenience.

Last Update: About 26 days ago

Increased API latency for api.travis-ci.org

Sep 5, 18:58 UTC Resolved - We saw a degradation in API performance starting at around 16:07 UTC, correlating with a lot of incoming traffic to a specific endpoint. We put in place a block for that traffic at 18:33 UTC, after which API performance recovered. We are monitoring the situation.

Last Update: About 26 days ago

Unable to login to travis-ci.org

Sep 10, 20:11 UTC Resolved - The login issue has been resolved.
Sep 10, 19:18 UTC Update - We are continuing to investigate this issue, and will provide updates as soon as we are able.
Sep 10, 18:42 UTC Investigating - We are receiving reports of users being unable to log in to travis-ci.org and are investigating.

Last Update: About 26 days ago

Build VMs not booting on the sudo-enabled infrastructure

Sep 10, 20:51 UTC Resolved - After the patch was rolled out, sudo-enabled builds are processing normally again. We have been monitoring for errors for the last hour or so, and we are marking this incident as resolved. Please contact support if you are experiencing problems with your sudo-enabled builds.
Sep 10, 18:55 UTC Update - We are in the last rounds of rollouts for the new patch, and waiting on the last of the VM clean-ups before restarting the remainder of services. Sudo-enabled builds are running at reduced capacity. Thank you for your patience.
Sep 10, 17:45 UTC Update - We have identified an error condition that is causing our job execution service to crash. We are in the process of rolling out a patch for this service, and cleaning up non-terminated job VMs that were leaked by the crash.
Sep 10, 16:44 UTC Identified - The issue has been identified and a fix is being implemented.
Sep 10, 16:40 UTC Update - We are seeing this behavior for open source builds as well.
Sep 10, 16:28 UTC Investigating - We are receiving reports of VMs not booting for private builds on the sudo-enabled infrastructure (i.e. `sudo: required`). Our team is looking into it and we'll post updates as soon as we know more. Thank you for your patience!

Last Update: About 26 days ago

Sync and commit status updates

Sep 11, 09:35 UTC Resolved - This incident has been resolved.
Sep 11, 02:16 UTC Monitoring - We believe the issue to be resolved but are continuing to monitor for any further developments.
Sep 11, 00:00 UTC Investigating - We are currently investigating this issue.
Sep 11, 00:00 UTC Monitoring - We are still investigating this issue, but you should now be able to log in once more.
Sep 10, 21:24 UTC Investigating - We are aware of an issue affecting sync between GitHub and Travis CI. You may experience issues with GitHub commit status updates and login during this time.

Last Update: About 26 days ago

Delays in GitHub build statuses and login issues

Sep 13, 19:16 UTC Resolved - This incident has been resolved.
Sep 13, 17:42 UTC Monitoring - We've scaled up the services that post build statuses to GitHub commits & PRs, and are reworking some of our platform functionality to prevent future login issues on travis-ci.com. We are continuing to monitor the situation.
Sep 13, 15:51 UTC Update - A side effect of this incident is that you might not be able to log in to our site (i.e. https://travis-ci.com). Sorry for the inconvenience.
Sep 13, 15:34 UTC Identified - We have identified the source of the status update delays and are working on a fix.
Sep 13, 14:46 UTC Investigating - We're investigating delays in posting build statuses to GitHub commits and Pull Requests after builds finish in Travis CI.

Last Update: About 26 days ago

Builds Creating New Tags Looping

Sep 18, 02:01 UTC Resolved - Given what we know about the impact of this problem, we have decided to downgrade this incident to a high-priority bug. Please see this issue for further details about the identified problem and how we plan to address it: https://github.com/travis-ci/travis-ci/issues/10127
Sep 17, 21:17 UTC Identified - We have identified that this issue occurs under specific conditions when a TRAVIS_TAG value is dynamically defined.
Sep 17, 18:46 UTC Investigating - We're currently investigating reports of builds which create and push new tags to GitHub starting new builds in an infinite loop. At the moment, this only appears to be affecting builds on travis-ci.com.
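
To make the loop condition concrete, here is a hypothetical .travis.yml fragment of the kind described in the linked issue; it is not taken from any affected repository. The build defines a TRAVIS_TAG value at run time and the deploy step publishes that tag back to GitHub, and the resulting tag push can trigger another tag build that repeats the same steps.

```yaml
# Hypothetical sketch (not from the incident): a dynamically defined
# TRAVIS_TAG plus a GitHub Releases deploy pushes a new tag each build,
# which can itself trigger the next build and loop indefinitely.
before_deploy:
  - export TRAVIS_TAG="nightly-$(date +%Y%m%d-%H%M)"
  - git tag "$TRAVIS_TAG"
deploy:
  provider: releases
  api_key: $GITHUB_OAUTH_TOKEN   # placeholder name; set in repo settings
  skip_cleanup: true
```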

Last Update: About 26 days ago

Job start delays

Sep 18, 19:54 UTC Resolved - This incident has been resolved.
Sep 18, 18:25 UTC Investigating - We are investigating reports of delays in jobs starting.

Last Update: About 26 days ago

Network slowness/timeouts for sudo-enabled builds

Sep 20, 18:40 UTC Resolved - This incident has been resolved.
Sep 20, 17:01 UTC Monitoring - Our upgrade is complete and things are stabilizing. The NATs are no longer saturated. We continue to monitor traffic levels.
Sep 20, 16:30 UTC Update - We are in the process of increasing the size of our NAT instances. We expect some minor network disruptions as routes are updated.
Sep 20, 15:28 UTC Identified - The issue has been identified: it is caused by saturation of the NAT in our sudo-enabled infrastructure. We are currently working on fixing that.
Sep 20, 15:15 UTC Investigating - We are receiving reports of builds with `sudo: required` experiencing slowness while executing commands that access the network, e.g. `docker pull`. We are currently looking into it and we'll keep you updated. Thank you for your patience!

Last Update: About 26 days ago

https://travis-ci.com is unreachable

Sep 26, 20:43 UTC Resolved - The issue has been resolved.
Sep 26, 20:34 UTC Identified - DNS resolution is unreliable at times. We are working with our service provider to resolve the issue. Other services remain available.
Sep 26, 20:19 UTC Investigating - We are investigating the reports of connectivity issues to https://travis-ci.com.

Last Update: About 26 days ago

Networking errors for OSX builds

Sep 27, 12:43 UTC Resolved - We have seen errors between 10:40 and 11:25 UTC for OSX jobs. One of the ways this manifested is the following error in the log: "An error occurred while generating the build script." Please restart any job whose result you are interested in. Thanks for understanding.

Last Update: About 26 days ago

Slow APT performance for sudo-enabled builds

Oct 9, 11:57 UTC Resolved - Network performance should be back to normal for all sudo-enabled builds. Thank you for your patience.
Oct 9, 10:36 UTC Identified - We are seeing slow network performance during the APT phase for sudo-enabled builds. We are in the process of switching to a different APT mirror.

Last Update: About 26 days ago

Slowness and/or timeout while downloading APT packages on the sudo-enabled infrastructure

Oct 9, 20:23 UTC Resolved - This incident has been resolved.
Oct 9, 14:44 UTC Update - We are continuing to monitor for any further issues.
Oct 9, 14:43 UTC Monitoring - Network connections to the apt repositories are fast and stable again. We continue to monitor the situation.
Oct 9, 13:42 UTC Investigating - We are still seeing builds slowed down and/or timing out when downloading APT packages via `apt-get` on the sudo-enabled infrastructure (i.e. with `sudo: required`). We are looking into it and we will give updates here as soon as we know more. Thank you for your patience!

Last Update: About 26 days ago

Some `sudo: required` jobs are stuck in an infinite build loop and never start

Oct 9, 21:39 UTC Resolved - The issue has been resolved.
Oct 9, 17:13 UTC Identified - We are seeing that jobs created between 16:03 UTC and 16:50 UTC with `sudo: required` are stuck in an infinite build loop and never start. These jobs are incorrectly configured and will not be able to run. This issue affects both .org and .com builds. For your affected jobs, push a new commit to the branch involved, or close and reopen the Pull Request, in order to configure a new build.

Last Update: About 26 days ago

Intermittent Network Connectivity Issue for GCE (sudo: required) Jobs

Oct 12, 01:41 UTC Resolved - We have confirmed that network connectivity is normal. Thank you.
Oct 12, 00:12 UTC Investigating - We are investigating reports of unreliable network connectivity for jobs on GCE (`sudo: required`).

Last Update: About 26 days ago

GCE Build Processing Outage

Jan 25, 21:27 UTC Resolved - Google has reported that this issue should be resolved and GCE builds should process normally.
Jan 25, 21:20 UTC Monitoring - Google reports that this issue should be resolved for the majority of users. We will continue to monitor our infrastructure as builds come back online until we receive word that this issue is resolved.
Jan 25, 20:37 UTC Identified - Google Cloud is currently experiencing an outage affecting 100% of users, including build VMs. We are monitoring the situation and will update as soon as Google Cloud is back online. You can check Google's status page for this incident here: https://status.cloud.google.com/incident/cloud-networking/18004

Last Update: A few months ago

macOS emergency maintenance

Jan 24, 04:23 UTC Resolved - We have postponed the maintenance to a later time.
Jan 24, 00:06 UTC Identified - Due to the high number of requeues and errors being reported, we are bringing down parts of our macOS infrastructure for emergency maintenance starting at 8:30 PM EST, with the intent to further stabilize macOS services. During this time, you may experience delays to your builds.

Last Update: A few months ago

Degraded performance for all Mac builds

Jan 19, 15:27 UTC Resolved - This incident has been resolved.
Jan 19, 14:52 UTC Monitoring - We've spent the last 24 hours doing maintenance on our Mac infrastructure with the help of our provider. Things now look stable on our end, although delays might be higher due to increased demand. Thank you again for your continued patience.
Jan 18, 21:05 UTC Identified - We continue to mitigate build delays for Mac builds.
Jan 18, 14:44 UTC Update - To help with the current situation, we are making the hard decision to cancel Mac build jobs older than 2018-01-18 00:00 UTC. We apologize for this disruption and thank you for your understanding.
Jan 18, 14:38 UTC Investigating - We are sorry to inform you that you'll likely experience delays with your Mac builds (both private and open source) until further notice. Our team is currently addressing this. We are sorry for the inconvenience in the meantime and thank you for your patience.

Last Update: A few months ago

Build delays in both container-based and macOS open source builds.

Jan 17, 20:41 UTC Resolved - Builds are starting and running normally at the moment, hence we are closing this incident. Thank you again for hanging in there with us, and please reach out to support@travis-ci.com if you run into anything.
Jan 17, 19:25 UTC Monitoring - We are now seeing open source builds on macOS and container-based Linux starting again. Please restart any builds that are currently "hanging". We will cancel any remaining stalled jobs later today. Thank you for your patience; we are continuing to monitor the situation.
Jan 17, 17:41 UTC Investigating - We’re currently investigating delays starting open source container-based and macOS builds.

Last Update: A few months ago

Linux and macOS open source builds (travis-ci.org) backlog

Jan 17, 11:18 UTC Resolved - Builds have stabilized on all infrastructures, and we will continue to monitor the situation. If you’re still experiencing any problems, please get in touch via email: support@travis-ci.com
Jan 17, 09:55 UTC Monitoring - We have identified the cause of the problem to be our RabbitMQ cluster, which needed to be upgraded and its queues purged. We are now back to processing jobs on all infrastructures and closely monitoring the situation. As a clean-up effort, we have to cancel all jobs that were previously stuck. Thank you for your patience and understanding.
Jan 17, 05:17 UTC Update - We continue working on the build backlog issue. We are canceling some builds to help our recovery efforts. Thank you for your continued patience and understanding.
Jan 17, 04:27 UTC Update - We are resuming open source builds on the macOS and Linux container-based (i.e. sudo: false) infrastructures.
Jan 17, 04:09 UTC Update - We are still working on getting our infrastructure back up. Open source builds on both Linux and macOS are still stopped for the time being.
Jan 17, 02:25 UTC Update - All open source builds are currently stopped for both our Linux and macOS infrastructures. We are currently working on getting them back on their feet. To help us do this, we are canceling currently queued jobs. We are terribly sorry for the disruption and we will continue to provide updates in a timely manner. Thank you for your enduring patience.
Jan 17, 00:26 UTC Update - We are still investigating the job backlog incident.
Jan 16, 21:17 UTC Investigating - We are continuing to work on resolving this issue, and are investigating other potential causes for the delay.
Jan 16, 19:18 UTC Identified - We have noticed an increased backlog that extends to both the 'sudo: required' and 'sudo: false' (container-based) infrastructures. We have identified the causes that contributed to the backlog, and are working on resolving them.
Jan 16, 18:46 UTC Update - We continue to investigate the increased backlog, as well as a growing backlog on our 'sudo: required' infrastructure.
Jan 16, 18:10 UTC Investigating - We are currently investigating an increased backlog on our open-source container-based infrastructure. During this time, you may experience some delays in builds starting. We will keep you posted as soon as we have an update.

Last Update: A few months ago

Build start and logs delayed for open-source builds

Jan 16, 15:37 UTC Resolved - Systems are operating normally.
Jan 16, 15:28 UTC Monitoring - Logs delivery has fully recovered. We are monitoring the situation.
Jan 16, 15:26 UTC Identified - An influx of builds clogged some pipes. Build backlog has recovered, log delivery is starting to recover.
Jan 16, 15:13 UTC Investigating - We are investigating elevated wait times for new builds and log delivery.

Last Update: A few months ago

Reduced macOS capacity

Jan 12, 15:29 UTC Resolved - macOS builds have stabilised. We will continue to monitor the situation.
Jan 12, 09:55 UTC Investigating - macOS builds are currently running at reduced capacity. We continue to work on a solution.

Last Update: A few months ago

Service availability

Jan 12, 09:56 UTC Resolved - We have fully recovered from yesterday’s GitHub outage. macOS builds are still operating at reduced capacity and we continue to work on a solution, which you can follow here: https://www.traviscistatus.com/incidents/6xb4kjczh4k6
Jan 11, 18:40 UTC Update - For the time being, we are turning our focus to fixing unstable services in our macOS infrastructure. During this time, Mac jobs will continue to run but at a reduced capacity.
Jan 11, 17:16 UTC Investigating - Following GitHub's outage earlier today, we are currently working on getting our system back on its feet. In the meantime, synchronizing with GitHub may not work and our platform may have delays in processing all queues. Sorry for the inconvenience and we will update as we know more.

Last Update: A few months ago

apt-get failures due to expired GPG key

Jan 11, 13:26 UTC Resolved - The issue has been resolved.Jan 11, 12:29 UTC Monitoring - A hotfix has been deployed to update the expired GPG key and `apt-get update` commands should now succeed. If you're still experiencing issues, please let us know through https://github.com/travis-ci/travis-ci/issues/9037 or support@travis-ci.com.Jan 11, 12:14 UTC Identified - `apt-get` commands during build time are failing due to an expired GPG key from the MongoDB apt repository and we are working on a hotfix to update the key. In the meantime, it is possible to work around the errors by manually updating the key in your .travis.yml as recommended at: https://github.com/travis-ci/travis-ci/issues/9037#issuecomment-356914965
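
For reference, the kind of workaround the linked issue describes is to refresh the repository key yourself before any `apt-get` commands run. A minimal sketch of what that could look like in `.travis.yml`, assuming a sudo-enabled build; the key ID below is a placeholder rather than the real one, which is listed in the issue:

```yaml
# .travis.yml sketch (sudo-enabled builds): refresh the expired repository key
# before apt-get runs. <KEY_ID> is a placeholder -- the actual ID is given in
# https://github.com/travis-ci/travis-ci/issues/9037#issuecomment-356914965
before_install:
  - sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys <KEY_ID>
  - sudo apt-get update
```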

Last Update: A few months ago

Container-based Linux NAT replacement

Jan 10, 05:44 UTC Completed - The scheduled maintenance has been completed.Jan 10, 05:36 UTC Verifying - Verification is currently underway for the maintenance items.Jan 10, 05:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jan 9, 23:57 UTC Scheduled - We have been receiving reports of intermittent network issues over the past week. A newly-provisioned NAT using the latest Amazon NAT AMI has not exhibited any of the problem behaviors. Based on this information, we have decided that the NAT hosts in our container-based Linux infrastructure are in need of a refresh. This maintenance will disrupt running and queued jobs for container-based Linux, and may introduce changes to the IP addresses used for internet connectivity.
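
Because maintenance like this can change the egress IP addresses that jobs use, projects that allow-list Travis CI addresses in a firewall sometimes log the current address from inside a job after such a change. A purely illustrative sketch; the `checkip.amazonaws.com` lookup is an assumption, not a Travis CI mechanism:

```yaml
# .travis.yml sketch: print the job's current egress IP (best effort) so that
# firewall allow-lists can be re-checked after NAT/infrastructure changes.
before_script:
  - curl -s https://checkip.amazonaws.com || true
```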

Last Update: A few months ago

Networking errors for container-based Linux

Jan 10, 05:45 UTC Resolved - This incident has been resolved.Jan 9, 23:46 UTC Identified - Based on the data we have gathered, we are planning to re-provision all NAT hosts for this infrastructure, which will happen within the next 12 hours during a time of decreased demand. More details will be posted in a separate scheduled maintenance.Jan 9, 20:19 UTC Update - We believe something is misbehaving in our NAT layer, and we are running more tests to determine next steps.Jan 9, 16:07 UTC Investigating - We are investigating increased networking errors on container-based Linux jobs.

Last Update: A few months ago

Emergency Maintenance for container-based Linux

Jan 9, 14:59 UTC Resolved - This incident has been resolved.Jan 9, 11:25 UTC Update - We are beginning to shift all container-based Linux jobs back onto our EC2 infrastructure. We continue to monitor the situation closely.Jan 8, 23:04 UTC Monitoring - We are monitoring as we begin routing 10% of jobs through the updated infrastructure.Jan 8, 20:49 UTC Identified - We are performing emergency maintenance of our container-based Linux infrastructure. All jobs targeting container-based Linux are being routed to our sudo-enabled Linux infrastructure during the maintenance action.

Last Update: A few months ago

Container-based Linux over capacity

Jan 3, 03:16 UTC Resolved - This incident has been resolved.Jan 3, 02:42 UTC Monitoring - A fix has been implemented and we are monitoring the results.Jan 2, 23:35 UTC Update - We are continuing to investigate this issue. Thank you for your patience!Jan 2, 22:19 UTC Investigating - Container-based Linux for public repositories is currently operating over capacity. We are investigating why the auto-scaling capacity is not keeping up with demand.

Last Update: A few months ago

Errors on sudo: required and sudo: false infrastructures

Jan 2, 04:28 UTC Resolved - The build capacity is back to normal.Jan 2, 01:43 UTC Identified - We have identified the source of the large build backlogs. We are working to eradicate the issue.Jan 1, 20:40 UTC Update - We are continuing to investigate the build issues.Jan 1, 19:34 UTC Investigating - We are seeing high numbers of requeues for GCE-based sudo: required jobs as well as high numbers of errors for EC2-based sudo: false jobs. We are looking into it.
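
For readers unfamiliar with the two fleets named here: at the time, a repository selected between them with the top-level `sudo:` key in `.travis.yml`. A minimal, illustrative sketch (the `language` value is arbitrary):

```yaml
# .travis.yml sketch: `sudo: false` routes jobs to the container-based,
# EC2-backed fleet referenced in this incident; `sudo: required` routes them
# to the full-VM, GCE-backed fleet instead.
language: python
sudo: false
```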

Last Update: A few months ago

Reduced capacity macOS

Dec 13, 06:06 UTC Resolved - Capacity is back up.Dec 13, 05:26 UTC Investigating - We are investigating a drop in capacity for macOS builds.

Last Update: A few months ago

Decreased capacity macOS builds

Dec 11, 10:15 UTC Resolved - macOS capacity has been restored, and the backlog is being processed at full speed.Dec 11, 08:38 UTC Investigating - We are investigating a reduction in build capacity for macOS builds.

Last Update: A few months ago

Reduced Mac capacity due to emergency maintenance

Dec 7, 16:11 UTC Resolved - Maintenance is complete, and builds are running at full capacity again. Thanks for your patience.Dec 7, 15:44 UTC Investigating - Due to a high number of requeues in our Mac infrastructure, we are going to reduce capacity while we investigate. Builds are still running, but at reduced capacity.

Last Update: A few months ago

macOS maintenance required

Dec 2, 02:04 UTC Resolved - We have resolved the issue with the orphaned VMs, and macOS builds are now back to normal. We thank you for your patience during this time.Dec 1, 23:39 UTC Update - We are no longer seeing delays on our .com infrastructure, but some still remain on .org. We are seeing positive performance changes, however, and will continue to work towards fully resolving it.Dec 1, 22:40 UTC Update - The process to clean up VMs is still ongoing, but is taking longer than expected. We will provide another update as soon as we have anything new to share. We thank you for your patience during this time.Dec 1, 21:50 UTC Update - We are still in the process of cleaning up orphaned VMs, the process is ongoing but slow. Due to this, we expect ongoing reduced capacity. We will provide an update in the next hour or so.Dec 1, 20:40 UTC Investigating - We are bringing down a part of our macOS infrastructure to perform unplanned maintenance. We will be clearing out VMs that are unintentionally left running and restarting them. This will help increase performance for all users. During this time, you may notice degraded macOS performance. We will provide an update in approximately one hour on the status of this maintenance.

Last Update: A few months ago

Chrome Builds Failing

Nov 30, 19:33 UTC Resolved - Workarounds and further updates may be found in https://github.com/travis-ci/travis-ci/issues/8836 Nov 30, 16:21 UTC Identified - We are currently seeing builds that use Chrome fail because of a spurious permission change. Workarounds and updates can be found at https://github.com/travis-ci/travis-ci/issues/8836. Thank you for your patience!
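
For context, builds "using Chrome" here are typically ones that request the Chrome addon in `.travis.yml`; the permission fix itself lives in the linked GitHub issue and is not reproduced in this status entry. An illustrative sketch of such a configuration, not of the workaround:

```yaml
# .travis.yml sketch: a build that requests the Chrome addon and was therefore
# affected by this incident. The permission workaround is documented in
# https://github.com/travis-ci/travis-ci/issues/8836.
addons:
  chrome: stable
```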

Last Update: A few months ago

Network connectivity issues for public and private builds

Nov 30, 13:41 UTC Resolved - This incident has been resolved.Nov 30, 11:03 UTC Monitoring - The network connectivity errors seem to be resolved as of approximately 10:25 UTC. We continue monitoring the situation and we recommend restarting any affected builds and contacting us at support@travis-ci.com if these are still failing because of download errors.Nov 30, 10:27 UTC Investigating - We’re currently investigating reports of network reachability issues that are causing some downloads to fail during build time. These reachability issues seem to be affecting downloads from Launchpad, Nodejs.org, Gradle or Jitpack. We will provide more updates as soon as we get them. Thanks for your understanding.

Last Update: A few months ago

Container-based Linux reduced capacity

Nov 28, 06:30 UTC Resolved - This incident has been resolved.Nov 28, 05:17 UTC Update - Container-based Linux capacity for private repositories is currently offline, with fresh capacity on the way within 20m.Nov 28, 04:14 UTC Update - We are beginning to roll out configuration changes, which may result in delays.Nov 27, 23:35 UTC Monitoring - Capacity is back up, but we are planning to keep this incident open while we continue to monitor the NATs.Nov 27, 23:02 UTC Identified - We are seeing reduced capacity due to NAT stability issues, specifically for container-based Linux public repositories. We are in the process of preparing some network configuration and alerting changes that we expect will dramatically reduce the likelihood of this problem occurring again.

Last Update: A few months ago

Container-based Linux reduced capacity

Nov 27, 19:22 UTC Resolved - Capacity is back online, and we are continuing to investigate contributing factors.Nov 27, 18:50 UTC Investigating - We are operating at reduced capacity on our infrastructure for container-based Linux. At this time, only public repositories are affected.

Last Update: A few months ago

Intermittent availability drops for sudo:false infrastructure

Nov 26, 02:32 UTC Resolved - The queue has returned to expected levels for this point of the week. We’ll continue to look into the causes for this situation.Nov 26, 01:57 UTC Monitoring - Availability seems to have stabilised, we will continue to monitor the situation.Nov 26, 01:13 UTC Investigating - We are currently experiencing intermittent drops in availability for our sudo: false infrastructure. At this time, we are investigating the factors that lead to this and will provide an update as soon as we are able to. During this time, there may be requeued builds that will take longer than usual to complete.

Last Update: A few months ago

Intermittent availability drops for sudo:false infrastructure

Nov 22, 08:24 UTC Resolved - The queue is back to normal and the intermittent drops in availability have stabilized. We will continue to keep an eye on this during the day.Nov 22, 01:02 UTC Monitoring - While we are still investigating this issue, it is now less prevalent. We are now monitoring the issue as we are still looking into the contributing factors that led to the issue appearing in the first place.Nov 21, 20:24 UTC Investigating - We are currently experiencing intermittent drops in availability for our sudo: false infrastructure. At this time, we are investigating the factors that lead to this and will provide an update as soon as we are able to. During this time, there may be requeued builds that will take longer than usual to complete.

Last Update: A few months ago

Build interruptions for builds using apt-get

Nov 16, 21:10 UTC Resolved - This incident has been resolved. Builds are processing normally.Nov 16, 20:49 UTC Monitoring - We have pushed a fix to production and are continuing to monitor to ensure this is resolved.Nov 16, 19:15 UTC Investigating - Builds relying on apt-get are currently failing due to a key change in the HHVM apt repository. We are currently working to implement a fix.
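
When a single third-party apt source breaks like this, a common stop-gap while waiting for the platform fix is to drop or re-key that one source before running `apt-get update`. A hedged sketch for a sudo-enabled build; the source-list filename pattern is a guess, not taken from this status entry:

```yaml
# .travis.yml sketch (sudo-enabled builds): remove the broken HHVM apt source
# before updating so unrelated package installs keep working. The filename
# pattern below is an assumption -- check /etc/apt/sources.list.d/ in your logs.
before_install:
  - sudo rm -f /etc/apt/sources.list.d/hhvm*.list
  - sudo apt-get update
```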

Last Update: A few months ago

Networking issues in our MacOS infrastructure are causing builds to fail

Nov 9, 16:11 UTC Resolved - We've brought OSX builds for public and private repositories back to full capacity.Nov 9, 14:13 UTC Monitoring - The elevated error rate of MacOS builds seems to have stabilized. We continue to monitor the situation.Nov 9, 13:06 UTC Investigating - The maintenance work performed earlier today seems to have left some unresolved networking problems. No OSX builds are currently passing. We are investigating the situation.

Last Update: A few months ago

Build delays on macOS infrastructure in travis-ci.com and travis-ci.org

Nov 9, 11:18 UTC Resolved - Everything is back to normal. The macOS backlog has cleared. Thanks for bearing with us.Nov 9, 10:38 UTC Monitoring - MacOS builds are running again at full capacity. We are processing our backlog; it will take a while before it is cleared. We'll continue to monitor things closely.Nov 9, 10:20 UTC Update - MacOS builds are still not running. We are working with our upstream provider to get things going again as fast as possible.Nov 9, 09:37 UTC Update - The maintenance work is taking longer than expected. For more information you can check the status page: http://status.macstadium.com/incidents/qbvgd7gc7dk2 Nov 9, 09:00 UTC Identified - Mac builds have been put on hold due to our service provider's maintenance.

Last Update: A few months ago

Reduced capacity of macOS builds

Nov 7, 21:41 UTC Resolved - We've brought OSX builds for public and private repositories back to full capacity. Backlog for private builds should be gone momentarily.Nov 7, 18:54 UTC Update - We're still working to stabilize resources, Mac builds will continue to run at a reduced capacity.Nov 7, 17:11 UTC Update - Reduced capacity on our macOS infrastructure continues. We are still working on a fix and will update soon.Nov 7, 16:12 UTC Identified - We've identified an issue with macOS builds that is currently causing reduced capacity. We're working to rectify it and will update shortly.

Last Update: A few months ago

Upstream Carrier Emergency Maintenance affecting macOS builds

Nov 3, 09:05 UTC Resolved - The maintenance is complete. Everything is operating normally.Nov 3, 08:55 UTC Update - Our capacity is back, and we are starting to process macOS builds again. We continue to monitor to make sure things are running smoothly.Nov 3, 08:35 UTC Monitoring - Upstream maintenance is preventing open-source and private macOS builds from starting. The maintenance window is 1 hour, and outages are expected to add up to 10 minutes total. We're monitoring things closely.

Last Update: A few months ago

Reduced performance on private sudo: enabled repositories

Nov 2, 19:20 UTC Resolved - We have increased our available resources, and all queues are back to normal.Nov 2, 18:57 UTC Investigating - We have noticed degraded performance on private sudo: enabled repositories, and we are working to identify and resolve this issue.

Last Update: A few months ago

Decreased capacity container jobs

Nov 2, 12:53 UTC Resolved - Everything is operating normally.Nov 2, 11:06 UTC Monitoring - Our workaround is in place and new capacity has come online successfully. You should no longer see delays at this point. We're monitoring the situation closely.Nov 2, 10:22 UTC Identified - We identified a problem with pulling Docker images from the registry. We're in the process of switching to another registry to mitigate the issue.Nov 2, 09:41 UTC Investigating - We are investigating issues with scaling up capacity for container builds. This will likely result in delays in processing jobs.

Last Update: A few months ago

GitHub User Sync Delays

Oct 31, 22:58 UTC Resolved - GitHub user sync tasks are processing normally.Oct 31, 20:41 UTC Investigating - We are currently investigating delays when syncing users from GitHub.

Last Update: A few months ago

sudo:required Capacity Issue on Linux

Oct 31, 19:49 UTC Resolved - We have recovered capacity to the previous levels, and have seen healthy operation for the past 12 hours. Thank you for your patience.Oct 31, 05:13 UTC Monitoring - At this stage, we have restored most of our capacity and we are monitoring closely for any further signs of capacity issues occurring, and are reaching out to our partners to help isolate and prevent further recurrences.Oct 30, 23:57 UTC Update - We are currently restoring services piece by piece, and are seeing positive results at this time. However, we are still monitoring closely.Oct 30, 22:35 UTC Update - Resources are being cleared up, and a tentative estimation is that they should be cleared up in approximately one hour. We will update as soon as we have any further updates on this issue.Oct 30, 22:08 UTC Update - We are currently working on clearing up available resources so that we can bring the services back online with full capacity, and are still investigating the factors that led to this issue.Oct 30, 21:39 UTC Update - We are currently restarting all of our GCE instances to restore them to full service. We are still investigating the contributing factors leading to this issue, and the next update will follow in 30 minutes.Oct 30, 21:07 UTC Update - Due to ongoing issues, we are performing emergency maintenance on our GCE (sudo: required) infrastructure. During this time, service will be significantly slower or not function. We will provide an update within 30 minutes.Oct 30, 20:41 UTC Investigating - We are seeing queues once more, and are continuing to investigate the issue. At this stage, we are seeing missing capacity and are working on restoring it.Oct 30, 19:49 UTC Monitoring - We are seeing that the queues and backlogs are clearing up, but we are still investigating the issue for further details and to confirm it has been fully resolved.Oct 30, 17:37 UTC Investigating - We are currently investigating a capacity issue on our open source infrastructure, related to a configuration change. This affects sudo:required (GCE) builds. We are deploying a new configuration that we hope will resolve this issue.

Last Update: A few months ago

Open-source builds erroring

Oct 31, 09:51 UTC Resolved - Our service has recovered, everything is operating normally. 💛Oct 31, 09:01 UTC Monitoring - The build script generator is up. Public builds should be working correctly now. If you have any builds that failed please try restarting them.Oct 31, 06:51 UTC Update - The build script generator is returning errors for public builds, which causes the builds to fail. We are working with our upstream provider to resolve this situation.Oct 31, 06:39 UTC Identified - Due to an upstream issue with our infrastructure provider, new builds on travis-ci.org are not starting.

Last Update: A few months ago

Capacity Issue on Open Source Linux

Oct 30, 17:37 UTC Investigating - We are currently investigating a capacity issue on our open source infrastructure, related to a configuration change. We are deploying a new configuration that we hope will resolve this issue.

Last Update: A few months ago

Container capacity lags behind demand

Oct 26, 16:48 UTC Resolved - Problems with fetching larger images from Docker Hub appear to have been resolved. Thank you for your patience.
Oct 26, 11:16 UTC Monitoring - We have successfully increased build processing capacity for private repositories (travis-ci.com), and have cleared the backlog of builds. We will be continuing to monitor the situation until the underlying issues with Docker Hub are resolved.
Oct 26, 10:00 UTC Identified - Problems pulling images from the Docker Hub are affecting our ability to increase capacity for .com build processing. We are in the process of implementing a workaround that will allow us to scale up and handle jobs that are currently queued.
Oct 26, 08:16 UTC Investigating - We are seeing issues while increasing capacity for container builds for .org and .com.

Last Update: A few months ago

Network issues downloading images from Docker Hub

Oct 26, 16:47 UTC Resolved - Problems with fetching larger images from Docker Hub appear to have been resolved. Thank you for your patience.
Oct 26, 16:16 UTC Monitoring - The upstream incident has been resolved. We are continuing to monitor the situation.
Oct 26, 13:49 UTC Identified - The issue has been acknowledged upstream: https://status.docker.com/pages/incident/533c6539221ae15e3f000031/59f1da512cd214649ebc33b0
Oct 26, 12:33 UTC Update - While problems pulling images from Docker Hub persist, we are now working directly with engineers at Docker to resolve this issue: https://github.com/docker/hub-feedback/issues/1225
Oct 26, 07:41 UTC Update - We are continuing to investigate this issue with our infrastructure provider. We don't have a timescale for resolution, but we will continue updating the status over the course of the day.
Oct 26, 01:20 UTC Update - We are still working to identify the contributing factors with help from our infrastructure provider. Download throughput through the CloudFront host dseasb33srnrn.cloudfront.net continues to drop rapidly to 0 soon after a transfer starts, sometimes temporarily recovering after several minutes. The impact continues to correlate strongly with total download size.
Oct 25, 17:59 UTC Update - We are continuing to investigate this issue with our infrastructure provider.
Oct 25, 15:29 UTC Update - We are continuing to work with our infrastructure provider to identify the cause of network problems affecting communication with Docker Hub.
Oct 25, 12:51 UTC Update - We've been in contact with our infrastructure provider to investigate network connectivity on `sudo: required` builds using the Docker service add-on.
Oct 25, 09:47 UTC Investigating - We're investigating reports of timeouts in builds while pulling images from Docker Hub.
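
While an upstream pull problem like this is being worked on, one common user-side mitigation is to wrap image pulls in Travis's built-in `travis_retry` helper so that transient timeouts are retried rather than failing the job immediately. A hedged sketch (the image name is a placeholder, not taken from the incident):

```yaml
# Sketch of a user-side mitigation for flaky image pulls: travis_retry
# re-runs the command a few times before the job fails.
sudo: required
services:
  - docker
before_install:
  - travis_retry docker pull ubuntu:16.04   # placeholder image
```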

Last Update: A few months ago

Delays processing build logs and build statuses for private builds in travis-ci.com

Oct 21, 08:25 UTC Resolved - This incident has been resolved.
Oct 20, 16:43 UTC Update - A fix has been implemented and we are monitoring the results.
Oct 20, 16:43 UTC Monitoring - Our service has recovered. Build starts and state updates should not experience any further delays. We're monitoring the situation closely.
Oct 20, 09:13 UTC Update - In order to mitigate contention in our systems, we have had to restart some workers. Your builds on the container infrastructure might have been restarted in the process. Thanks for understanding.
Oct 20, 08:14 UTC Investigating - We're currently investigating reports of delays processing build logs and build statuses for private repositories. We will provide updates as soon as we have them. Thank you for your patience.

Last Update: A few months ago

Build statuses aren't updating on GitHub

Oct 20, 16:11 UTC Resolved - We believe this is a different manifestation of the issue we are already tracking here: https://www.traviscistatus.com/incidents/d7y02z19k0y6. Hence, we are closing this. Sorry for the confusion.
Oct 20, 14:04 UTC Investigating - We've received multiple reports stating that build statuses aren't updating on GitHub. We are investigating why this is happening. Thank you for your patience.

Last Update: A few months ago

Builds using sudo apt-get update failing

Oct 18, 17:05 UTC Resolved - A fix has been deployed and the issue has been resolved.
Oct 18, 15:41 UTC Investigating - There is currently an issue acquiring packages when using a `sudo: required` build: `apt-get update` fails with a 401 error on Trusty and a 404 on Precise. We are currently working on preparing a different source for the package files to resolve this. Please see here for a workaround: https://github.com/travis-ci/travis-ci/issues/8607
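
The linked GitHub issue is the authoritative source for the workaround; purely as an illustration of the usual shape of such a workaround, a `before_install` step might disable the failing package source and retry the update. Everything below is hypothetical, including the source-list filename:

```yaml
# Illustrative only -- see the linked GitHub issue for the actual workaround.
before_install:
  # Placeholder filename; stands in for whichever source list was returning
  # 401/404 responses during this incident.
  - sudo rm -f /etc/apt/sources.list.d/broken-repo.list
  - travis_retry sudo apt-get update
```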

Last Update: A few months ago

macOS builds for private and public repositories undergoing Infrastructure Repairs

Oct 18, 00:35 UTC Resolved - This incident has been resolved.
Oct 17, 21:16 UTC Monitoring - We've made the necessary repairs and restored service to our macOS infrastructure. macOS builds for public and private repositories are running at full capacity and working through the backlog.
Oct 17, 20:44 UTC Identified - We're working on infrastructure repairs to our macOS infrastructure. macOS builds for public and private repositories continue to run at a reduced capacity. Thank you for your patience as we work to get things back up.
Oct 17, 17:39 UTC Investigating - Please stand by while we rebuild some VMs in our macOS infrastructure. Builds continue to process at a reduced capacity; however, we expect longer queue times during this period.

Last Update: A few months ago

GitHub commit delays

Oct 17, 18:28 UTC Resolved - The GitHub commit delays have been resolved on GitHub's side, and we are not seeing any issues on our end following this.
Oct 17, 16:36 UTC Monitoring - GitHub has noted that the queue backlog is recovering. We are monitoring for any issues on our end following this. Please see here for more details: https://status.github.com/messages
Oct 17, 15:12 UTC Investigating - GitHub is currently experiencing, and working to resolve, an issue causing a backlog in their commit processing. Builds and tests might be delayed until this issue is resolved. For more details, see GitHub's page here: https://status.github.com/messages

Last Update: A few months ago

Reduced capacity for public and private macOS builds

Oct 11, 15:15 UTC Resolved - We've completed the needed changes. At this time, macOS builds for private and public repositories are running at full capacity. Please email support@travis-ci.com if you have any questions.
Oct 11, 12:12 UTC Investigating - We are continuing needed configuration changes on our macOS infrastructure for both private and public builds. During this time, we will be reducing capacity while we complete the work. You may experience job requeues and longer wait times. We currently do not have an ETA for when we'll return to full capacity, but we're working to restore it as soon as possible. Thank you for your patience.

Last Update: A few months ago

Unplanned: Reduced capacity for public and private macOS builds

Oct 11, 00:59 UTC Resolved - This incident has been resolved.
Oct 10, 22:06 UTC Update - We've completed a portion of the needed changes today; we'll be performing more maintenance tomorrow between 12:00 and 16:00 UTC. At this time, macOS builds for private and public repositories are running at full capacity. Please email support@travis-ci.com if you have any questions.
Oct 10, 21:49 UTC Update - We're in the process of resuming full capacity for public and private builds.
Oct 10, 18:51 UTC Identified - We are taking an unplanned period of reduced capacity for both public and private macOS builds in order to implement some configuration changes needed to ensure our builds are running reliably. During this time you may also see your macOS jobs be requeued at certain times. We do not currently have an ETA for when we'll return to full capacity, but we're working to restore it as soon as possible. Thank you for your patience.

Last Update: A few months ago

Increased wait times for jobs and slow web UI on travis-ci.com

Oct 5, 15:26 UTC Resolved - Builds are running as expected and database performance has stabilised on travis-ci.com.
Oct 5, 14:52 UTC Monitoring - The backlog of builds on travis-ci.com has cleared and the API and web UI are back to normal operation. We continue to monitor the situation.
Oct 5, 14:04 UTC Update - Users are experiencing increased wait times for builds on travis-ci.com. We continue to investigate the issue and will post a further update within one hour.
Oct 5, 13:40 UTC Investigating - We are investigating high database load and query timeouts on travis-ci.com.

Last Update: A few months ago

Build statuses aren't updating on GitHub

Oct 4, 20:08 UTC Resolved - We haven't received other reports of this situation happening today, but we are still communicating with GitHub to try to understand why this happened yesterday. To better reflect the status of this incident, we will be closing it for now. Thank you for your patience.
Oct 4, 09:47 UTC Update - The number of errors posting GitHub status updates has decreased over the last few hours. We're closely investigating with GitHub to get to the bottom of this issue.
Oct 3, 20:05 UTC Update - We are still trying to understand why this is happening. We have reached out to GitHub to see if they can help us with troubleshooting this issue. Be assured that we will keep you posted on any new development on that front. Thank you for hanging in there with us.
Oct 3, 18:59 UTC Investigating - We are currently seeing build statuses not posted successfully on GitHub. We are trying to find the root cause of this issue. We'll update here when we know more. Thank you for your patience!

Last Update: A few months ago

Usage and backlog spike detected for `sudo: required` builds

Oct 3, 20:56 UTC Resolved - There is no longer any backlog for GCE users. We are still investigating requeuing internally and determining the root cause of this issue, but end users should no longer be affected.
Oct 3, 19:27 UTC Monitoring - The backlogs have been cleared, but we continue to keep an eye on the situation to ensure all continues as intended.
Oct 3, 18:25 UTC Identified - We continue to work to lower the queues and we are seeing a decrease in backlogs. Our .org backlog appears cleared, but we are still seeing some on .com. We will update once the issue is resolved or any new information is discovered.
Oct 3, 16:41 UTC Update - We've identified that internal requeues are the source of delays for sudo: required projects. We are continuing to investigate and resolve this issue and will continue posting updates here.
Oct 3, 16:21 UTC Investigating - We are experiencing an issue caused by a sudden spike of usage on GCP. We are investigating the details and will provide updates as soon as we have them. The previous issue regarding status update delays has been cleared.

Last Update: A few months ago

Job Status Update Delays

Oct 3, 16:20 UTC Resolved - We are no longer experiencing a delay in status updates.
Oct 3, 15:37 UTC Investigating - Job status processing (i.e., a job going from one stage to another, such as from "queued" to "running") is currently delayed, causing some builds to take longer to run and results to take longer to propagate. We have scaled up our processing capacity, the backlog has started to clear, and we are investigating what is going on. This affects both travis-ci.com and travis-ci.org.

Last Update: A few months ago

Deployment issues due to missing gem

Oct 2, 21:58 UTC Resolved - The issue has been resolved and the gem is no longer missing.
Oct 2, 21:14 UTC Identified - We are currently experiencing an issue with deployment due to a missing gem. The issue is currently under investigation and we will post updates as soon as they are available.

Last Update: A few months ago

Backlog on sudo-enabled Linux for public repositories

Sep 25, 20:31 UTC Resolved - This incident has been resolved.
Sep 25, 14:29 UTC Investigating - We are currently investigating a backlog on sudo-enabled Linux for public repositories.

Last Update: A few months ago

Private macOS build delays

Sep 22, 10:58 UTC Resolved - We have worked through the accumulated backlog for private macOS builds, which are now performing normally. We are continuing to monitor this closely. Thanks for your patience!
Sep 22, 09:42 UTC Monitoring - In an effort to stabilize our macOS infrastructure, yesterday at 17:00 UTC we also reduced the capacity available for macOS private repositories. We've now been able to address this and, since 08:40 UTC, private macOS builds are running as expected. Thank you for your understanding.

Last Update: A few months ago

Public macOS builds running at travis-ci.org experiencing a high rate of errors

Sep 22, 10:03 UTC Resolved - We've worked through the accumulated backlog for public macOS builds, which are now performing normally. We are continuing to monitor this closely. Thanks for your patience!
Sep 21, 09:00 UTC Update - We have worked through most of the backlog over the last few hours. We continue to investigate the root cause of the instability and we'll post an update as soon as we can.
Sep 20, 15:44 UTC Update - We've completed the first round of job cancellations and are seeing some improvements. We're currently evaluating other changes to help reduce the backlog and improve the wait time for jobs starting. We'll provide more updates as we learn more. Thank you for your patience.
Sep 20, 13:47 UTC Update - In an effort to clear up load on macOS servers, we are cancelling builds that have been waiting to start for more than 6 hours. This process will start at 16:15 CEST and is expected to take approximately 2 hours to complete. We will also be rolling back to a previous version of the worker to determine if recent changes have contributed to these issues.
Sep 20, 10:49 UTC Update - We continue to battle stability issues on the macOS platform, which are the root cause of severe wait times for public repository builds. We will keep you posted about any updates. We apologize for the delays this is causing.
Sep 20, 01:33 UTC Update - We have not made any significant gains in understanding the source of the instability, but changes to the distribution of available capacity are helping to reduce the impact of disconnections.
Sep 19, 17:41 UTC Update - We are continuing to investigate the heightened AMQP timeout errors. The backlog for public macOS jobs remains high.
Sep 19, 10:50 UTC Investigating - We're investigating a higher-than-normal rate of AMQP timeout errors affecting the throughput of our builds. This is affecting public macOS jobs most at the moment, as this is the highest-demand queue.

Last Update: A few months ago

Sudo-enabled private repo backlog

Sep 21, 15:52 UTC Resolved - This incident has been resolved.
Sep 21, 15:28 UTC Monitoring - We have incurred a backlog on our sudo-enabled private repository queue while performing a graceful restart. We expect the backlog to clear once the full capacity is back online after finishing the longest-running jobs.

Last Update: A few months ago

Build delays for Linux `sudo: required` Open Source projects running at travis-ci.org

Sep 18, 21:31 UTC Resolved - This incident has been resolved.
Sep 18, 20:56 UTC Monitoring - We can see that the backlog for sudo-enabled builds running on GCE has cleared. We are continuing to roll out the fix to our other infrastructures.
Sep 18, 20:03 UTC Update - We've identified that a new backend change was having an unexpected negative impact on communications with a message queue and was leading to an increased backlog. We've tested a configuration change that disables the new behaviour, and it's having the positive impact we expected, so we're continuing to roll it out to all parts of our infrastructure. We'll provide updates as this rollout progresses.
Sep 18, 17:17 UTC Investigating - We're investigating an anomaly in demand that is causing a backlog for public Linux `sudo: required` builds running at travis-ci.org.
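
As a hedged, user-side option during a backlog that is concentrated on the GCE `sudo: required` queue, jobs that do not actually need root access can opt into the container-based infrastructure, which is queued and scaled separately. A minimal sketch, with placeholder project details not taken from the incident:

```yaml
# Sketch: opting into the container-based infrastructure. Only suitable
# for jobs that do not need root access or a full VM.
language: node_js        # placeholder project language
node_js:
  - "8"                  # placeholder runtime version
sudo: false              # container-based infrastructure
```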

Last Update: A few months ago

[Scheduled] Database upgrade on travis-ci.org and travis-ci.com

Sep 17, 10:50 UTC Completed - The maintenance is complete, thanks for bearing with us! 💛
Sep 17, 10:02 UTC Update - Maintenance of travis-ci.com is complete and we have resumed processing jobs. We are now beginning maintenance on travis-ci.org.
Sep 17, 08:58 UTC In progress - We are beginning our scheduled maintenance on travis-ci.com and travis-ci.org.
Sep 11, 16:19 UTC Scheduled - We are performing scheduled maintenance on travis-ci.org and travis-ci.com on Sunday, 17 September 2017, from 09:00 UTC to 12:00 UTC. We expect the API and web interface to be unavailable for some of that time window on both platforms. Processing of public and private builds is also expected to be delayed.

Last Update: A few months ago

AWS S3 us-east-1 issues affecting build caching, artifacts, and build logs

Sep 14, 20:34 UTC Resolved - AWS S3 issues have been resolved and all services are operating normally.Sep 14, 19:08 UTC Identified - AWS is reporting issues with S3 in us-east-1: "11:58 AM PDT: We are investigating increased error rates for Amazon S3 requests in the US-EAST-1 Region." We can confirm we're seeing these issues as well. While S3 is unstable, you'll see errors from build caching/artifact activities and may have trouble accessing older build logs, which are stored in S3 long term. We will provide updates as we learn more.

Last Update: A few months ago
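
The build caching mentioned here is the kind configured through the `cache` key in `.travis.yml`; cache archives are fetched from and uploaded to S3, which is why S3 instability surfaces as caching errors in the job log. A minimal sketch, with placeholder paths:

```yaml
# Illustrative .travis.yml caching fragment; the cached paths are placeholder
# assumptions. During S3 instability, the cache fetch/store phases of a job
# are where the resulting warnings or errors appear.
language: node_js
cache:
  directories:
    - node_modules
    - "$HOME/.npm"
```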

Build delays for private builds/travis-ci.com caused by the previous API issue

Sep 14, 20:08 UTC Resolved - A tiny backlog remains for private Mac builds but it should be cleared in the next 30 minutes. Hence, we are resolving this incident for now. Thank you for your enduring patience!Sep 14, 18:54 UTC Update - The backlog for container-based (i.e. sudo: false) Linux builds has cleared. A backlog remains for Mac builds and we will update you when it's cleared. Thank you!Sep 14, 18:39 UTC Update - We are happy to report that the backlog has cleared for sudo-enabled Linux builds. Small backlogs remain for container-based Linux and Mac builds.Sep 14, 18:23 UTC Monitoring - We are sorry to inform you that the previous incident (https://www.traviscistatus.com/incidents/4gy46v0t3vrq), although it's fixed, resulted in a backlog for private builds. Hence you might experience some delays with your builds. Sorry for the inconvenience. We are monitoring things closely and we will update with the state of the backlog on our different infrastructures in a timely manner. Thank you for your patience!

Last Update: A few months ago

Travis API for .com Private Builds

Sep 14, 17:36 UTC Resolved - Travis API for .com private builds is back up. Our Redis service had to be migrated due to a hardware failure on AWS in the us-east-1 region.Sep 14, 17:21 UTC Identified - Our API for .com builds is down due to our Redis instances being unavailable. We are working with our third-party Redis hosting service.

Last Update: A few months ago

Delays in log processing on private builds

Sep 13, 19:06 UTC Resolved - Log processing has stabilized for private builds on travis-ci.com.Sep 13, 15:30 UTC Investigating - We are investigating delays in log processing for private builds on travis-ci.com. Log streaming is unaffected.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 11, 21:11 UTC Resolved - The public macOS build backlog has reached normal peak levels and things are remaining stable. We're closing the incident at this time. A postmortem blog post will be published in the next few days and we'll share it on Twitter when it's published. Thank you everyone for your patience and understanding during this extended incident.Sep 11, 19:52 UTC Update - The backlog has cleared for private builds. We are continuing to monitor the situation for public/open source builds. Thanks for hanging in there with us.Sep 11, 16:50 UTC Update - We're resuming full private macOS build capacity.Sep 11, 15:47 UTC Update - We're seeing some instability with some of the private macOS build capacity and so we're reducing capacity temporarily.Sep 11, 03:57 UTC Monitoring - We've resumed full build capacity for public builds. We will be monitoring things overnight and will provide further updates in the morning PDT. Thank you for your patience.Sep 11, 03:45 UTC Update - We've completed the first phase of our SAN cleanup. Things are stable and so we're working to resume full public macOS build capacity. We'll provide another update when that's complete.Sep 10, 19:56 UTC Update - We're now running with the previous capacity for public builds, which is still reduced from our "normal" capacity. We are continuing with SAN cleanup. We'll provide updates as things progress today. Thank you for your patience.Sep 10, 19:45 UTC Update - We temporarily have further reduced capacity for public builds, as we take some actions to continue with our SAN cleanup. We'll provide another update when that capacity has been restored.Sep 10, 16:45 UTC Update - We've processed a backlog of approximately 9,600 macOS jobs for public repositories since re-enabling public macOS builds at 07:00 PDT yesterday. As we're still at reduced capacity and working on cleaning the SAN, we still have a backlog of ~150-200 jobs and continue to actively process them. We'll provide updates as things progress today. Thank you for your patience.Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT. Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped; thank you for your patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to resolve unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago
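
The affected macOS jobs are those that target the `osx` environment and an Xcode image in `.travis.yml`. A minimal sketch, where the image tag and build command are illustrative assumptions (the incident above mentions Xcode 8.x images):

```yaml
# Illustrative .travis.yml fragment for a macOS job; the osx_image tag and the
# xcodebuild invocation are assumed examples, not taken from the incident.
language: objective-c
os: osx
osx_image: xcode8.3
script: xcodebuild -workspace MyApp.xcworkspace -scheme MyApp test
```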

Increased error rates on macOS builds

Sep 11, 19:52 UTC Update - The backlog has cleared for private builds. We are continuing to monitor the situation for public/open source builds. Thanks for hanging in there with us.Sep 11, 16:50 UTC Update - We're resuming full private macOS build capacity.Sep 11, 15:47 UTC Update - We're seeing some instability with some of the private macOS build capacity and so we're reducing capacity temporarily.Sep 11, 03:57 UTC Monitoring - We've resumed full build capacity for public builds. We will be monitoring things overnight and will provide further updates in the morning PDT . Thank you for your patience.Sep 11, 03:45 UTC Update - We've completed the first phase of our SAN cleanup. Things are stable and so we're working to resume full public macOS build capacity. We'll provide another update when that's complete.Sep 10, 19:56 UTC Update - We're now running with the previous capacity for public builds, which is still reduced from our "normal" capacity. We are continuing with SAN cleanup. We'll provide updates as things progress today. Thank you for your patience.Sep 10, 19:45 UTC Update - We temporarily have additional reduced capacity for public builds, as we take some actions to continue with our SAN cleanup. We'll provide another update when that capacity has been restored.Sep 10, 16:45 UTC Update - We've processed a backlog of approximately 9,600 macOS jobs for public repositories since re-enabling public macOS builds at 07:00 PDT yesterday. As we're still at reduced capacity and working on cleaning the SAN, we still have a backlog of ~150-200 jobs and continue to actively process them. We'll provide updates as things progress today. Thank you for your patience.Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. 
Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 11, 16:50 UTC Update - We're resuming full private macOS build capacity.Sep 11, 15:47 UTC Update - We're seeing some instability with some of the private macOS build capacity and so we're reducing capacity temporarily.Sep 11, 03:57 UTC Monitoring - We've resumed full build capacity for public builds. We will be monitoring things overnight and will provide further updates in the morning PDT . Thank you for your patience.Sep 11, 03:45 UTC Update - We've completed the first phase of our SAN cleanup. Things are stable and so we're working to resume full public macOS build capacity. We'll provide another update when that's complete.Sep 10, 19:56 UTC Update - We're now running with the previous capacity for public builds, which is still reduced from our "normal" capacity. We are continuing with SAN cleanup. We'll provide updates as things progress today. Thank you for your patience.Sep 10, 19:45 UTC Update - We temporarily have additional reduced capacity for public builds, as we take some actions to continue with our SAN cleanup. We'll provide another update when that capacity has been restored.Sep 10, 16:45 UTC Update - We've processed a backlog of approximately 9,600 macOS jobs for public repositories since re-enabling public macOS builds at 07:00 PDT yesterday. As we're still at reduced capacity and working on cleaning the SAN, we still have a backlog of ~150-200 jobs and continue to actively process them. We'll provide updates as things progress today. Thank you for your patience.Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. 
Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

[Scheduled] Database upgrade on travis-ci.org and travis-ci.com

Sep 11, 16:19 UTC Scheduled - We are performing some scheduled maintenance on travis-ci.org and travis-ci.com on Sunday, 18 September, 2017 From 09.00 AM UTC to 12:00 PM UTC . We expect the API and web interface to be unavailable for some of that time window on both platforms. Processing of public and private builds is also expected to be delayed.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 11, 15:47 UTC Update - We're seeing some instability with some of the private macOS build capacity and so we're reducing capacity temporarily.Sep 11, 03:57 UTC Monitoring - We've resumed full build capacity for public builds. We will be monitoring things overnight and will provide further updates in the morning PDT . Thank you for your patience.Sep 11, 03:45 UTC Update - We've completed the first phase of our SAN cleanup. Things are stable and so we're working to resume full public macOS build capacity. We'll provide another update when that's complete.Sep 10, 19:56 UTC Update - We're now running with the previous capacity for public builds, which is still reduced from our "normal" capacity. We are continuing with SAN cleanup. We'll provide updates as things progress today. Thank you for your patience.Sep 10, 19:45 UTC Update - We temporarily have additional reduced capacity for public builds, as we take some actions to continue with our SAN cleanup. We'll provide another update when that capacity has been restored.Sep 10, 16:45 UTC Update - We've processed a backlog of approximately 9,600 macOS jobs for public repositories since re-enabling public macOS builds at 07:00 PDT yesterday. As we're still at reduced capacity and working on cleaning the SAN, we still have a backlog of ~150-200 jobs and continue to actively process them. We'll provide updates as things progress today. Thank you for your patience.Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. 
Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 11, 03:57 UTC Monitoring - We've resumed full build capacity for public builds. We will be monitoring things overnight and will provide further updates in the morning PDT . Thank you for your patience.Sep 11, 03:45 UTC Update - We've completed the first phase of our SAN cleanup. Things are stable and so we're working to resume full public macOS build capacity. We'll provide another update when that's complete.Sep 10, 19:56 UTC Update - We're now running with the previous capacity for public builds, which is still reduced from our "normal" capacity. We are continuing with SAN cleanup. We'll provide updates as things progress today. Thank you for your patience.Sep 10, 19:45 UTC Update - We temporarily have additional reduced capacity for public builds, as we take some actions to continue with our SAN cleanup. We'll provide another update when that capacity has been restored.Sep 10, 16:45 UTC Update - We've processed a backlog of approximately 9,600 macOS jobs for public repositories since re-enabling public macOS builds at 07:00 PDT yesterday. As we're still at reduced capacity and working on cleaning the SAN, we still have a backlog of ~150-200 jobs and continue to actively process them. We'll provide updates as things progress today. Thank you for your patience.Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. 
All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 11, 03:45 UTC Update - We've completed the first phase of our SAN cleanup. Things are stable and so we're working to resume full public macOS build capacity. We'll provide another update when that's complete.Sep 10, 19:56 UTC Update - We're now running with the previous capacity for public builds, which is still reduced from our "normal" capacity. We are continuing with SAN cleanup. We'll provide updates as things progress today. Thank you for your patience.Sep 10, 19:45 UTC Update - We temporarily have additional reduced capacity for public builds, as we take some actions to continue with our SAN cleanup. We'll provide another update when that capacity has been restored.Sep 10, 16:45 UTC Update - We've processed a backlog of approximately 9,600 macOS jobs for public repositories since re-enabling public macOS builds at 07:00 PDT yesterday. As we're still at reduced capacity and working on cleaning the SAN, we still have a backlog of ~150-200 jobs and continue to actively process them. We'll provide updates as things progress today. Thank you for your patience.Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. 
We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 10, 19:56 UTC Update - We're now running with the previous capacity for public builds, which is still reduced from our "normal" capacity. We are continuing with SAN cleanup. We'll provide updates as things progress today. Thank you for your patience.Sep 10, 19:45 UTC Update - We temporarily have additional reduced capacity for public builds, as we take some actions to continue with our SAN cleanup. We'll provide another update when that capacity has been restored.Sep 10, 16:45 UTC Update - We've processed a backlog of approximately 9,600 macOS jobs for public repositories since re-enabling public macOS builds at 07:00 PDT yesterday. As we're still at reduced capacity and working on cleaning the SAN, we still have a backlog of ~150-200 jobs and continue to actively process them. We'll provide updates as things progress today. Thank you for your patience.Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. 
We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 10, 19:45 UTC Update - We temporarily have additional reduced capacity for public builds, as we take some actions to continue with our SAN cleanup. We'll provide another update when that capacity has been restored.Sep 10, 16:45 UTC Update - We've processed a backlog of approximately 9,600 macOS jobs for public repositories since re-enabling public macOS builds at 07:00 PDT yesterday. As we're still at reduced capacity and working on cleaning the SAN, we still have a backlog of ~150-200 jobs and continue to actively process them. We'll provide updates as things progress today. Thank you for your patience.Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. 
We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 10, 16:45 UTC Update - We've processed a backlog of approximately 9,600 macOS jobs for public repositories since re-enabling public macOS builds at 07:00 PDT yesterday. As we're still at reduced capacity and working on cleaning the SAN, we still have a backlog of ~150-200 jobs and continue to actively process them. We'll provide updates as things progress today. Thank you for your patience.Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. 
This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 9, 15:11 UTC Update - We're continuing to process the public backlog while running SAN cleanup. We may still need to reduce or suspend public builds later in the weekend, depending on SAN progress. Thank you for your patience.Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 9, 14:32 UTC Update - Capacity for macOS public repositories has been back online for ~1 hr. We're bumping additional capacity to work through the backlog.Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT . Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT . We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped thank you for patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repositories builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating mac OS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance, we'll continue posting updates as we work to get a more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for you patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 9, 13:04 UTC Update - The backlog for private repository builds has been clear for ~4h. We are planning to bring partial capacity for public repositories back online shortly.Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT. Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped. Thank you for your patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repository builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 9, 03:13 UTC Update - We've resumed running private builds at this time. We'll provide further updates on the overall progress tomorrow morning PDT. Thank you for your patience.Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped. Thank you for your patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repository builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 9, 01:28 UTC Update - We ran into an issue with booting Xcode 8.x images, so all builds are suspended again. We'll update when private builds are running.Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped. Thank you for your patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repository builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 9, 01:04 UTC Update - In order to help things become stable and reliable going forward, we're undertaking intense cleanup of our SAN filesystem. This cleanup is likely to take all weekend. Because of this, we're only able to resume a portion of our capacity for private builds and will not be resuming shared public builds yet. We do not currently have an ETA for when we'll be able to resume shared public builds. We will provide our next update in the morning PDT. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped. Thank you for your patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repository builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 8, 21:29 UTC Update - We're working on stabilization cleanup for our SAN storage. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped. Thank you for your patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repository builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 8, 19:52 UTC Update - We're continuing to work on getting things into a stable state where we can potentially start running builds. At the moment we do not have an ETA for when we will resume builds. We are very sorry for the delays and will update this incident when we know more. Thank you for your patience.Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped. Thank you for your patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repository builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 8, 16:52 UTC Update - We've rebooted our vCenters and continue to work on stabilizing things. All macOS builds remain stopped. Thank you for your patience.Sep 8, 15:01 UTC Update - macOS jobs for public and private repository builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 8, 15:01 UTC Update - macOS jobs for public and private repository builds are stopped. We're currently working with our infrastructure provider to reboot one of our vCenter instances to work out unresponsive SAN issues.Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 8, 12:34 UTC Update - We are stopping all macOS jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 8, 12:34 UTC Update - We are stopping all jobs because we have run out of space on our datacenter's SAN.Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 8, 11:07 UTC Identified - We've identified an issue with some of our Xcode image hosts, causing macOS requeues on both public and private repositories. We're working together with our upstream provider to sort this out while we continue investigating macOS build timeouts.Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 8, 10:32 UTC Investigating - We continue investigating macOS requeues and build timeouts for both public and private repositories. This seems to be related to SAN performance; we'll continue posting updates as we work toward more stable performance.Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 8, 08:56 UTC Monitoring - The stability of our macOS builds seems to have improved. We will continue to monitor the rate of errored builds. Thank you for your patience.Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Build delays due to GitHub outage

Sep 8, 02:27 UTC Resolved - We have fully recovered from the GitHub outage.Sep 7, 23:57 UTC Monitoring - We have mostly recovered from the GitHub outage, except for a draining user sync queue.Sep 7, 21:23 UTC Identified - We are currently experiencing delayed builds due to GitHub delivering atypically few webhooks.

Last Update: A few months ago

Build delays due to GitHub outage

Sep 7, 23:57 UTC Monitoring - We have mostly recovered from the GitHub outage, except for a draining user sync queue.Sep 7, 21:23 UTC Identified - We are currently experiencing delayed builds due to GitHub delivering atypically few webhooks.

Last Update: A few months ago

Build delays due to GitHub outage

Sep 7, 21:23 UTC Identified - We are currently experiencing delayed builds due to GitHub delivering atypically few webhooks.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 7, 09:36 UTC Investigating - Repositories running on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Increased error rates on macOS builds

Sep 7, 09:36 UTC Investigating - Users on travis-ci.org and travis-ci.com are experiencing an increase in errored builds. We are investigating and will update as soon as we can.

Last Update: A few months ago

Job requeues on macOS builds

Sep 7, 03:01 UTC Resolved - We've gone from approximately 2800 to 1400 builds in the public build backlog, so we're expecting things to clear up by the morning. If things do not and we need to take further action, we will open a new incident.Sep 6, 22:59 UTC Monitoring - We've nearly caught up with the backlog for private builds. The backlog for public builds will likely clear overnight and so we'll continue to monitor things into tomorrow and re-evaluate if we feel we need to cancel any builds tomorrow. Thank you for your patience again.Sep 6, 22:01 UTC Update - We're still recovering things. We're catching up on the private repository backlog but still have a very large public repository backlog. We'll provide more updates as things develop. Thank you for your patience.Sep 6, 19:36 UTC Update - We've been able to stabilize things enough that we're bringing on some more capacity. We're still working on stabilizing everything and will provide updates as things develop. Thank you for your patience.Sep 6, 17:45 UTC Update - The previous message about Xcode images being unavailable was incorrect and has been removed.Sep 6, 17:44 UTC Identified - The host that owns several of our Xcode images has gone offline. We will be shutting down 50% of our capacity to perform emergency maintenance. Expecting longer wait times for OSX/MacOS builds. Sorry for the inconvenience.Sep 6, 15:15 UTC Update - We continue to work towards clearing the macOS backlog and stabilising our network.Sep 6, 10:45 UTC Update - In addition to longer boot times, users are experiencing an increase in errored builds due to log timeouts when running macOS builds. We are investigating networking issues and will update again as soon as we know more.Sep 6, 09:00 UTC Investigating - We’re investigating an increased rate of internal restarts of macOS builds, resulting in longer boot times for both public and private repositories. This has resulted in an increased backlog for macOS builds at travis-ci.org

Last Update: A few months ago

Job requeues on macOS builds

Sep 6, 22:59 UTC Monitoring - We've nearly caught up with the backlog for private builds. The backlog for public builds will likely clear overnight and so we'll continue to monitor things into tomorrow and re-evaluate if we feel we need to cancel any builds tomorrow. Thank you for your patience again.Sep 6, 22:01 UTC Update - We're still recovering things. We're catching up on the private repository backlog but still have a very large public repository backlog. We'll provide more updates as things develop. Thank you for your patience.Sep 6, 19:36 UTC Update - We've been able to stabilize things enough that we're bringing on some more capacity. We're still working on stabilizing everything and will provide updates as things develop. Thank you for your patience.Sep 6, 17:45 UTC Update - The previous message about Xcode images being unavailable was incorrect and has been removed.Sep 6, 17:44 UTC Identified - The host that owns several of our Xcode images has gone offline. We will be shutting down 50% of our capacity to perform emergency maintenance. Expecting longer wait times for OSX/MacOS builds. Sorry for the inconvenience.Sep 6, 15:15 UTC Update - We continue to work towards clearing the macOS backlog and stabilising our network.Sep 6, 10:45 UTC Update - In addition to longer boot times, users are experiencing an increase in errored builds due to log timeouts when running macOS builds. We are investigating networking issues and will update again as soon as we know more.Sep 6, 09:00 UTC Investigating - We’re investigating an increased rate of internal restarts of macOS builds, resulting in longer boot times for both public and private repositories. This has resulted in an increased backlog for macOS builds at travis-ci.org

Last Update: A few months ago

Job requeues on macOS builds

Sep 6, 22:01 UTC Update - We're still recovering things. We're catching up on the private repository backlog but still have a very large public repository backlog. We'll provide more updates as things develop. Thank you for your patience.Sep 6, 19:36 UTC Update - We've been able to stabilize things enough that we're bringing on some more capacity. We're still working on stabilizing everything and will provide updates as things develop. Thank you for your patience.Sep 6, 17:45 UTC Update - The previous message about Xcode images being unavailable was incorrect and has been removed.Sep 6, 17:44 UTC Identified - The host that owns several of our Xcode images has gone offline. We will be shutting down 50% of our capacity to perform emergency maintenance. Expecting longer wait times for OSX/MacOS builds. Sorry for the inconvenience.Sep 6, 15:15 UTC Update - We continue to work towards clearing the macOS backlog and stabilising our network.Sep 6, 10:45 UTC Update - In addition to longer boot times, users are experiencing an increase in errored builds due to log timeouts when running macOS builds. We are investigating networking issues and will update again as soon as we know more.Sep 6, 09:00 UTC Investigating - We’re investigating an increased rate of internal restarts of macOS builds, resulting in longer boot times for both public and private repositories. This has resulted in an increased backlog for macOS builds at travis-ci.org

Last Update: A few months ago

Job requeues on macOS builds

Sep 6, 19:36 UTC Update - We've been able to stabilize things enough that we're bringing on some more capacity. We're still working on stabilizing everything and will provide updates as things develop. Thank you for your patience.Sep 6, 17:45 UTC Update - The previous message about Xcode images being unavailable was incorrect and has been removed.Sep 6, 17:44 UTC Identified - The host that owns several of our Xcode images has gone offline. We will be shutting down 50% of our capacity to perform emergency maintenance. Expecting longer wait times for OSX/MacOS builds. Sorry for the inconvenience.Sep 6, 15:15 UTC Update - We continue to work towards clearing the macOS backlog and stabilising our network.Sep 6, 10:45 UTC Update - In addition to longer boot times, users are experiencing an increase in errored builds due to log timeouts when running macOS builds. We are investigating networking issues and will update again as soon as we know more.Sep 6, 09:00 UTC Investigating - We’re investigating an increased rate of internal restarts of macOS builds, resulting in longer boot times for both public and private repositories. This has resulted in an increased backlog for macOS builds at travis-ci.org

Last Update: A few months ago

Job requeues on macOS builds

Sep 6, 17:45 UTC Update - Xcode 8.2, 8.3, and 9 images are completely unavailable during this partial outage for OSX/MacOS builds.Sep 6, 17:44 UTC Identified - The host that owns several of our Xcode images has gone offline. We will be shutting down 50% of our capacity to perform emergency maintenance. Expecting longer wait times for OSX/MacOS builds. Sorry for the inconvenience.Sep 6, 15:15 UTC Update - We continue to work towards clearing the macOS backlog and stabilising our network.Sep 6, 10:45 UTC Update - In addition to longer boot times, users are experiencing an increase in errored builds due to log timeouts when running macOS builds. We are investigating networking issues and will update again as soon as we know more.Sep 6, 09:00 UTC Investigating - We’re investigating an increased rate of internal restarts of macOS builds, resulting in longer boot times for both public and private repositories. This has resulted in an increased backlog for macOS builds at travis-ci.org

Last Update: A few months ago

Job requeues on macOS builds

Sep 6, 17:44 UTC Identified - The host that owns several of our Xcode images has gone offline. We will be shutting down 50% of our capacity to perform emergency maintenance. Expecting longer wait times for OSX/MacOS builds. Sorry for the inconvenience.Sep 6, 15:15 UTC Update - We continue to work towards clearing the macOS backlog and stabilising our network.Sep 6, 10:45 UTC Update - In addition to longer boot times, users are experiencing an increase in errored builds due to log timeouts when running macOS builds. We are investigating networking issues and will update again as soon as we know more.Sep 6, 09:00 UTC Investigating - We’re investigating an increased rate of internal restarts of macOS builds, resulting in longer boot times for both public and private repositories. This has resulted in an increased backlog for macOS builds at travis-ci.org

Last Update: A few months ago

Job requeues on macOS builds

Sep 6, 15:15 UTC Update - We continue to work towards clearing the macOS backlog and stabilising our network.Sep 6, 10:45 UTC Update - In addition to longer boot times, users are experiencing an increase in errored builds due to log timeouts when running macOS builds. We are investigating networking issues and will update again as soon as we know more.Sep 6, 09:00 UTC Investigating - We’re investigating an increased rate of internal restarts of macOS builds, resulting in longer boot times for both public and private repositories. This has resulted in an increased backlog for macOS builds at travis-ci.org

Last Update: A few months ago

Job requeues on macOS builds

Sep 6, 10:45 UTC Update - In addition to longer boot times, users are experiencing an increase in errored builds due to log timeouts when running macOS builds. We are investigating networking issues and will update again as soon as we know more.Sep 6, 09:00 UTC Investigating - We’re investigating an increased rate of internal restarts of macOS builds, resulting in longer boot times for both public and private repositories. This has resulted in an increased backlog for macOS builds at travis-ci.org

Last Update: A few months ago

Job requeues on macOS builds

Sep 6, 09:00 UTC Investigating - We’re investigating an increased rate of internal restarts of macOS builds, resulting in longer boot times for both public and private repositories. This has resulted in an increased backlog for macOS builds at travis-ci.org

Last Update: A few months ago

Reduced OSX capacity on .com

Aug 23, 21:11 UTC Resolved - The backlog for OSX private .com builds has cleared.Aug 23, 15:59 UTC Monitoring - OSX infrastructure is operating at full capacity. Jobs are processing normally. Thank you for your patience as we clear the backlog.Aug 23, 15:21 UTC Update - We discovered several of our DHCP hosts for OSX builds were down. This has had cascading effects on resources, ultimately requiring us to stop jobs while we work to stabilize our infrastructure. We will bring hosts back up one-by-one and start up job VMs momentarily.Aug 23, 13:21 UTC Identified - To address the infrastructure instability and reduced capacity issues on OSX builds, we need to perform emergency maintenance, bringing all running OSX builds down. The jobs will be restarted as soon as there is a slot free.Aug 23, 09:12 UTC Investigating - We are seeing decreased capacity for OSX builds on .com.

Last Update: A few months ago

Reduced OSX capacity on .com

Aug 23, 15:59 UTC Monitoring - OSX infrastructure is operating at full capacity. Jobs are processing normally. Thank you for your patience as we clear the backlog.Aug 23, 15:21 UTC Update - We discovered several of our DHCP hosts for OSX builds were down. This has had cascading effects on resources, ultimately requiring us to stop jobs while we work to stabilize our infrastructure. We will bring hosts back up one-by-one and start up job VMs momentarily.Aug 23, 13:21 UTC Identified - To address the infrastructure instability and reduced capacity issues on OSX builds, we need to perform emergency maintenance, bringing all running OSX builds down. The jobs will be restarted as soon as there is a slot free.Aug 23, 09:12 UTC Investigating - We are seeing decreased capacity for OSX builds on .com.

Last Update: A few months ago

Reduced OSX capacity on .com

Aug 23, 15:21 UTC Update - We discovered several of our DHCP hosts for OSX builds were down. This has had cascading effects on resources, ultimately requiring us to stop jobs while we work to stabilize our infrastructure. We will bring hosts back up one-by-one and start up job VMs momentarily.Aug 23, 13:21 UTC Identified - To address the infrastructure instability and reduced capacity issues on OSX builds, we need to perform emergency maintenance, bringing all running OSX builds down. The jobs will be restarted as soon as there is a slot free.Aug 23, 09:12 UTC Investigating - We are seeing decreased capacity for OSX builds on .com.

Last Update: A few months ago

Reduced OSX capacity on .com

Aug 23, 13:21 UTC Identified - To address the infrastructure instability and reduced capacity issues on OSX builds, we need to perform emergency maintenance, bringing all running OSX builds down. The jobs will be restarted as soon as there is a slot free.Aug 23, 09:12 UTC Investigating - We are seeing decreased capacity for OSX builds on .com.

Last Update: A few months ago

Reduced OSX capacity on .com

Aug 23, 09:12 UTC Investigating - We are seeing decreased capacity for OSX builds on .com.

Last Update: A few months ago

Build requests and sign in via GitHub slow or unavailable

Aug 21, 17:14 UTC Resolved - There is a backlog remaining for macOS public repositories that is typical for this time of day/week. Thanks for your patience!Aug 21, 15:29 UTC Monitoring - Build queues on our docker infrastructure have cleared. Mac queues continue to experience delays.Aug 21, 15:03 UTC Update - We have finished processing the backlog of build requests and are in the process of scaling out extra capacity to deal with the influx of builds.Aug 21, 14:31 UTC Update - GitHub have identified and addressed the source of connectivity issues. We are beginning to process the backlog of build requests.Aug 21, 13:25 UTC Identified - We are seeing increased response times on GitHub’s API in several components, causing sign-in on Travis CI to fail, build requests to be delayed, and account syncing to be slow.

Last Update: A few months ago

Build requests and sign in via GitHub slow or unavailable

Aug 21, 15:29 UTC Monitoring - Build queues on our docker infrastructure have cleared. Mac queues continue to experience delays.Aug 21, 15:03 UTC Update - We have finished processing the backlog of build requests and are in the process of scaling out extra capacity to deal with the influx of builds.Aug 21, 14:31 UTC Update - GitHub have identified and addressed the source of connectivity issues. We are beginning to process the backlog of build requests.Aug 21, 13:25 UTC Identified - We are seeing increased response times on GitHub’s API in several components, causing sign-in on Travis CI to fail, build requests to be delayed, and account syncing to be slow.

Last Update: A few months ago

Build requests and sign in via GitHub slow or unavailable

Aug 21, 15:03 UTC Update - We have finished processing the backlog of build requests and are in the process of scaling out extra capacity to deal with the influx of builds.Aug 21, 14:31 UTC Update - GitHub have identified and addressed the source of connectivity issues. We are beginning to process the backlog of build requests.Aug 21, 13:25 UTC Identified - We are seeing increased response times on GitHub’s API in several components, causing sign-in on Travis CI to fail, build requests to be delayed, and account syncing to be slow.

Last Update: A few months ago

Build requests and sign in via GitHub slow or unavailable

Aug 21, 14:31 UTC Update - GitHub have identified and addressed the source of connectivity issues. We are beginning to process the backlog of build requests.Aug 21, 13:25 UTC Identified - We are seeing increased response times on GitHub’s API in several components, causing sign-in on Travis CI to fail, build requests to be delayed, and account syncing to be slow.

Last Update: A few months ago

Build requests and sign in via GitHub slow or unavailable

Aug 21, 13:25 UTC Identified - We are seeing increased response times on GitHub’s API in several components, causing sign-in on Travis CI to fail, build requests to be delayed, and account syncing to be slow.

Last Update: A few months ago

[Planned] macOS Infrastructure Network Maintenance

Aug 17, 01:48 UTC Completed - Maintenance has been completed.Aug 17, 01:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Aug 11, 17:32 UTC Scheduled - We'll be implementing and testing some changes to a portion of our macOS networking infrastructure, to help improve build performance. During this maintenance, users will experience reduced build capacity for both public and private repository builds. We do not expect to need to take things entirely offline. If you have any questions, please email support@travis-ci.com

Last Update: A few months ago

[Planned] macOS Infrastructure Network Maintenance

Aug 17, 01:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Aug 11, 17:32 UTC Scheduled - We'll be implementing and testing some changes to a portion of our macOS networking infrastructure, to help improve build performance. During this maintenance, users will experience reduced build capacity for both public and private repository builds. We do not expect to need to take things entirely offline. If you have any questions, please email support@travis-ci.com

Last Update: A few months ago

[Planned] macOS Infrastructure Network Maintenance

Aug 11, 17:32 UTC Scheduled - We'll be implementing and testing some changes to a portion of our macOS networking infrastructure, to help improve build performance. During this maintenance, users will experience reduced build capacity for both public and private repository builds. We do not expect to need to take things entirely offline. If you have any questions, please email support@travis-ci.com

Last Update: A few months ago

Partial Reduction in Capacity for Private macOS builds.

Aug 9, 20:16 UTC Resolved - At this time things are operating in a stable fashion. Please email support@travis-ci.com if you are still seeing any issues.Aug 9, 19:16 UTC Monitoring - A fix has been implemented and we are monitoring the results.Aug 9, 19:12 UTC Update - We've been able to remove the problem host and have restored full capacity for private macOS builds. We are monitoring things closely.Aug 9, 18:31 UTC Identified - We are responding to downed hosts servicing our private macOS builds. Builds are operating at reduced capacity; some wait time is expected.

Last Update: A few months ago

Partial Reduction in Capacity for Private macOS builds.

Aug 9, 19:16 UTC Monitoring - A fix has been implemented and we are monitoring the results.Aug 9, 19:12 UTC Update - We've been able to remove the problem host and have restored full capacity for private macOS builds. We are monitoring things closely.Aug 9, 18:31 UTC Identified - We are responding to downed hosts servicing our private macOS builds. Builds are operating at reduced capacity; some wait time is expected.

Last Update: A few months ago

Partial Reduction in Capacity for Private macOS builds.

Aug 9, 19:12 UTC Update - We've been able to remove the problem host and have restored full capacity for private macOS builds. We are monitoring things closely.Aug 9, 18:31 UTC Identified - We are responding to downed hosts servicing our private macOS builds. Builds are operating at reduced capacity; some wait time is expected.

Last Update: A few months ago

Partial Reduction in Capacity for Private macOS builds.

Aug 9, 18:31 UTC Identified - We are responding to downed hosts servicing our private macOS builds. Builds are operating at reduced capacity; some wait time is expected.

Last Update: A few months ago

Reduced macOS capacity for public and private builds.

Aug 9, 02:15 UTC Resolved - We're running production builds again at full capacity. We've accrued a large backlog of public macOS builds and we're processing them, but it will be a few hours before the backlog is cleared. We'll continue to monitor things closely. Thank you for your patience while we worked to resolve this incident.Aug 9, 01:57 UTC Update - We believe we've identified the issue and are beginning to test running jobs in the portion of our infrastructure that was having issues. We'll provide an update as we start to run production builds through it.Aug 9, 01:30 UTC Update - We've begun experiencing a new set of errors that is preventing us from restoring full capacity. We are investigating.Aug 9, 01:03 UTC Update - We've been able to get things cleanly restarted and we're beginning to ramp back up to full capacity. We'll post an update once we've ramped back up. Thanks for your patience.Aug 9, 00:14 UTC Update - We're, unfortunately, running into some unexpected errors and are working to resolve them. We'll provide another update within 60 minutes.Aug 8, 23:24 UTC Update - We're in the process of restarting some components. We'll provide another update within 60 minutes.Aug 8, 22:33 UTC Update - We're continuing to work on stabilizing things. We'll provide another update within 60 minutes.Aug 8, 21:42 UTC Identified - We're currently working to stabilize things. We'll provide another update within 60 minutes. Thank you for your patience.Aug 8, 21:09 UTC Investigating - We are currently seeing instability in part of our macOS infrastructure. This is resulting in delays for both public and private macOS builds. We're investigating the issue.

Last Update: A few months ago

Reduced macOS capacity for public and private builds.

Aug 9, 01:57 UTC Update - We believe we've identified the issue and are beginning to test running jobs in the portion of our infrastructure that was having issues. We'll provide an update as we start to run production builds through it.Aug 9, 01:30 UTC Update - We've begun experiencing a new set of errors that is preventing us from restoring full capacity. We are investigating.Aug 9, 01:03 UTC Update - We've been able to get things cleanly restarted and we're beginning to ramp back up to full capacity. We'll post an update once we've ramped back up. Thanks for your patience.Aug 9, 00:14 UTC Update - We're, unfortunately, running into some unexpected errors and are working to resolve them. We'll provide another update within 60 minutes.Aug 8, 23:24 UTC Update - We're in the process of restarting some components. We'll provide another update within 60 minutes.Aug 8, 22:33 UTC Update - We're continuing to work on stabilizing things. We'll provide another update within 60 minutes.Aug 8, 21:42 UTC Identified - We're currently working to stabilize things. We'll provide another update within 60 minutes. Thank you for your patience.Aug 8, 21:09 UTC Investigating - We are currently seeing instability in part of our macOS infrastructure. This is resulting in delays for both public and private macOS builds. We're investigating the issue.

Last Update: A few months ago

Reduced macOS capacity for public and private builds.

Aug 9, 01:30 UTC Update - We've begun experiencing a new set of errors that is preventing us from restoring full capacity. We are investigating.Aug 9, 01:03 UTC Update - We've been able to get things cleanly restarted and we're beginning to ramp back up to full capacity. We'll post an update once we've ramped back up. Thanks for your patience.Aug 9, 00:14 UTC Update - We're, unfortunately, running into some unexpected errors and are working to resolve them. We'll provide another update within 60 minutes.Aug 8, 23:24 UTC Update - We're in the process of restarting some components. We'll provide another update within 60 minutes.Aug 8, 22:33 UTC Update - We're continuing to work on stabilizing things. We'll provide another update within 60 minutes.Aug 8, 21:42 UTC Identified - We're currently working to stabilize things. We'll provide another update within 60 minutes. Thank you for your patience.Aug 8, 21:09 UTC Investigating - We are currently seeing instability in part of our macOS infrastructure. This is resulting in delays for both public and private macOS builds. We're investigating the issue.

Last Update: A few months ago

Reduced macOS capacity for public and private builds.

Aug 9, 01:03 UTC Update - We've been able to get things cleanly restarted and we're beginning to ramp back up to full capacity. We'll post an update once we've ramped back up. Thanks for your patience.Aug 9, 00:14 UTC Update - We're, unfortunately, running into some unexpected errors and are working to resolve them. We'll provide another update within 60 minutes.Aug 8, 23:24 UTC Update - We're in the process of restarting some components. We'll provide another update within 60 minutes.Aug 8, 22:33 UTC Update - We're continuing to work on stabilizing things. We'll provide another update within 60 minutes.Aug 8, 21:42 UTC Identified - We're currently working to stabilize things. We'll provide another update within 60 minutes. Thank you for your patience.Aug 8, 21:09 UTC Investigating - We are currently seeing instability in part of our macOS infrastructure. This is resulting in delays for both public and private macOS builds. We're investigating the issue.

Last Update: A few months ago

Reduced macOS capacity for public and private builds.

Aug 9, 00:14 UTC Update - We're, unfortunately, running into some unexpected errors and are working to resolve them. We'll provide another update within 60 minutes.Aug 8, 23:24 UTC Update - We're in the process of restarting some components. We'll provide another update within 60 minutes.Aug 8, 22:33 UTC Update - We're continuing to work on stabilizing things. We'll provide another update within 60 minutes.Aug 8, 21:42 UTC Identified - We're currently working to stabilize things. We'll provide another update within 60 minutes. Thank you for your patience.Aug 8, 21:09 UTC Investigating - We are currently seeing instability in part of our macOS infrastructure. This is resulting in delays for both public and private macOS builds. We're investigating the issue.

Last Update: A few months ago

Reduced macOS capacity for public and private builds.

Aug 8, 23:24 UTC Update - We're in the process of restarting some components. We'll provide another update within 60 minutes.Aug 8, 22:33 UTC Update - We're continuing to work on stabilizing things. We'll provide another update within 60 minutes.Aug 8, 21:42 UTC Identified - We're currently working to stabilize things. We'll provide another update within 60 minutes. Thank you for your patience.Aug 8, 21:09 UTC Investigating - We are currently seeing instability in part of our macOS infrastructure. This is resulting in delays for both public and private macOS builds. We're investigating the issue.

Last Update: A few months ago

Reduced macOS capacity for public and private builds.

Aug 8, 22:33 UTC Update - We're continuing to work on stabilizing things. We'll provide another update within 60 minutes.Aug 8, 21:42 UTC Identified - We're currently working to stabilize things. We'll provide another update within 60 minutes. Thank you for your patience.Aug 8, 21:09 UTC Investigating - We are currently seeing instability in part of our macOS infrastructure. This is resulting in delays for both public and private macOS builds. We're investigating the issue.

Last Update: A few months ago

Reduced macOS capacity for public and private builds.

Aug 8, 21:42 UTC Identified - We're currently working to stabilize things. We'll provide another update within 60 minutes. Thank you for your patience.Aug 8, 21:09 UTC Investigating - We are currently seeing instability in part of our macOS infrastructure. This is resulting in delays for both public and private macOS builds. We're investigating the issue.

Last Update: A few months ago

Reduced macOS capacity for public and private builds.

Aug 8, 21:09 UTC Investigating - We are currently seeing instability in part of our macOS infrastructure. This is resulting in delays for both public and private macOS builds. We're investigating the issue.

Last Update: A few months ago

Delays receiving events from GitHub for both public and private repositories

Jul 31, 20:41 UTC Resolved - Our GitHub sync queues have drained, everything is operating normally. ✨Jul 31, 18:18 UTC Update - We have resumed regular service as GitHub recovers. We’re processing a backlog of GitHub sync requests.Jul 31, 17:32 UTC Monitoring - GitHub is returning to normal service, we’ll continue to monitor the situation.Jul 31, 17:01 UTC Identified - GitHub’s currently experiencing a major service outage, we’re monitoring the situation closely.

Last Update: A few months ago

Delays receiving events from GitHub for both public and private repositories

Jul 31, 18:18 UTC Update - We have resumed regular service as GitHub recovers. We’re processing a backlog of GitHub sync requests.Jul 31, 17:32 UTC Monitoring - GitHub is returning to normal service, we’ll continue to monitor the situation.Jul 31, 17:01 UTC Identified - GitHub’s currently experiencing a major service outage, we’re monitoring the situation closely.

Last Update: A few months ago

Delays receiving events from GitHub for both public and private repositories

Jul 31, 17:32 UTC Monitoring - GitHub is returning to normal service, we’ll continue to monitor the situation.Jul 31, 17:01 UTC Identified - GitHub’s currently experiencing a major service outage, we’re monitoring the situation closely.

Last Update: A few months ago

Delays receiving events from GitHub for both public and private repositories

Jul 31, 17:01 UTC Identified - GitHub’s currently experiencing a major service outage, we’re monitoring the situation closely.

Last Update: A few months ago

Delays for private and open-source container-based builds

Jul 27, 15:46 UTC Resolved - Backlogs have cleared.Jul 27, 15:17 UTC Monitoring - An EC2 network outage impacted our capacity, which has created a backlog for builds in our container-based infrastructure. We are adding capacity in order to work through this backlog more quickly.

Last Update: A few months ago

Delays for private and open-source container-based builds

Jul 27, 15:17 UTC Monitoring - An EC2 network outage impacted our capacity, which has created a backlog for builds in our container-based infrastructure. We are adding capacity in order to work through this backlog more quickly.

Last Update: A few months ago

OSX builds routed to Linux images

Jul 19, 13:01 UTC Resolved - The regression bug that was introduced has now been fixed; builds are routing as intended.Jul 19, 12:04 UTC Monitoring - Due to a regression, some OSX builds have briefly been routed to Linux images. This change has been reverted. We are closely monitoring the situation. Restarting affected jobs should run the build on the right infrastructure. We apologize for the inconvenience caused.

Last Update: A few months ago

OSX builds routed to Linux images

Jul 19, 12:04 UTC Monitoring - Due to a regression, some OSX builds have briefly been routed to Linux images. This change has been reverted. We are closely monitoring the situation. Restarting affected jobs should run the build on the right infrastructure. We apologize for the inconvenience caused.

Last Update: A few months ago

Reduced capacity for macOS builds in travis-ci.com and travis-ci.org

Jul 13, 17:54 UTC Resolved - At this time we've cleared the backlog for travis-ci.com builds. There is still a large backlog for travis-ci.org builds, but it's not higher than what we've been seeing this week and it'll clear as usage tapers off later today. So we are resolving this incident.Jul 13, 15:36 UTC Monitoring - The macOS capacity has been restored in travis-ci.com and travis-ci.org. We are closely monitoring the situation and we are processing the macOS build backlog.Jul 13, 15:19 UTC Identified - We have identified the problem and we’re working together with our upstream provider to bring back the capacity to our macOS infrastructure.Jul 13, 14:43 UTC Investigating - We’re currently investigating reduced capacity in our macOS infrastructure. Build delays are expected. Thank you for your patience.

Last Update: A few months ago

Reduced capacity for macOS builds in travis-ci.com and travis-ci.org

Jul 13, 15:36 UTC Monitoring - The macOS capacity has been restored in travis-ci.com and travis-ci.org. We are closely monitoring the situation and we are processing the macOS build backlog.Jul 13, 15:19 UTC Identified - We have identified the problem and we’re working together with our upstream provider to bring back the capacity to our macOS infrastructure.Jul 13, 14:43 UTC Investigating - We’re currently investigating reduced capacity in our macOS infrastructure. Build delays are expected. Thank you for your patience.

Last Update: A few months ago

Reduced capacity for macOS builds in travis-ci.com and travis-ci.org

Jul 13, 15:19 UTC Identified - We have identified the problem and we’re working together with our upstream provider to bring back the capacity to our macOS infrastructure.Jul 13, 14:43 UTC Investigating - We’re currently investigating reduced capacity in our macOS infrastructure. Build delays are expected. Thank you for your patience.

Last Update: A few months ago

Reduced capacity for macOS builds in travis-ci.com and travis-ci.org

Jul 13, 14:43 UTC Investigating - We’re currently investigating reduced capacity in our macOS infrastructure. Build delays are expected. Thank you for your patience.

Last Update: A few months ago

macOS network outage

Jul 13, 02:31 UTC Resolved - We have caught up with the backlogs.Jul 12, 18:16 UTC Update - The upstream network issue has been resolved. We are processing backlogs.Jul 12, 14:47 UTC Monitoring - The network seems to be back. We’re beginning to process the macOS backlog and are monitoring the situation closely.Jul 12, 13:10 UTC Update - We’re experiencing another network outage which is interrupting macOS builds. Our upstream provider is investigating as well http://status.macstadium.com/incidents/p584yykj95wnJul 12, 12:55 UTC Identified - We experienced a brief network outage which interrupted all macOS builds. Things seem to be recovering. We are monitoring things closely and working on getting more information about what the cause of the network issue was.

Last Update: A few months ago

macOS network outage

Jul 12, 18:16 UTC Update - The upstream network issue has been resolved. We are processing backlogs.Jul 12, 14:47 UTC Monitoring - The network seems to be back. We’re beginning to process the macOS backlog and are monitoring the situation closely.Jul 12, 13:10 UTC Update - We’re experiencing another network outage which is interrupting macOS builds. Our upstream provider is investigating as well http://status.macstadium.com/incidents/p584yykj95wnJul 12, 12:55 UTC Identified - We experienced a brief network outage which interrupted all macOS builds. Things seem to be recovering. We are monitoring things closely and working on getting more information about what the cause of the network issue was.

Last Update: A few months ago

macOS network outage

Jul 12, 14:47 UTC Monitoring - The network seems to be back. We’re beginning to process the macOS backlog and are monitoring the situation closely.Jul 12, 13:10 UTC Update - We’re experiencing another network outage which is interrupting macOS builds. Our upstream provider is investigating as well http://status.macstadium.com/incidents/p584yykj95wnJul 12, 12:55 UTC Identified - We experienced a brief network outage which interrupted all macOS builds. Things seem to be recovering. We are monitoring things closely and working on getting more information about what the cause of the network issue was.

Last Update: A few months ago

macOS network outage

Jul 12, 13:10 UTC Update - We’re experiencing another network outage which is interrupting macOS builds. Our upstream provider is investigating as well http://status.macstadium.com/incidents/p584yykj95wnJul 12, 12:55 UTC Identified - We experienced a brief network outage which interrupted all macOS builds. Things seem to be recovering. We are monitoring things closely and working on getting more information about what the cause of the network issue was.

Last Update: A few months ago

macOS network outage

Jul 12, 12:55 UTC Identified - We experienced a brief network outage which interrupted all macOS builds. Things seem to be recovering. We are monitoring things closely and working on getting more information about what the cause of the network issue was.

Last Update: A few months ago

apt-get failures due to outdated GPG key

Jun 29, 23:20 UTC Resolved - The issue has been resolved.Jun 29, 23:06 UTC Monitoring - A hot fix has been deployed to remove the offending apt source. If your build does not need this, `apt-get` commands should now succeed. See https://github.com/travis-ci/travis-ci/issues/8002 for further details.Jun 29, 22:59 UTC Update - We identified an apt source that is missing a GPG key. We will remove this source as an emergency measure to remedy the apt-get failures.Jun 29, 22:42 UTC Identified - We believe we've identified the source of the issue and are working on a fix.Jun 29, 22:23 UTC Investigating - `apt-get` commands are failing due to a missing GPG key. We are investigating.

Last Update: A few months ago
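For context on what failed here: a single apt source with a missing or expired GPG key makes `apt-get update` fail for the whole job. The hotfix above removed the offending source on Travis CI's side, but an affected user could have applied a similar workaround in their own configuration. The snippet below is only a hedged sketch: the source filename and key ID are placeholders rather than the actual source involved (that is documented in the linked GitHub issue), and it assumes a sudo-enabled (`sudo: required`) image.

```yaml
# Hypothetical workaround sketch; the filename and key ID are placeholders.
sudo: required
before_install:
  # Option A: drop the broken source list so `apt-get update` stops failing.
  - sudo rm -f /etc/apt/sources.list.d/example-broken-source.list
  # Option B: if the build needs that source, import its signing key instead.
  - sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0x0123456789ABCDEF
  - sudo apt-get update -qq
```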

apt-get failures due to outdated GPG key

Jun 29, 23:06 UTC Monitoring - A hot fix has been deployed to remove the offending apt source. If your build does not need this, `apt-get` commands should now succeed. See https://github.com/travis-ci/travis-ci/issues/8002 for further details.Jun 29, 22:59 UTC Update - We identified an apt source that is missing a GPG key. We will remove this source as an emergency measure to remedy the apt-get failures.Jun 29, 22:42 UTC Identified - We believe we've identified the source of the issue and are working on a fix.Jun 29, 22:23 UTC Investigating - `apt-get` commands are failing due to a missing GPG key. We are investigating.

Last Update: A few months ago

apt-get failures due to outdated GPG key

Jun 29, 22:59 UTC Update - We identified an apt source that is missing a GPG key. We will remove this source as an emergency measure to remedy the apt-get failures.Jun 29, 22:42 UTC Identified - We believe we've identified the source of the issue and are working on a fix.Jun 29, 22:23 UTC Investigating - `apt-get` commands are failing due to a missing GPG key. We are investigating.

Last Update: A few months ago

apt-get failures due to outdated GPG key

Jun 29, 22:42 UTC Identified - We believe we've identified the source of the issue and are working on a fix.Jun 29, 22:23 UTC Investigating - `apt-get` commands are failing due to a missing GPG key. We are investigating.

Last Update: A few months ago

apt-get failures due to outdated GPG key

Jun 29, 22:23 UTC Investigating - `apt-get` commands are failing due to a missing GPG key. We are investigating.

Last Update: A few months ago

Delays starting builds for public repositories

Jun 29, 16:13 UTC Resolved - Linux build backlogs have cleared. Builds are processing normally.Jun 29, 15:16 UTC Monitoring - Builds affected by this issue are now slowly being processed. We’ll continue posting updates on their evolution.Jun 29, 14:43 UTC Identified - We’ve identified an issue with one of our backend applications that was causing a delay in scheduling builds for public repositories. We’ve just fixed this issue and are currently working on scheduling the affected builds.

Last Update: A few months ago

Delays starting builds for public repositories

Jun 29, 15:16 UTC Monitoring - Builds affected by this issue are now slowly being processed. We’ll continue posting updates on their evolution.Jun 29, 14:43 UTC Identified - We’ve identified an issue with one of our backend applications that was causing a delay in scheduling builds for public repositories. We’ve just fixed this issue and are currently working on scheduling the affected builds.

Last Update: A few months ago

Delays starting builds for public repositories

Jun 29, 14:43 UTC Identified - We’ve identified an issue with one of our backend applications that was causing a delay in scheduling builds for public repositories. We’ve just fixed this issue and are currently working on scheduling the affected builds.

Last Update: A few months ago

Delay processing builds for public repositories

Jun 29, 13:59 UTC Resolved - The increased backlog has been processed.Jun 29, 13:57 UTC Update - The backlogs have calmed down. We are expecting the backlog of the container Precise builds to be processed in 15 minutes.Jun 29, 13:19 UTC Monitoring - The cause of the delay has been removed. We are monitoring the situation while the accrued backlog is processed.Jun 29, 13:12 UTC Investigating - We are investigating a delay in scheduling build requests for open source builds.

Last Update: A few months ago

Delay processing builds for public repositories

Jun 29, 13:57 UTC Update - The backlogs have calmed down. We are expecting the backlog of the container Precise builds to be processed in 15 minutes.Jun 29, 13:19 UTC Monitoring - The cause of the delay has been removed. We are monitoring the situation while the accrued backlog is processed.Jun 29, 13:12 UTC Investigating - We are investigating a delay in scheduling build requests for open source builds.

Last Update: A few months ago

Delay processing builds for public repositories

Jun 29, 13:19 UTC Monitoring - The cause of the delay has been removed. We are monitoring the situation while the accrued backlog is processed.Jun 29, 13:12 UTC Investigating - We are investigating a delay in scheduling build requests for open source builds.

Last Update: A few months ago

Delay processing builds for public repositories

Jun 29, 13:12 UTC Investigating - We are investigating a delay in scheduling build requests for open source builds.

Last Update: A few months ago

Assets not loading on the web frontend

Jun 28, 16:36 UTC Resolved - Our upstream CDN provider has resolved the issue! 🎉Jun 28, 14:21 UTC Monitoring - Our upstream CDN provider has implemented a fix. Our service has recovered. We are monitoring the situation.Jun 28, 14:13 UTC Update - Our upstream CDN provider is working on a fix, and we are seeing some recovery of service.Jun 28, 14:07 UTC Identified - We are investigating reports of assets not loading on our website in some regions. Our upstream CDN provider is aware of the issue, and we are investigating the possibility of a workaround. The web UI is only partially available at this time.

Last Update: A few months ago

Assets not loading on the web frontend

Jun 28, 14:21 UTC Monitoring - Our upstream CDN provider has implemented a fix. Our service has recovered. We are monitoring the situation.Jun 28, 14:13 UTC Update - Our upstream CDN provider is working on a fix, and we are seeing some recovery of service.Jun 28, 14:07 UTC Identified - We are investigating reports of assets not loading on our website in some regions. Our upstream CDN provider is aware of the issue, and we are investigating the possibility of a workaround. The web UI is only partially available at this time.

Last Update: A few months ago

Assets not loading on the web frontend

Jun 28, 14:13 UTC Update - Our upstream CDN provider is working on a fix, and we are seeing some recovery of service.Jun 28, 14:07 UTC Identified - We are investigating reports of assets not loading on our website in some regions. Our upstream CDN provider is aware of the issue, and we are investigating the possibility of a workaround. The web UI is only partially available at this time.

Last Update: A few months ago

Assets not loading on the web frontend

Jun 28, 14:07 UTC Identified - We are investigating reports of assets not loading on our website in some regions. Our upstream CDN provider is aware of the issue, and we are investigating the possibility of a workaround. The web UI is only partially available at this time.

Last Update: A few months ago

Apt failures

Jun 28, 12:36 UTC Resolved - The issue has been identified and a hotfix is in place. We are monitoring the situation; builds should be running normally again (a restart should do the trick). Please reach out to support@travis-ci.com if you continue to see apt-get failures.Jun 28, 10:43 UTC Investigating - We are seeing issues with apt-get on our images. We are investigating, and will let you know when we know more.

Last Update: A few months ago

Apt failures

Jun 28, 10:43 UTC Investigating - We are seeing issues with apt-get on our images. We are investigating, and will let you know when we know more.

Last Update: A few months ago

Private Repository RabbitMQ Upgrade

Jun 23, 02:15 UTC Completed - The scheduled maintenance has been completed.Jun 23, 01:53 UTC Verifying - Verification is currently underway for the maintenance items.Jun 23, 01:30 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 22, 23:49 UTC Scheduled - We need to perform an upgrade to the RabbitMQ cluster used by the infrastructure for private repositories. We do not expect any downtime during this upgrade.

Last Update: A few months ago

Private Repository RabbitMQ Upgrade

Jun 23, 01:53 UTC Verifying - Verification is currently underway for the maintenance items.Jun 23, 01:30 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 22, 23:49 UTC Scheduled - We need to perform an upgrade to the RabbitMQ cluster used by the infrastructure for private repositories. We do not expect any downtime during this upgrade.

Last Update: A few months ago

Private Repository RabbitMQ Upgrade

Jun 23, 01:30 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 22, 23:49 UTC Scheduled - We need to perform an upgrade to the RabbitMQ cluster used by the infrastructure for private repositories. We do not expect any downtime during this upgrade.

Last Update: A few months ago

Private Repository RabbitMQ Upgrade

Jun 22, 23:49 UTC Scheduled - We need to perform an upgrade to the RabbitMQ cluster used by the infrastructure for private repositories. We do not expect any downtime during this upgrade.

Last Update: A few months ago

Delays processing build logs

Jun 22, 22:16 UTC Resolved - Jobs and logs are processing normally on travis-ci.com.Jun 22, 22:00 UTC Monitoring - Some message queue instability on the ".com" infrastructure has resulted in delays for build log updates and a backlog of job execution. We're now processing jobs and logs as expected. We will continue monitoring queues.Jun 22, 21:04 UTC Investigating - We are currently investigating delays processing build logs.

Last Update: A few months ago

Delays processing build logs

Jun 22, 22:00 UTC Monitoring - Some message queue instability on the ".com" infrastructure has resulted in delays for build log updates and a backlog of job execution. We're now processing jobs and logs as expected. We will continue monitoring queues.Jun 22, 21:04 UTC Investigating - We are currently investigating delays processing build logs.

Last Update: A few months ago

Delays processing build logs

Jun 22, 21:04 UTC Investigating - We are currently investigating delays processing build logs.

Last Update: A few months ago

Build delays and API issues affecting travis-ci.com

Jun 22, 13:53 UTC Resolved - Log processing has recovered. We are investigating the possibility of some stuck log parts. If you do experience paused or stuck logs, please restart those builds. Thank you for your patience! 💛Jun 22, 13:24 UTC Update - The backlog for private Mac builds has cleared. We continue to process our backlog of log parts.Jun 22, 12:36 UTC Update - Job backlogs for private Linux builds have cleared. We continue to process the Mac builds, as well as the log parts.Jun 22, 12:16 UTC Monitoring - Job processing is recovering. We are bringing up extra capacity to process the job backlogs more quickly. We are also processing our backlog of log parts. We're continuing to closely monitor the situation.Jun 22, 11:51 UTC Identified - We have identified a correlating issue with one of our RabbitMQ instances. The faulty instance has been restarted, and we are waiting for capacity to fully recover. This has created job delays and backlogs for private builds on all infrastructures.Jun 22, 11:27 UTC Update - We’ve manually bumped our Linux capacity to help private Linux builds process faster while we continue working on log processing.Jun 22, 11:03 UTC Monitoring - We’ve identified and fixed a memory issue caused by a high spike in our sync queues. Some build requests have been lost during this process; please re-push a commit to ensure your build is triggered. We’re monitoring and builds should be running as expected from now on. We’re also working on processing logs and will give an update as soon as they are at 100%.Jun 22, 10:43 UTC Investigating - We’re investigating API and sync connection issues, causing build delays for private repositories, login issues, and log display and retrieval issues.

Last Update: A few months ago

Build delays and API issues affecting travis-ci.com

Jun 22, 13:24 UTC Update - The backlog for private Mac builds has cleared. We continue to process our backlog of log parts.Jun 22, 12:36 UTC Update - Job backlogs for private Linux builds have cleared. We continue to process the Mac builds, as well as the log parts.Jun 22, 12:16 UTC Monitoring - Job processing is recovering. We are bringing up extra capacity to process the job backlogs more quickly. We are also processing our backlog of log parts. We're continuing to closely monitor the situation.Jun 22, 11:51 UTC Identified - We have identified a correlating issue with one of our RabbitMQ instances. The faulty instance has been restarted, and we are waiting for capacity to fully recover. This has created job delays and backlogs for private builds on all infrastructures.Jun 22, 11:27 UTC Update - We’ve manually bumped our Linux capacity to help private Linux builds process faster while we continue working on log processing.Jun 22, 11:03 UTC Monitoring - We’ve identified and fixed a memory issue caused by a high spike in our sync queues. Some build requests have been lost during this process; please re-push a commit to ensure your build is triggered. We’re monitoring and builds should be running as expected from now on. We’re also working on processing logs and will give an update as soon as they are at 100%.Jun 22, 10:43 UTC Investigating - We’re investigating API and sync connection issues, causing build delays for private repositories, login issues, and log display and retrieval issues.

Last Update: A few months ago

Build delays and API issues affecting travis-ci.com

Jun 22, 12:36 UTC Update - Job backlogs for private Linux builds have cleared. We continue to process the Mac builds, as well as the log parts.Jun 22, 12:16 UTC Monitoring - Job processing is recovering. We are bringing up extra capacity to process the job backlogs more quickly. We are also processing our backlog of log parts. We're continuing to closely monitor the situation.Jun 22, 11:51 UTC Identified - We have identified a correlating issue with one of our RabbitMQ instances. The faulty instance has been restarted, and we are waiting for capacity to fully recover. This has created job delays and backlogs for private builds on all infrastructures.Jun 22, 11:27 UTC Update - We’ve manually bumped our Linux capacity to help private Linux builds process faster while we continue working on log processing.Jun 22, 11:03 UTC Monitoring - We’ve identified and fixed a memory issue caused by a high spike in our sync queues. Some build requests have been lost during this process; please re-push a commit to ensure your build is triggered. We’re monitoring and builds should be running as expected from now on. We’re also working on processing logs and will give an update as soon as they are at 100%.Jun 22, 10:43 UTC Investigating - We’re investigating API and sync connection issues, causing build delays for private repositories, login issues, and log display and retrieval issues.

Last Update: A few months ago

Build delays and API issues affecting travis-ci.com

Jun 22, 12:16 UTC Monitoring - Job processing is recovering. We are bringing up extra capacity to process the job backlogs more quickly. We are also processing our backlog of log parts. We're continuing to closely monitor the situation.Jun 22, 11:51 UTC Identified - We have identified a correlating issue with one of our RabbitMQ instances. The faulty instance has been restarted, and we are waiting for capacity to fully recover. This has created job delays and backlogs for private builds on all infrastructures.Jun 22, 11:27 UTC Update - We’ve manually bumped our Linux capacity to help private Linux builds process faster while we continue working on log processing.Jun 22, 11:03 UTC Monitoring - We’ve identified and fixed a memory issue caused by a high spike in our sync queues. Some build requests have been lost during this process; please re-push a commit to ensure your build is triggered. We’re monitoring and builds should be running as expected from now on. We’re also working on processing logs and will give an update as soon as they are at 100%.Jun 22, 10:43 UTC Investigating - We’re investigating API and sync connection issues, causing build delays for private repositories, login issues, and log display and retrieval issues.

Last Update: A few months ago

Build delays and API issues affecting travis-ci.com

Jun 22, 11:51 UTC Identified - We have identified a correlating issue with one of our RabbitMQ instances. The faulty instance has been restarted, and we are waiting for capacity to fully recover. This has created job delays and backlogs for private builds on all infrastructures.Jun 22, 11:27 UTC Update - We’ve manually bumped our Linux capacity to help private Linux builds process faster while we continue working on log processing.Jun 22, 11:03 UTC Monitoring - We’ve identified and fixed a memory issue caused by a high spike in our sync queues. Some build requests have been lost during this process; please re-push a commit to ensure your build is triggered. We’re monitoring and builds should be running as expected from now on. We’re also working on processing logs and will give an update as soon as they are at 100%.Jun 22, 10:43 UTC Investigating - We’re investigating API and sync connection issues, causing build delays for private repositories, login issues, and log display and retrieval issues.

Last Update: A few months ago

Build delays and API issues affecting travis-ci.com

Jun 22, 11:27 UTC Update - We’ve manually bumped our Linux capacity to help private Linux builds process faster while we continue working on log processing.Jun 22, 11:03 UTC Monitoring - We’ve identified and fixed a memory issue caused by a high spike in our sync queues. Some build requests have been lost during this process; please re-push a commit to ensure your build is triggered. We’re monitoring and builds should be running as expected from now on. We’re also working on processing logs and will give an update as soon as they are at 100%.Jun 22, 10:43 UTC Investigating - We’re investigating API and sync connection issues, causing build delays for private repositories, login issues, and log display and retrieval issues.

Last Update: A few months ago

Build delays and API issues affecting travis-ci.com

Jun 22, 11:03 UTC Monitoring - We’ve identified and fixed a memory issue caused by a high spike in our sync queues. Some build requests have been lost during this process; please re-push a commit to ensure your build is triggered. We’re monitoring and builds should be running as expected from now on. We’re also working on processing logs and will give an update as soon as they are at 100%.Jun 22, 10:43 UTC Investigating - We’re investigating API and sync connection issues, causing build delays for private repositories, login issues, and log display and retrieval issues.

Last Update: A few months ago

Build delays and API issues affecting travis-ci.com

Jun 22, 10:43 UTC Investigating - We’re investigating API and sync connection issues, causing build delays for private repositories, login issues, and log display and retrieval issues.

Last Update: A few months ago

Backlog for macOS and sudo-enabled Linux

Jun 18, 14:02 UTC Resolved - The current macOS backlog for public repositories is at a normal level and should be cleared within the next hour. Thank you for your patience!Jun 18, 13:30 UTC Monitoring - The last remaining backlog is for macOS public repositories.Jun 18, 12:08 UTC Identified - We have rolled back an update that was bundled with last night's maintenance and are already seeing a decline in backlogs.Jun 18, 11:37 UTC Investigating - We are investigating backlogs on macOS and sudo-enabled Linux for both public and private repositories

Last Update: A few months ago

Backlog for macOS and sudo-enabled Linux

Jun 18, 13:30 UTC Monitoring - The last remaining backlog is for macOS public repositories.Jun 18, 12:08 UTC Identified - We have rolled back an update that was bundled with last night's maintenance and are already seeing a decline in backlogs.Jun 18, 11:37 UTC Investigating - We are investigating backlogs on macOS and sudo-enabled Linux for both public and private repositories

Last Update: A few months ago

Backlog for macOS and sudo-enabled Linux

Jun 18, 12:08 UTC Identified - We have rolled back an update that was bundled with last night's maintenance and are already seeing a decline in backlogs.Jun 18, 11:37 UTC Investigating - We are investigating backlogs on macOS and sudo-enabled Linux for both public and private repositories

Last Update: A few months ago

Backlog for macOS and sudo-enabled Linux

Jun 18, 11:37 UTC Investigating - We are investigating backlogs on macOS and sudo-enabled Linux for both public and private repositories

Last Update: A few months ago

Database migrations for two backend services

Jun 18, 03:57 UTC Completed - The scheduled maintenance has been completed.Jun 18, 03:49 UTC Verifying - We are in the process of verifying the maintenance, currently investigating heightened AMQP errors on sudo-enabled Linux.Jun 18, 02:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 14, 20:34 UTC Scheduled - We need to migrate two backend services to use newly-provisioned databases so that we can decommission an older, larger, shared database instance. Public and private repositories running on macOS and sudo-enabled Linux will be affected, with no new jobs scheduled while maintenance is underway.

Last Update: A few months ago

Database migrations for two backend services

Jun 18, 03:49 UTC Verifying - We are in the process of verifying the maintenance, currently investigating heightened AMQP errors on sudo-enabled Linux.Jun 18, 02:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 14, 20:34 UTC Scheduled - We need to migrate two backend services to use newly-provisioned databases so that we can decommission an older, larger, shared database instance. Public and private repositories running on macOS and sudo-enabled Linux will be affected, with no new jobs scheduled while maintenance is underway.

Last Update: A few months ago

Database migrations for two backend services

Jun 18, 02:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 14, 20:34 UTC Scheduled - We need to migrate two backend services to use newly-provisioned databases so that we can decommission an older, larger, shared database instance. Public and private repositories running on macOS and sudo-enabled Linux will be affected, with no new jobs scheduled while maintenance is underway.

Last Update: A few months ago

Build delays for private repositories

Jun 14, 22:09 UTC Resolved - The backlog for sudo: required GCE and macOS private builds has cleared.Jun 14, 21:43 UTC Update - Due to resource contention on one of our backend services, the backlog for private Mac and sudo-enabled builds is taking longer than normal to process. Builds are processing at full capacity, and the backlog continues to decrease.Jun 14, 20:03 UTC Update - Private travis-ci.com builds for Mac and sudo-enabled Trusty/Precise are running at full capacity; thank you for your patience as we work through the remaining backlog.Jun 14, 19:21 UTC Update - The backlog on our container-based Precise infrastructure (i.e. sudo: false + dist: precise) is now cleared.Jun 14, 19:17 UTC Update - The backlog on our container-based Trusty infrastructure (i.e. sudo: false + dist: trusty) is now cleared.Jun 14, 19:15 UTC Monitoring - We are seeing a downward trend in the backlogs of all our infrastructures. We will update when they have cleared. Thank you.Jun 14, 18:48 UTC Identified - We have proceeded to restart a component that was failing to process job requests. Upon the reboot, the jobs now seem to be processing normally. We are monitoring the situation closely.Jun 14, 18:25 UTC Investigating - We are currently seeing delays for builds on private repositories. We are escalating the issue with one of our 3rd party providers and will post an update as soon as we know more. Thank you for your patience!

Last Update: A few months ago
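The infrastructure names in these updates map directly onto a couple of .travis.yml keys, which is why backlogs can clear at different rates for different configurations. The sketch below shows that mapping as it stood at the time; the values are the historical ones mentioned above, not a recommendation.

```yaml
# Sketch of the historical infrastructure routing keys referenced above.
language: python
sudo: false        # container-based infrastructure (no sudo available)
dist: trusty       # Ubuntu 14.04 image; `dist: precise` selects Ubuntu 12.04
# To target the sudo-enabled (GCE) infrastructure instead, one would use:
#   sudo: required
#   dist: trusty
script:
  - python --version
```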

Build delays for private repositories

Jun 14, 21:43 UTC Update - Due to resource contention on one of our backend services, the backlog for private Mac and sudo-enabled builds is taking longer than normal to process. Builds are processing at full capacity, and the backlog continues to decrease.Jun 14, 20:03 UTC Update - Private travis-ci.com builds for Mac and sudo-enabled Trusty/Precise are running at full capacity; thank you for your patience as we work through the remaining backlog.Jun 14, 19:21 UTC Update - The backlog on our container-based Precise infrastructure (i.e. sudo: false + dist: precise) is now cleared.Jun 14, 19:17 UTC Update - The backlog on our container-based Trusty infrastructure (i.e. sudo: false + dist: trusty) is now cleared.Jun 14, 19:15 UTC Monitoring - We are seeing a downward trend in the backlogs of all our infrastructures. We will update when they have cleared. Thank you.Jun 14, 18:48 UTC Identified - We have proceeded to restart a component that was failing to process job requests. Upon the reboot, the jobs now seem to be processing normally. We are monitoring the situation closely.Jun 14, 18:25 UTC Investigating - We are currently seeing delays for builds on private repositories. We are escalating the issue with one of our 3rd party providers and will post an update as soon as we know more. Thank you for your patience!

Last Update: A few months ago

Database migrations for two backend services

Jun 14, 20:34 UTC Scheduled - We need to migrate two backend services to use newly-provisioned databases so that we can decommission an older, larger, shared database instance. Public and private repositories running on macOS and sudo-enabled Linux will be affected, with no new jobs scheduled while maintenance is underway.

Last Update: A few months ago

Build delays for private repositories

Jun 14, 20:03 UTC Update - Private travis-ci.com builds for Mac and sudo-enabled Trusty/Precise are running at full capacity; thank you for your patience as we work through the remaining backlog.Jun 14, 19:21 UTC Update - The backlog on our container-based Precise infrastructure (i.e. sudo: false + dist: precise) is now cleared.Jun 14, 19:17 UTC Update - The backlog on our container-based Trusty infrastructure (i.e. sudo: false + dist: trusty) is now cleared.Jun 14, 19:15 UTC Monitoring - We are seeing a downward trend in the backlogs of all our infrastructures. We will update when they have cleared. Thank you.Jun 14, 18:48 UTC Identified - We have proceeded to restart a component that was failing to process job requests. Upon the reboot, the jobs now seem to be processing normally. We are monitoring the situation closely.Jun 14, 18:25 UTC Investigating - We are currently seeing delays for builds on private repositories. We are escalating the issue with one of our 3rd party providers and will post an update as soon as we know more. Thank you for your patience!

Last Update: A few months ago

Build delays for private repositories

Jun 14, 19:21 UTC Update - The backlog on our container-based Precise infrastructure (i.e. sudo: false + dist: precise) is now cleared.Jun 14, 19:17 UTC Update - The backlog on our container-based Trusty infrastructure (i.e. sudo: false + dist: trusty) is now cleared.Jun 14, 19:15 UTC Monitoring - We are seeing a downward trend in the backlogs of all our infrastructures. We will update when they have cleared. Thank you.Jun 14, 18:48 UTC Identified - We have proceeded to restart a component that was failing to process job requests. Upon the reboot, the jobs now seem to be processing normally. We are monitoring the situation closely.Jun 14, 18:25 UTC Investigating - We are currently seeing delays for builds on private repositories. We are escalating the issue with one of our 3rd party providers and will post an update as soon as we know more. Thank you for your patience!

Last Update: A few months ago

Build delays for private repositories

Jun 14, 19:17 UTC Update - The backlog on our container-based Trusty infrastructure (i.e. sudo: false + dist: trusty) is now cleared.Jun 14, 19:15 UTC Monitoring - We are seeing a downward trend in the backlogs of all our infrastructures. We will update when they have cleared. Thank you.Jun 14, 18:48 UTC Identified - We have proceeded to restart a component that was failing to process job requests. Upon the reboot, the jobs now seem to be processing normally. We are monitoring the situation closely.Jun 14, 18:25 UTC Investigating - We are currently seeing delays for builds on private repositories. We are escalating the issue with one of our 3rd party providers and will post an update as soon as we know more. Thank you for your patience!

Last Update: A few months ago

Build delays for private repositories

Jun 14, 19:15 UTC Monitoring - We are seeing a downward trend in the backlogs of all our infrastructures. We will update when they have cleared. Thank you.Jun 14, 18:48 UTC Identified - We have proceeded to restart a component that was failing to process job requests. Upon the reboot, the jobs now seem to be processing normally. We are monitoring the situation closely.Jun 14, 18:25 UTC Investigating - We are currently seeing delays for builds on private repositories. We are escalating the issue with one of our 3rd party providers and will post an update as soon as we know more. Thank you for your patience!

Last Update: A few months ago

Build delays for private repositories

Jun 14, 18:48 UTC Identified - We have proceeded to restart a component that was failing to process job requests. Upon the reboot, the jobs now seem to be processing normally. We are monitoring the situation closely.Jun 14, 18:25 UTC Investigating - We are currently seeing delays for builds on private repositories. We are escalating the issue with one of our 3rd party providers and will post an update as soon as we know more. Thank you for your patience!

Last Update: A few months ago

Build delays for private repositories

Jun 14, 18:25 UTC Investigating - We are currently seeing delays for builds on private repositories. We are escalating the issue with one of our 3rd party providers and will post an update as soon as we know more. Thank you for your patience!

Last Update: A few months ago

Database upgrade on travis-ci.com

Jun 2, 06:40 UTC Completed - The maintenance is complete; thanks for bearing with us! 💛Jun 2, 06:00 UTC In progress - We are beginning our scheduled maintenance on travis-ci.com.Jun 1, 16:31 UTC Scheduled - We are performing some scheduled maintenance on travis-ci.com on Friday, June 2, 2017, from 07:00 to 08:00 UTC. We expect the travis-ci.com API and web interface to be unavailable for some of that time window. Processing of private builds is also expected to be delayed. Open-source builds (travis-ci.org) are unaffected by this maintenance.

Last Update: A few months ago

Database upgrade on travis-ci.com

Jun 2, 06:00 UTC In progress - We are beginning our scheduled maintenance on travis-ci.com.Jun 1, 16:31 UTC Scheduled - We are performing some scheduled maintenance on travis-ci.com on Friday, June 2, 2017, from 07:00 to 08:00 UTC. We expect the travis-ci.com API and web interface to be unavailable for some of that time window. Processing of private builds is also expected to be delayed. Open-source builds (travis-ci.org) are unaffected by this maintenance.

Last Update: A few months ago

Database upgrade on travis-ci.com

Jun 1, 16:31 UTC Scheduled - We are performing some scheduled maintenance on travis-ci.com on Friday, June 2, 2017, from 07:00 to 08:00 UTC. We expect the travis-ci.com API and web interface to be unavailable for some of that time window. Processing of private builds is also expected to be delayed. Open-source builds (travis-ci.org) are unaffected by this maintenance.

Last Update: A few months ago

travis-ci.com partially unavailable

Jun 1, 03:07 UTC Resolved - Full operation has been restored. Part of the resolution required purging the automatic daily GitHub sync queue backlog. Manual GitHub sync remains available, and automatic daily GitHub sync will trigger again within the next 18 hours. Thank you for your patience!Jun 1, 01:23 UTC Update - Most of the GitHub sync queues have caught up, with the exception of automatic daily sync. Overall database load remains higher than usual while working through the backlog. We are planning to address this with some changes to database indexes within the next day.May 31, 18:36 UTC Update - We have resumed GitHub syncing at reduced scale.May 31, 16:15 UTC Update - GitHub syncing has been temporarily disabled while we stabilize things.May 31, 15:42 UTC Monitoring - Our API service has recovered and is operating normally. We are continuing to monitor the issue.May 31, 15:27 UTC Investigating - Travis CI for private projects (https://travis-ci.com) is currently partially unavailable as our API is currently seeing an elevated number of errors.

Last Update: A few months ago

travis-ci.com partially unavailable

Jun 1, 01:23 UTC Update - Most of the GitHub sync queues have caught up, with the exception of automatic daily sync. Overall database load remains higher than usual while working through the backlog. We are planning to address this with some changes to database indexes within the next day.May 31, 18:36 UTC Update - We have resumed GitHub syncing at reduced scale.May 31, 16:15 UTC Update - GitHub syncing has been temporarily disabled while we stabilize things.May 31, 15:42 UTC Monitoring - Our API service has recovered and is operating normally. We are continuing to monitor the issue.May 31, 15:27 UTC Investigating - Travis CI for private projects (https://travis-ci.com) is currently partially unavailable as our API is currently seeing an elevated number of errors.

Last Update: A few months ago

travis-ci.com partially unavailable

May 31, 18:36 UTC Update - We have resumed GitHub syncing at reduced scale.May 31, 16:15 UTC Update - GitHub syncing has been temporarily disabled while we stabilize things.May 31, 15:42 UTC Monitoring - Our API service has recovered and is operating normally. We are continuing to monitor the issue.May 31, 15:27 UTC Investigating - Travis CI for private projects (https://travis-ci.com) is currently partially unavailable as our API is currently seeing an elevated number of errors.

Last Update: A few months ago

travis-ci.com partially unavailable

May 31, 16:15 UTC Update - GitHub syncing has been temporarily disabled while we stabilize things.May 31, 15:42 UTC Monitoring - Our API service has recovered and is operating normally. We are continuing to monitor the issue.May 31, 15:27 UTC Investigating - Travis CI for private projects (https://travis-ci.com) is currently partially unavailable as our API is currently seeing an elevated number of errors.

Last Update: A few months ago

Build delays - GitHub API Latency

May 31, 16:13 UTC Resolved - Open-source backlogs have been processed, builds are processing normally.May 31, 15:13 UTC Update - Upstream has recovered, and we have completed processing of our backlogs for incoming builds and github status updates. Private builds should no longer see any delays. We are working through the job backlog for open-source builds, which are still experiencing some delays. Thanks for your patience! 💛May 31, 14:28 UTC Update - We are still seeing elevated error levels, and we are scaling out capacity to work through the backlog more quickly.May 31, 13:27 UTC Monitoring - Please bear with us as we scale out for demand due to GitHub api latency. Short wait times for builds and delayed notifications are expected.

Last Update: A few months ago

travis-ci.com partially unavailable

May 31, 15:42 UTC Monitoring - Our API service has recovered and is operating normally. We are continuing to monitor the issue.May 31, 15:27 UTC Investigating - Travis CI for private projects (https://travis-ci.com) is currently partially unavailable as our API is currently seeing an elevated number of errors.

Last Update: A few months ago

travis-ci.com partially unavailable

May 31, 15:27 UTC Investigating - Travis CI for private projects (https://travis-ci.com) is currently partially unavailable as our API is currently seeing an elevated number of errors.

Last Update: A few months ago

Build delays - GitHub API Latency

May 31, 15:13 UTC Update - Upstream has recovered, and we have completed processing of our backlogs for incoming builds and github status updates. Private builds should no longer see any delays. We are working through the job backlog for open-source builds, which are still experiencing some delays. Thanks for your patience! 💛May 31, 14:28 UTC Update - We are still seeing elevated error levels, and we are scaling out capacity to work through the backlog more quickly.May 31, 13:27 UTC Monitoring - Please bear with us as we scale out for demand due to GitHub api latency. Short wait times for builds and delayed notifications are expected.

Last Update: A few months ago

Build delays - GitHub API Latency

May 31, 14:28 UTC Update - We are still seeing elevated error levels, and we are scaling out capacity to work through the backlog more quickly.May 31, 13:27 UTC Monitoring - Please bear with us as we scale out for demand due to GitHub api latency. Short wait times for builds and delayed notifications are expected.

Last Update: A few months ago

Build delays - GitHub API Latency

May 31, 13:27 UTC Monitoring - Please bear with us as we scale out for demand due to GitHub api latency. Short wait times for builds and delayed notifications are expected.

Last Update: A few months ago

Build delays for open source builds

May 31, 13:27 UTC Monitoring - Please bear with us as we scale out for demand due to GitHub api latency. Short wait times for builds and delayed notifications are expected.

Last Update: A few months ago

Build delays for open source builds

May 31, 13:27 UTC Monitoring - Please bear with us as we scale out for demand due to GitHub api latency. Short wait times for builds are expected.

Last Update: A few months ago

Delays for `sudo: required` builds on both .com and .org

May 31, 13:28 UTC Resolved - The network error rates have returned to normal, low levels, despite the fact that we have yet to identify the contributing factors with the help of Google support. Thank you again for your patience.May 31, 12:12 UTC Update - We are still working with Google Compute Engine to get to the source of the SSH timeouts; users may continue to experience longer-than-normal wait times for `sudo: required` builds.May 30, 23:13 UTC Update - We are continuing to work with Google support to identify the factors contributing to SSH timeouts. We will publish another update when new information is available. Thank you again for your patience.May 30, 21:26 UTC Update - We are continuing to work with Google support to identify the factors contributing to SSH timeouts. Thank you for your patience!May 30, 18:19 UTC Investigating - We are currently seeing an elevated number of `sudo: required` builds getting re-queued on GCE, which is causing delays affecting both private and public builds. We are escalating the issue with Google's support. We will post an update when we know more. Thank you for your patience!

Last Update: A few months ago

Build delays for open source builds

May 31, 13:27 UTC Monitoring - Please bear with us as we scale out for demand due to github api latency. Short wait times for builds on our open source .org are expected.

Last Update: A few months ago

Delays for `sudo: required` builds on both .com and .org

May 31, 12:12 UTC Update - We are still working with Google Compute Engine to get to the source of the SSH timeouts; users may continue to experience longer-than-normal wait times for `sudo: required` builds.May 30, 23:13 UTC Update - We are continuing to work with Google support to identify the factors contributing to SSH timeouts. We will publish another update when new information is available. Thank you again for your patience.May 30, 21:26 UTC Update - We are continuing to work with Google support to identify the factors contributing to SSH timeouts. Thank you for your patience!May 30, 18:19 UTC Investigating - We are currently seeing an elevated number of `sudo: required` builds getting re-queued on GCE, which is causing delays affecting both private and public builds. We are escalating the issue with Google's support. We will post an update when we know more. Thank you for your patience!

Last Update: A few months ago

Delays for `sudo: required` builds on both .com and .org

May 30, 23:13 UTC Update - We are continuing to work with Google support to identify the factors contributing to SSH timeouts. We will publish another update when new information is available. Thank you again for your patience.May 30, 21:26 UTC Update - We are continuing to work with Google support to identify the factors contributing to SSH timeouts. Thank you for your patience!May 30, 18:19 UTC Investigating - We are currently seeing an elevated number of `sudo: required` builds getting re-queued on GCE which is causing delays affecting both private and public builds. We are escalating the issue with Google's support. We will post an update when we know more. Thank you for your patience!

Last Update: A few months ago

Delays for `sudo: required` builds on both .com and .org

May 30, 21:26 UTC Update - We are continuing to work with Google support to identify the factors contributing to SSH timeouts. Thank you for your patience!May 30, 18:19 UTC Investigating - We are currently seeing an elevated number of `sudo: required` builds getting re-queued on GCE which is causing delays affecting both private and public builds. We are escalating the issue with Google's support. We will post an update when we know more. Thank you for your patience!

Last Update: A few months ago

Delays for `sudo: required` builds on both .com and .org

May 30, 18:19 UTC Investigating - We are currently seeing an elevated number of `sudo: required` builds getting re-queued on GCE which is causing delays affecting both private and public builds. We are escalating the issue with Google's support. We will post an update when we know more. Thank you for your patience!

Last Update: A few months ago

Logs delayed for .com builds

May 22, 23:55 UTC Resolved - Logs are now processing normally. Thanks for your patience!May 22, 23:48 UTC Investigating - We are investigating a delay in processing logs for paid, closed-source builds.

Last Update: A few months ago

Logs delayed for .com builds

May 22, 23:48 UTC Investigating - We are investigating a delay in processing logs for paid, closed-source builds.

Last Update: A few months ago

macOS Infrastructure Network Improvements

May 21, 20:51 UTC Completed - The network maintenance is complete. Thank you for your patience, and happy building!May 21, 20:37 UTC Verifying - The infrastructure provider has installed the HA pair, and we are checking to make sure that everything is still working as intended.May 21, 20:12 UTC In progress - macOS builds have been halted in preparation for network maintenance. We are now waiting on our infrastructure provider to install the HA router pair.May 21, 20:03 UTC Update - macOS builds will be halted shortly in preparation for the network maintenance on the macOS build infrastructure.May 16, 01:32 UTC Scheduled - Job processing on the macOS infrastructure will be stopped for a time in order for us to install a high-availability router pair in place of our single router. This will allow us to provide more stability going forward.

Last Update: A few months ago

macOS Infrastructure Network Improvements

May 21, 20:37 UTC Verifying - The infrastructure provider has installed the HA pair, and we are checking to make sure that everything is still working as intended.May 21, 20:12 UTC In progress - macOS builds have been halted in preparation for network maintenance. We are now waiting on our infrastructure provider to install the HA router pair.May 21, 20:03 UTC Update - macOS builds will be halted shortly in preparation for the network maintenance on the macOS build infrastructure.May 16, 01:32 UTC Scheduled - Job processing on the macOS infrastructure will be stopped for a time in order for us to install a high-availability router pair in place of our single router. This will allow us to provide more stability going forward.

Last Update: A few months ago

macOS Infrastructure Network Improvements

May 21, 20:12 UTC In progress - macOS builds have been halted in preparation for network maintenance. We are now waiting on our infrastructure provider to install the HA router pair.May 21, 20:03 UTC Update - macOS builds will be halted shortly in preparation for the network maintenance on the macOS build infrastructure.May 16, 01:32 UTC Scheduled - Job processing on the macOS infrastructure will be stopped for a time in order for us to install a high-availability router pair in place of our single router. This will allow us to provide more stability going forward.

Last Update: A few months ago

Private builds aren't starting properly

May 18, 20:55 UTC Resolved - Jobs and logs are processing normally.May 18, 20:17 UTC Monitoring - We are happy to report that there's no backlog on our sudo-enabled infrastructure (i.e. GCE) at this time. There are still backlogs on our container-based and Mac infrastructures.May 18, 20:04 UTC Update - We have restarted build processing and are seeing the backlog going down. We are monitoring to make sure everything is back to normal. Thank you!May 18, 19:45 UTC Identified - We've identified that a network issue of some kind interrupted connections to our RabbitMQ cluster and we're in the process of restarting backend services that have been left in an error state due to the interruption. We'll provide another update as we confirm we're processing builds properly again.May 18, 19:11 UTC Investigating - We are seeing reports of builds not starting for private repositories. We are currently looking into it. Thank you for your patience!

Last Update: A few months ago

Private builds aren't starting properly

May 18, 20:17 UTC Monitoring - We are happy to report that there's no backlog on our sudo-enabled infrastructure (i.e. GCE) at this time. There are still backlogs on our container-based and Mac infrastructures.May 18, 20:04 UTC Update - We have restarted build processing and are seeing the backlog going down. We are monitoring to make sure everything is back to normal. Thank you!May 18, 19:45 UTC Identified - We've identified that a network issue of some kind interrupted connections to our RabbitMQ cluster and we're in the process of restarting backend services that have been left in an error state due to the interruption. We'll provide another update as we confirm we're processing builds properly again.May 18, 19:11 UTC Investigating - We are seeing reports of builds not starting for private repositories. We are currently looking into it. Thank you for your patience!

Last Update: A few months ago

Private builds aren't starting properly

May 18, 20:04 UTC Update - We have restarted build processing and are seeing the backlog going down. We are monitoring to make sure everything is back to normal. Thank you!May 18, 19:45 UTC Identified - We've identified that a network issue of some kind interrupted connections to our RabbitMQ cluster and we're in the process of restarting backend services that have been left in an error state due to the interruption. We'll provide another update as we confirm we're processing builds properly again.May 18, 19:11 UTC Investigating - We are seeing reports of builds not starting for private repositories. We are currently looking into it. Thank you for your patience!

Last Update: A few months ago

Private builds aren't starting properly

May 18, 19:45 UTC Identified - We've identified that a network issue of some kind interrupted connections to our RabbitMQ cluster and we're in the process of restarting backend services that have been left in an error state due to the interruption. We'll provide another update as we confirm we're processing builds properly again.May 18, 19:11 UTC Investigating - We are seeing reports of builds not starting for private repositories. We are currently looking into it. Thank you for your patience!

Last Update: A few months ago

Private builds aren't starting properly

May 18, 19:11 UTC Investigating - We are seeing reports of builds not starting for private repositories. We are currently looking into it. Thank you for your patience!

Last Update: A few months ago

macOS Infrastructure Network Improvements

May 16, 01:32 UTC Scheduled - Job processing on the macOS infrastructure will be stopped for a time in order for us to install a high-availability router pair in place of our single router. This will allow us to provide more stability going forward.

Last Update: A few months ago

Delays for private builds on travis-ci.com

May 8, 22:24 UTC Resolved - This incident has been resolved.May 8, 21:17 UTC Update - Backlogs are clear on all Linux queues. We are still working through a backlog for Mac jobs.May 8, 20:40 UTC Update - Backlogs are clear on all container-based Linux queues. We have brought additional capacity online for sudo-enabled Linux. Backlogs remain on: - Mac - sudo-enabled Precise and TrustyMay 8, 19:59 UTC Monitoring - All infrastructures are now processing builds. The container-based infrastructure running Precise doesn't have a backlog right now. A backlog remains, however, for the following infrastructures: - Mac - container-based infrastructure (Trusty) - sudo-enabled Precise and Trusty We will give another update on the state of the backlog for these infrastructures soon.May 8, 19:36 UTC Update - Our RabbitMQ cluster has been restored. We are now restarting our workers to start processing builds again.May 8, 19:14 UTC Update - We are currently working with our RabbitMQ provider to help us get our cluster back up. We are sorry for the continued troubles.May 8, 18:33 UTC Update - We are still having issues with the connectivity between our components. We need to proceed with emergency maintenance of our RabbitMQ cluster to be able to fix this issue. We thank you again for your patience.May 8, 17:44 UTC Update - We are still working on bringing back our build infrastructures and build logs are still unavailable. We have proceeded to restart one of our components and are currently assessing the resulting situation. We are sorry for the continued troubles.May 8, 16:34 UTC Update - Full build processing capacity has been restored on our Mac infrastructure. Other infrastructures should be back on their feet shortly. Thanks for hanging in there.May 8, 16:19 UTC Investigating - We are currently seeing a backlog of private builds on travis-ci.com. Hence, you might see delays before your builds start. Our Engineering Team is currently looking into it. We'll update this incident when we know more. Thank you for your enduring patience.

Last Update: A few months ago

Build logs are currently missing for private builds on travis-ci.com

May 8, 16:17 UTC Resolved - Logs should now be displaying properly. Please reach out at support [at] travis-ci [dot] com if that's not the case. Thank you! 💛 May 8, 16:05 UTC Monitoring - We've been able to get the display of build logs back on its feet. We are monitoring to make sure everything is working correctly now. May 8, 15:37 UTC Investigating - We are currently seeing reports of build logs being unavailable for private builds on travis-ci.com. Our Engineering Team is currently looking into it. We will update this incident as soon as we know more. Thank you for your patience!

Last Update: A few months ago

Syncing user data with GitHub is delayed on travis-ci.org

May 7, 11:37 UTC Resolved - This incident has been resolved.May 7, 11:14 UTC Monitoring - A fix has been implemented and we are monitoring the results.May 7, 08:39 UTC Investigating - We are currently investigating this issue.

Last Update: A few months ago

Gem installation from rubygems.org fails on our sudo-less Precise container for both private and public builds

May 3, 00:33 UTC Resolved - The issue has been resolved. May 2, 21:53 UTC Monitoring - Gem installation should now be working. Please restart the affected builds. Thank you! May 2, 21:02 UTC Identified - We identified the issue as being unable to establish a TLS handshake with https://rubygems.org using anything less than TLS v1.2. We are working with rubygems.org support and their upstream service provider to resolve this issue. May 2, 21:01 UTC Update - A side effect of this issue is that we are unable to install `dpl`, which is required for deployment. May 2, 20:31 UTC Investigating - We've received reports of gem installation failures on our Precise container, i.e. for builds with:

```
sudo: false
dist: precise # this may or may not exist
```

We are publicly tracking this issue here: https://github.com/travis-ci/travis-ci/issues/7685. In the meantime, we suggest routing your builds to our sudo-enabled Precise infrastructure with the following in your .travis.yml file:

```
sudo: required
dist: precise
```

Thank you for your patience!

Last Update: A few months ago

Bundler isn't found on our sudo-less Precise container for both private and public builds

May 2, 20:31 UTC Investigating - We've received reports that Bundler isn't available for builds on our Precise container, i.e. for builds with `sudo: false` and, optionally, `dist: precise`. We are publicly tracking this issue here: https://github.com/travis-ci/travis-ci/issues/7685. In the meantime, we suggest routing your builds to our sudo-enabled Precise infrastructure by setting `sudo: required` and `dist: precise` in your .travis.yml file. Thank you for your patience!
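
For illustration, a complete minimal .travis.yml applying the workaround above might look like the following sketch; the `language`, `rvm`, and `script` lines are hypothetical placeholders and not part of the incident report:

```
# Sketch of the suggested workaround: opt out of the affected sudo-less
# container and run on the sudo-enabled Precise infrastructure instead.
language: ruby             # placeholder - use your project's language
rvm: 2.4.1                 # placeholder Ruby version
sudo: required             # route the job to the sudo-enabled infrastructure
dist: precise              # stay on the Precise image
script: bundle exec rake   # placeholder build command
```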

Last Update: A few months ago

Logs display issues for public repositories

Apr 27, 08:08 UTC Resolved - We identified and fixed an issue displaying logs for public repositories after the maintenance. Logs should be displaying normally now. Thanks for your patience.

Last Update: A few months ago

Logs database partition maintenance for public repos

Apr 27, 06:18 UTC Completed - The scheduled maintenance has been completed.Apr 27, 05:28 UTC Verifying - Verification is currently underway for the maintenance items.Apr 27, 05:03 UTC Update - We have extended the maintenance for another hour while waiting for an index to rebuild.Apr 27, 03:38 UTC Update - The most recently executed partitioning query is taking longer than expected, so we are extending the maintenance by 1 hour.Apr 27, 01:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Apr 26, 15:36 UTC Scheduled - We need to get our regular partition maintenance query back on track, which will require an interruption in log parts processing. During the maintenance, we expect jobs to continue running, including any deployment steps and GitHub status updates, but realtime log streaming and lookup of log output from *public* jobs that were running from the beginning of maintenance will be unavailable.

Last Update: A few months ago

Delays in processing builds for container-based private repositories

Apr 25, 12:54 UTC Resolved - The backlog for container-based infrastructure for private builds has cleared. Thank you for your patience. Apr 25, 11:23 UTC Identified - A surge in demand for our docker-based infrastructure for private repositories has caused a small backlog. We will scale up manually to process the backlog more quickly. We apologize for the build wait times.

Last Update: A few months ago

Database connectivity issues

Apr 25, 05:22 UTC Resolved - This incident has been resolved.Apr 25, 04:45 UTC Monitoring - A fix has been implemented and we are monitoring the results.Apr 25, 03:41 UTC Identified - Primary databases for both public and private repositories appear to have failed over to their respective standby servers. All but one application recovered automatically, and we finished reconfiguring the remaining application a few minutes ago. We are in the process of verifying behavior and checking for potential data loss.Apr 25, 03:29 UTC Investigating - We are investigating alerts that indicate issues communicating with the primary database for private repos.

Last Update: A few months ago

Logs database partition maintenance

Apr 23, 18:57 UTC Completed - The scheduled maintenance has been completed.Apr 23, 18:44 UTC Verifying - Verification is currently underway for the maintenance items.Apr 23, 18:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Apr 17, 21:34 UTC Scheduled - We are introducing the use of partitions to our logs database for travis-ci.com in order to ensure we can continue to scale the existing design. This will require degraded service while introducing a gap in logs processing, followed by a database upgrade and migration. Upon completion, we expect job processing to continue as usual and quickly catch up with the backlog that accumulates during the maintenance.

Last Update: A few months ago

Logs database partition maintenance

Apr 16, 19:58 UTC Completed - The scheduled maintenance has been completed.Apr 16, 19:30 UTC Verifying - Verification is currently underway for the maintenance items.Apr 16, 18:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Apr 14, 17:57 UTC Scheduled - We are introducing the use of partitions to our logs database for travis-ci.org in order to ensure we can continue to scale the existing design. This will require degraded service while introducing a gap in logs processing, followed by a database upgrade and migration. Upon completion, we expect job processing to continue as usual and quickly catch up with the backlog that accumulates during the maintenance.

Last Update: A few months ago

Delays on container-based paid builds

Apr 13, 16:14 UTC Resolved - Capacity is now back to normal on our container-based infrastructure for travis-ci.com. Thank you for your patience!Apr 13, 15:35 UTC Investigating - We are investigating delays starting container-based builds on travis-ci.com.

Last Update: A few months ago

travis-ci.org logs database outage.

Apr 10, 16:37 UTC Resolved - This incident has been resolved. Apr 10, 15:34 UTC Monitoring - A fix has been implemented and we are monitoring the results. Apr 10, 15:28 UTC Update - We have completed the data restoration process. We will be re-enabling mutability of the affected records shortly. Apr 7, 14:52 UTC Update - We are continuing work on restoring historical data, which we expect to take between 1 and 3 days. We will be updating this incident once the restoration process is complete. Mar 31, 16:12 UTC Identified - We apologize for the delay in updating this incident. Due to database infrastructure issues, we have been unable to successfully extract all data from our previous primary database. Any jobs that completed prior to the incident cannot be effectively restarted for updated logs. We continue working towards restoring the relevant records. This is estimated to be a ~5-day process due to the size of the database tables. We are extremely sorry and just as frustrated with this as you probably are. We will provide another update once we have more information. Please email support@travis-ci.com if you have any questions in the meantime. Mar 30, 01:42 UTC Update - We apologize for the delay in updating this incident. Our earlier database maintenance included needing to put a new, empty logs database into production. The previous database still contains the only copy of parts of some build logs, written in the few minutes before the database was no longer able to accept writes. We have been working towards extracting that data this afternoon so that we can archive it to S3 and make it accessible again. However, due to the effects of the original problem, we need the assistance of our database infrastructure provider in performing some actions on the previous primary database. We've escalated this with them, but do not currently have an ETA on when this work will be completed. We will provide another update once we hear back from them. Please email support@travis-ci.com if you have any questions in the meantime. Mar 29, 20:01 UTC Monitoring - If you restart jobs that completed before 18:00 UTC, you will continue to see the previous job's output or inconsistent logs. We are working on a resolution for this, but it may be a few hours before it is done. Mar 29, 19:50 UTC Update - We are resuming build processing. New build logs should appear, but recent build logs may be unavailable until further notice. We also expect some delay in build processing while we catch up with the backlog. Mar 29, 19:23 UTC Update - We are continuing our database maintenance. Mar 29, 18:52 UTC Update - We are performing emergency database maintenance. We hope to resume processing build requests shortly. Mar 29, 18:22 UTC Update - We have stopped accepting builds while we perform emergency maintenance. Mar 29, 18:06 UTC Identified - The logs database is having problems writing logs. We are working with our service provider to resolve this issue. Mar 29, 18:00 UTC Investigating - We are investigating a travis-ci.org logs database problem. Logs are unavailable at this time.

Last Update: A few months ago

Slow network connection between sudo-enabled infrastructure and Azure

Apr 8, 14:07 UTC Resolved - Network performance is back to normal. Apr 8, 00:55 UTC Monitoring - We are seeing promising reports of improved network performance. If your builds were affected previously, please restart them to upload again. Apr 7, 14:34 UTC Investigating - We have received reports of slow network connections between jobs running on our sudo-enabled infrastructure and Microsoft Azure servers. This typically manifests when a job attempts to upload Docker images and times out after 10 minutes. The problem started around 2017-04-06 01:00 UTC (about 40 hours ago). We are working with our infrastructure service provider to resolve this issue.
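
One possible stopgap while such network slowness persists (an editor's sketch, not something suggested in the incident report) is to wrap the slow upload in Travis CI's `travis_wait` helper, which extends how long a silent command may run before the job is terminated for producing no output; the image name and timing below are placeholders:

```
# Hypothetical .travis.yml fragment for a Docker-pushing job.
sudo: required
services:
  - docker
script:
  - docker build -t example/myapp .
  # Allow the push up to 30 minutes without log output instead of the default 10.
  - travis_wait 30 docker push example/myapp
```

Restarting the affected build once network performance recovers, as the Monitoring update above suggests, remains the primary fix.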

Last Update: A few months ago

Build delays on macOS infrastructure

Apr 5, 12:23 UTC Resolved - macOS builds are once again being processed normally. Apr 5, 11:13 UTC Monitoring - We've corrected the network settings and are currently monitoring the situation. Apr 5, 10:45 UTC Identified - We've identified the underlying cause as a network misconfiguration on one of our base build VMs. Apr 5, 10:16 UTC Investigating - We are investigating issues with our macOS infrastructure, which are resulting in build delays for a limited number of jobs.

Last Update: A few months ago

Private MacOS Builds backlog

Apr 4, 18:42 UTC Resolved - The macOS private builds backlog has cleared. Apr 4, 16:42 UTC Identified - Private macOS builds are experiencing a backlog at the moment. We've increased capacity to get through it more quickly; expect wait times for another 30 minutes.

Last Update: A few months ago

Delays for sudo-enabled Linux builds

Mar 22, 16:59 UTC Resolved - The issue was confirmed fixed by our infrastructure provider, and we're no longer seeing errors while booting instances. Jobs are running normally across all our infrastructures at this time.Mar 22, 16:29 UTC Identified - Our infrastructure provider has identified an issue and is working on a fix.Mar 22, 15:46 UTC Investigating - We're investigating elevated error rates while booting instances for sudo-enabled Linux builds, which is causing jobs to take longer to start.

Last Update: A few months ago

Logs display issues for public projects using travis-ci.org

Mar 23, 15:32 UTC Resolved - We have confirmed the effectiveness of the short-term fix. We will continue investigating a longer-term resolution. Thank you for your patience. Mar 23, 14:55 UTC Monitoring - We have identified the problematic component. While we continue investigating a long-term fix, we have applied a short-term one so that logs load correctly. Mar 23, 14:13 UTC Investigating - We're investigating issues displaying public logs while builds are running.

Last Update: A few months ago

Delayed build requests due to webhook delivery delays on GitHub

Mar 24, 12:36 UTC Resolved - Webhook delivery delays have been resolved on GitHub. We have confirmed we're receiving build request events immediately.Mar 24, 12:27 UTC Identified - We are experiencing delayed build requests due to webhook delivery delays on GitHub (up to ~3min). We are monitoring the situation. Also see https://status.github.com.

Last Update: A few months ago

Logs display issues

Mar 27, 08:26 UTC Resolved - The backlog for both open source and private builds has now cleared. Thank you for your patience. Mar 27, 08:22 UTC Update - The backlog for open source builds has cleared. The private builds backlog continues to drop. Mar 27, 08:02 UTC Monitoring - We have identified the issue, applied a fix, and are monitoring as things recover. Display of newer logs may be delayed until we process the backlog. Mar 27, 08:02 UTC Identified - We have identified the issue, applied a fix, and are monitoring as things recover. Display of newer logs may be delayed until we process the backlog. Mar 27, 07:40 UTC Investigating - We're investigating issues with logs not being displayed.

Last Update: A few months ago

API maintenance for travis-ci.org and travis-ci.com

Mar 27, 21:35 UTC Completed - The scheduled maintenance has been completed. Mar 27, 20:52 UTC Verifying - We've completed the update of the API components on both travis-ci.org and travis-ci.com. We are continuing to monitor closely. Mar 27, 19:43 UTC Update - We've finished the update of the API components on travis-ci.org and we are monitoring things closely. We shall begin updating travis-ci.com shortly. Mar 27, 18:30 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary. Mar 27, 18:08 UTC Scheduled - We will be performing maintenance on our API for more efficient access to our logs database. We will first update the API for travis-ci.org and then travis-ci.com, with at least an hour in between for monitoring. We don't expect any visible downtime or user impact during the maintenance. Please contact support [at] travis-ci [dot] com if you see something.

Last Update: A few months ago

Logs display delayed

Mar 28, 11:45 UTC Resolved - This issue has been resolved and logs are working as expected now. Thanks for your patience!Mar 28, 11:03 UTC Monitoring - We've deployed a fix for this issue and logs should be rendering properly now. We're currently monitoring. Please email support@travis-ci.com if you notice any log hiccups.Mar 28, 10:18 UTC Identified - We’ve identified an issue that is causing some logs for both public and private repositories to not render properly. We’re working on a fix for this.Mar 28, 07:52 UTC Investigating - We are investigating reports that logs are not always displayed in the web UI.

Last Update: A few months ago

Heightened rate of API errors

Mar 29, 14:09 UTC Resolved - Systems are operating normally. Thanks for bearing with us!Mar 29, 13:50 UTC Monitoring - Maintenance has been completed and services appear to be stable. We will continue to monitor.Mar 29, 13:26 UTC Identified - We mitigated the issue and API error rates have recovered. We are continuing with maintenance to make sure the API is stable.Mar 29, 13:04 UTC Investigating - We are investigating reports of a higher rate of API errors for travis-ci.org.

Last Update: A few months ago

travis-ci.org logs database outage

Mar 31, 16:12 UTC Identified - We apologize for the delay in updating this incident. Due to database infrastructure issues we have been unable to successfully extract all data from our previous primary database. Any jobs that completed prior to the incident cannot be effectively restarted for updated logs. We continue working towards restoring the relevant records. This is estimated to be a ~5 day process due to the size of the database tables. We are extremely sorry and just as frustrated with this as you probably are. We will provide another update once we have more information. Please email support@travis-ci.com if you have any questions in the meantime.Mar 30, 01:42 UTC Update - We apologize for the delay in updating this incident. Our earlier database maintenance included needing to put a new, empty logs database into production. The previous database still contains the only copy of parts of some build logs, written in the few minutes prior to when the database was no longer able to accept writes. We've been working towards extracting that data this afternoon so that we can get it archived into S3 and make it accessible again. However, due to the effects of the original problem, we need the assistance of our database infrastructure provider in performing some actions on the previous primary database. We've escalated this with them, but do not currently have an ETA on when this work will be completed. We will provide another update once we have an update from them. Please email support@travis-ci.com if you have any questions in the meantime.Mar 29, 20:01 UTC Monitoring - If you restart jobs that completed before 18:00 UTC, you will continue to see the previous job's output or inconsistent logs displayed. We are working on a resolution for this but it may be a few hours before this is done.Mar 29, 19:50 UTC Update - We are resuming build processing. New build logs should appear, but recent build logs may be unavailable until further notice. We also expect some delay in build processing while we catch up with the backlog.Mar 29, 19:23 UTC Update - We are continuing our database maintenance.Mar 29, 18:52 UTC Update - We are performing emergency database maintenance. We hope to resume processing build requests shortly.Mar 29, 18:22 UTC Update - We have stopped accepting builds while we perform emergency maintenance.Mar 29, 18:06 UTC Identified - The logs database is having problems writing logs. We are working with our service provider to resolve this issue.Mar 29, 18:00 UTC Investigating - We are investigating a travis-ci.org logs database problem. Logs are unavailable at this time.

Last Update: A few months ago

Delays in macOS jobs starting for both public and private repositories

Mar 31, 17:02 UTC Resolved - We've been able to stabilize things and all delays have been resolved at this point. If you continue to see any issues, please email support@travis-ci.comMar 31, 14:33 UTC Identified - We are working to recover the physical hosts and isolate them. We've made some improvements and you should see reduced wait times but delays will continue for now. We'll provide an update when we know more.Mar 31, 14:18 UTC Investigating - We are investigating issues with multiple physical hosts, which is resulting in delays in macOS builds for both public and private repositories.

Last Update: A few months ago

Mac infrastructure network maintenance

Apr 2, 22:12 UTC Completed - We ran into a bug with one of the new networking components and have decided to cancel the second stage of the maintenance and reschedule it for a later time. We've finished the cleanup and we're closing tonight's maintenance window now.Apr 2, 22:03 UTC Verifying - We're working on some final cleanup and will be closing the maintenance window shortly.Apr 2, 20:58 UTC Update - We're still working on the second stage of the maintenance, so we are extending the maintenance window.Apr 2, 20:00 UTC Update - We're starting the second stage of the maintenance now.Apr 2, 19:05 UTC Update - We've finished the first stage of the maintenance. We will start the second stage at 20:00 UTC, in just under an hour.Apr 2, 17:06 UTC In progress - We're beginning the Mac infrastructure network maintenance.Mar 24, 09:30 UTC Scheduled - On Sunday, April 2nd, 2017, from 17:00 to 21:00 UTC, we will be performing network maintenance on our Mac infrastructure to improve and test the redundancy in our network stack. The maintenance will be performed in two stages, the first starting at 17:00 UTC and the second starting at 20:00 UTC. We're not expecting any user-visible impact apart from brief packet loss as we perform failover tests.

Last Update: A few months ago

Build delays for public repositories

Apr 3, 20:46 UTC Resolved - Due to high demand on our public repos, we still have a small backlog for sudo-enabled and macOS builds. Expect short wait times to persist through the afternoon (PST / UTC-7). Jobs continue to be processed at full capacity.Apr 3, 18:14 UTC Monitoring - We've identified the issue. It was an enqueue error on our scheduler due to a locked db query. The block was removed, and jobs are being scheduled at full capacity. We will continue to monitor while the backlog of jobs for public repositories clears.Apr 3, 17:12 UTC Investigating - We're investigating build delays affecting all public repositories running at travis-ci.org.

Last Update: A few months ago

travis-ci.org website potentially unreachable or slow

Mar 20, 23:43 UTC Resolved - Things have been reliable and stable, so we are resolving this issue.Mar 20, 16:59 UTC Monitoring - The unresponsive host has been removed from the set of DNS records. The site should be fully reachable again. We continue to monitor the situation.Mar 20, 16:29 UTC Identified - We are receiving reports of travis-ci.org being unreachable for some users. One of the 8 edge server IPs appears to be unreachable via TCP. We have escalated the issue to our upstream provider.

Last Update: A few months ago

Intermittent slowness from both nodejs.org and registry.yarnpkg.com on container-based builds for public and private repositories

Mar 17, 02:45 UTC Resolved - User reports of issues have cleared up as of approximately noon PST. We apologize for the delays in resolving this issue. Please email support@travis-ci.com if you see any further issues.Mar 16, 15:30 UTC Update - We are seeing some reports of similar intermittent slowness from both nodejs.org and registry.yarnpkg.com from other users outside of Travis CI's infrastructure, which seems to indicate a potential upstream issue. Restarting affected builds can help and is recommended for the moment. We are continuing to investigate.Mar 16, 14:20 UTC Investigating - We’re currently investigating slowness and connectivity issues on `sudo: false`, container-based builds. This seems to be especially affecting Node.js installations via nvm.

Last Update: A few months ago
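
For reference, the affected container-based builds are the ones configured with `sudo: false`, and for `language: node_js` projects Travis installs the requested Node.js version via nvm during job setup, which is the step that was hitting the slow downloads. A minimal sketch of such a build; the Node version and commands are just examples:

```yaml
# .travis.yml sketch: a container-based (sudo: false) Node.js build
language: node_js
node_js:
  - "6"          # example version; installed via nvm during job setup
sudo: false      # run on the container-based infrastructure
install:
  - npm install
script:
  - npm test
```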

Planned: Reduced public macOS capacity

Mar 16, 02:40 UTC Resolved - At this time things are now running at our current full capacity for public macOS builds. Please email support@travis-ci.com if you have any questions. Thank you for your patience while we worked through some underlying improvements that required reduced capacity today.Mar 16, 01:29 UTC Update - We are beginning to add capacity back for public macOS jobs and will provide an update when this process is finished.Mar 15, 18:08 UTC Identified - We are in the process of reducing capacity for public macOS builds, in order to support moving some of our hardware capacity for infrastructure improvements. We'll be at reduced capacity for a few hours and we'll provide periodic updates. Please email support@travis-ci.com if you have any questions.

Last Update: A few months ago
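
For reference, public repositories opt into this macOS infrastructure via the `os` key in `.travis.yml`. A minimal sketch, assuming an Xcode project; the `osx_image` value and scheme name are illustrative:

```yaml
# .travis.yml sketch: a job targeting the macOS infrastructure
language: objective-c
os: osx
osx_image: xcode8.3                 # illustrative Xcode image selection
script:
  - xcodebuild -scheme MyApp test   # hypothetical scheme name
```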

Network connectivity issues for public and private sudo-enabled builds

Mar 15, 22:32 UTC Resolved - Google has resolved the issue and we're seeing jobs executing as expected. Please email support@travis-ci.com if you continue to see any network issues. Thanks for your patience during this issue.Mar 15, 21:21 UTC Update - We're seeing job error rates start to drop, which indicates the issue may be improving, but we're waiting on updates from Google Cloud's status site, https://status.cloud.google.com/incident/compute/17006.Mar 15, 21:00 UTC Identified - Our cloud provider for sudo-enabled builds, Google Compute Engine (GCE), is currently experiencing network connectivity issues, and this is affecting public and private builds. We're monitoring their status incident, https://status.cloud.google.com/incident/compute/17006, closely and will provide updates as we get them. During this incident you'll see connectivity failures to external resources like Launchpad PPAs, PyPI, RubyGems, etc.

Last Update: A few months ago
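
While connectivity to external resources (Launchpad PPAs, PyPI, RubyGems, etc.) is unreliable, wrapping network-dependent steps in the built-in `travis_retry` helper, which re-runs a failing command a few times, can reduce spurious failures. A minimal sketch, assuming a Python project with a hypothetical `requirements.txt`:

```yaml
# .travis.yml sketch: retry network-dependent install steps during flaky connectivity
language: python
python:
  - "2.7"                                          # example interpreter version
install:
  - travis_retry pip install -r requirements.txt   # re-run on transient failure
  - travis_retry pip install pytest
script:
  - pytest
```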

API errors for travis-ci.com

Mar 15, 10:56 UTC Resolved - The API is responding normally again.Mar 15, 10:31 UTC Investigating - We're investigating elevated error rates for the travis-ci.com API, which may also affect the web UI.

Last Update: A few months ago

Seeing high amounts of errors when trying to start public macOS jobs

Mar 15, 02:45 UTC Resolved - At this time we've restarted the errored jobs and they are in our backlog again. We are processing jobs at full capacity, with the usual backlog due to high demand for public macOS builds. We'll be publishing a postmortem of this incident by the end of the week. We thank you for your patience and understanding while we resolved this issue.Mar 15, 01:52 UTC Update - A mistake in the order in which we brought up some of the services resulted in all pending macOS jobs "erroring out" quickly. We are currently working on resetting the state of those jobs so that they'll be queued and run again. We are very sorry for this issue and will post an update when the jobs have been requeued.Mar 15, 01:36 UTC Update - We are beginning to ramp up the capacity and are monitoring things closely.Mar 15, 01:10 UTC Update - We've been able to restore the backplane to service and we're working on verification and preparing to resume builds and ramp back up to full capacity.Mar 15, 00:47 UTC Update - The backplane has come up in an unexpected state and we're escalating with our infrastructure provider, as we'll need their help in resolving this issue. In the meantime we continue to run public macOS builds with degraded capacity. travis-ci.com jobs are not affected at this time. Thank you for your patience while we work to resolve this.Mar 15, 00:23 UTC Identified - The issue has been identified and a fix is being implemented.Mar 15, 00:22 UTC Update - The control plane restart is still in progress; we discovered that the root filesystem partition had filled up and we weren't alerted to this issue. We've cleaned up the filesystem and are still working to get the backplane services started up again. In the meantime we continue to run public macOS builds with degraded capacity. travis-ci.com jobs are not affected at this time.Mar 14, 23:43 UTC Update - The "control backplane" for part of our virtualization infrastructure is unstable, so we're initiating a restart of the backplane. In the meantime we continue to run public macOS builds with degraded capacity. travis-ci.com jobs are not affected at this time.Mar 14, 23:27 UTC Update - We are investigating some intermittent stability errors with some of our physical servers for this infrastructure and we are working to restore stability to this. At this time we're running public macOS builds with degraded capacity.Mar 14, 22:59 UTC Investigating - We are seeing high amounts of errors when trying to start public macOS jobs. This is causing build delays for travis-ci.org macOS builds and we are investigating why.

Last Update: A few months ago

Delays creating Pull Request builds

Mar 14, 15:51 UTC Resolved - We’ve identified and fixed an issue which caused Pull Request builds to be delayed for approximately 20 minutes. Pull request builds should now be processing normally. Thank you!

Last Update: A few months ago

Missing build requests for Pull Requests

Mar 13, 20:47 UTC Resolved - The rollback has successfully resolved the issue. The underlying cause was a dependency version conflict that wasn't caught before; we're working on resolving that conflict, but doing so is not needed to resolve this incident. Please email support@travis-ci.com if you see any further issues. Thanks for your patience while we resolved this issue.Mar 13, 20:01 UTC Identified - We've identified an issue with a deploy that was resulting in some PR builds, such as ones with merge conflicts, not running. We've rolled back this release and confirmed that new builds with merge conflicts will run. You'll need to close/reopen existing PRs **or** push a new commit if close/reopen does not work. We're still working on a fix for the issue. Thanks for your patience.Mar 13, 19:35 UTC Investigating - We've found and are currently investigating reports of missing builds for pull request events.

Last Update: A few months ago

Log delays on travis-ci.com

Mar 13, 15:24 UTC Resolved - We've processed the backlog and log messages are appearing in real-time again.Mar 13, 15:00 UTC Investigating - We're investigating delays in log messages for builds on travis-ci.com. Builds will run normally and get marked as finished, but the logs may take a few minutes to appear after the build is finished.

Last Update: A few months ago

Missing build requests for Pull Request events

Mar 10, 18:43 UTC Resolved - The issue has been resolved.Mar 10, 18:27 UTC Update - We have a report of close/reopen not triggering a PR build. In this case, try pushing a new commit (or force-pushing on top).Mar 10, 18:03 UTC Monitoring - We have deployed the fix for the upstream API change. As the previous PR build requests were lost, please close and reopen the affected PRs to trigger builds.Mar 10, 17:52 UTC Identified - We have traced the missing builds issue to a change in the upstream GitHub API that introduced new PR statuses we had not previously accounted for in our GitHub API client. This caused PR requests to be dropped. We are working on a fix to account for the new statuses.Mar 10, 16:21 UTC Investigating - We're currently investigating reports of missing builds for Pull Requests on https://travis-ci.com and https://travis-ci.org.

Last Update: A few months ago

Network maintenance on Mac infrastructure

Mar 8, 10:18 UTC Completed - The maintenance completed successfully.Mar 8, 09:00 UTC In progress - We've started the maintenance.Mar 7, 18:23 UTC Scheduled - We will be performing some network maintenance on our Mac infrastructure to add capacity to our network stack. We are performing the maintenance in stages and the components involved all have failovers, so we don't expect any visible downtime or user impact during the maintenance.

Last Update: A few months ago

Reduced capacity for private repo Mac Builds

Mar 7, 21:45 UTC Resolved - We have finished bringing additional capacity for private macOS builds online and caught up on existing backlog. Everything is operating as expected.Mar 7, 19:02 UTC Identified - Mac Builds for private repositories are currently backlogged while we bring additional capacity online.

Last Update: A few months ago

Database maintenance for private repo logs

Mar 5, 03:26 UTC Completed - The scheduled maintenance has been completed.Mar 5, 03:18 UTC Verifying - Verification is currently underway for the maintenance items.Mar 5, 03:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Mar 3, 21:49 UTC Scheduled - We need to promote the replica database used for job logs on private repositories. Our infrastructure provider has advised us to migrate to a newer instance within 30 days in order to ensure stability and avoid issues like the one we recently encountered (See: https://www.traviscistatus.com/incidents/hx7cnxbch9xf).

Last Update: A few months ago

Partial API/logs service outage for travis-ci.org

Mar 2, 09:49 UTC Resolved - We've been monitoring our system for a number of hours and things are now stable. Thanks again for your patience over the last few days.Mar 2, 04:28 UTC Monitoring - A fix has been implemented and we are monitoring the results.Mar 2, 02:36 UTC Update - We've resumed all build processing at this point. Builds are starting and running as expected. Logs display via the API and web UI is functional as well. We will be monitoring things closely for the next few hours and into tomorrow. Thank you to everyone for your patience, understanding, and the many kind words via Twitter.Mar 2, 02:20 UTC Update - The database work is done. We are in the process of resuming services and beginning to process jobs again. We're still verifying things and will post another update once we're confident jobs are being processed as expected.Mar 2, 01:48 UTC Update - Our database provider has asked to make some changes to the existing primary logs DB that require we stop processing new jobs temporarily. So all builds will be paused and logs display will result in an error from the API or web UI. We'll post an update once we've resumed builds.Mar 2, 01:07 UTC Update - We are currently waiting on a new replica logs database to finish provisioning and we plan to fail over to it once it is ready, which we expect to happen in roughly 5 hours. Until then delays in log displays and some errors from the API/web UI should be expected. We are sorry for the extended length of this issue and appreciate your patience while we work through this issue with our database infrastructure provider.Mar 1, 21:41 UTC Update - We are still working on a fix with our infrastructure provider.Mar 1, 20:14 UTC Update - We're currently mostly stable, and we're actively working with our infrastructure provider on a more complete fix. Thanks for hanging in there with us!Mar 1, 15:52 UTC Update - We have found a way to mitigate our degraded API performance in the short term. We continue to monitor performance and wait for the emergency failover database to provision. We are still experiencing a delay of logs in our web front end and will report back as soon as we can.Mar 1, 14:48 UTC Update - Our ongoing database connection issues are due to emergency maintenance following the recent AWS outage. We are working with our upstream provider to rectify a kernel bug and are currently waiting for a new database failover to be provisioned. We expect this to take some time, and will continue to post updates as we have them.Mar 1, 11:53 UTC Identified - We have traced the partial outage to an intermittent database connection issue, and we're working to resolve it.Mar 1, 09:16 UTC Investigating - We are experiencing a partial API outage on travis-ci.org, which is affecting performance of our web front end.

Last Update: A few months ago

Partial API outage

Mar 2, 02:36 UTC Update - We've resumed all build processing at this point. Builds are starting and running as expected. Logs display via the API and web UI is functional as well. We will be monitoring things closely for the next few hours and into tomorrow. Thank you to everyone for your patience, understanding, and the many kind words via Twitter.Mar 2, 02:20 UTC Update - The database work is done. We are in the process of resuming services and beginning to process jobs again. We're still verifying things and will post another update once we're confident jobs should be being processed as expected.Mar 2, 01:48 UTC Update - Our database provider has asked to make some changes to the existing primary logs DB that require we stop processing new jobs temporarily. So all builds will be paused and logs display will result in an error from the API or web UI. We'll post an update once we've resumed builds.Mar 2, 01:07 UTC Update - We are currently waiting on a new replica logs database to finish provisioning and we plan to fail over to it once it is ready, which we expect to happen roughly 5 hours. Until then delays in log displays and some errors from the API/web UI should be expected. We are sorry for the extended length of this issue and appreciate your patience while we work through this issue with our database infrastructure provider.Mar 1, 21:41 UTC Update - We are still working on a fix with our infrastructure provider.Mar 1, 20:14 UTC Update - We're currently mostly stable, and we're actively working with our infrastructure provider on a more complete fix. Thanks for hanging in there with us!Mar 1, 15:52 UTC Update - We have found a way to mitigate our degraded API performance in the short term. We continue to monitor performance and wait for the emergency failover database to provision. We are still experiencing a delay of logs in our web front end and will report back as soon as we can.Mar 1, 14:48 UTC Update - Our ongoing database connection issues are due to emergency maintenance following the recent AWS outage. We are working with our upstream provider to rectify a kernel bug and are currently waiting for a new database failover to be provisioned. We expect this to take some time, and will continue to post updates as we have them.Mar 1, 11:53 UTC Identified - We have traced the partial outage to an intermittent database connection issue, and we're working to resolve it.Mar 1, 09:16 UTC Investigating - We are experiencing a partial API outage on travis-ci.org, which is affecting performance of our web front end.

Last Update: A few months ago

Partial API outage

Mar 2, 02:20 UTC Update - The database work is done. We are in the process of resuming services and beginning to process jobs again. We're still verifying things and will post another update once we're confident jobs should be being processed as expected.Mar 2, 01:48 UTC Update - Our database provider has asked to make some changes to the existing primary logs DB that require we stop processing new jobs temporarily. So all builds will be paused and logs display will result in an error from the API or web UI. We'll post an update once we've resumed builds.Mar 2, 01:07 UTC Update - We are currently waiting on a new replica logs database to finish provisioning and we plan to fail over to it once it is ready, which we expect to happen roughly 5 hours. Until then delays in log displays and some errors from the API/web UI should be expected. We are sorry for the extended length of this issue and appreciate your patience while we work through this issue with our database infrastructure provider.Mar 1, 21:41 UTC Update - We are still working on a fix with our infrastructure provider.Mar 1, 20:14 UTC Update - We're currently mostly stable, and we're actively working with our infrastructure provider on a more complete fix. Thanks for hanging in there with us!Mar 1, 15:52 UTC Update - We have found a way to mitigate our degraded API performance in the short term. We continue to monitor performance and wait for the emergency failover database to provision. We are still experiencing a delay of logs in our web front end and will report back as soon as we can.Mar 1, 14:48 UTC Update - Our ongoing database connection issues are due to emergency maintenance following the recent AWS outage. We are working with our upstream provider to rectify a kernel bug and are currently waiting for a new database failover to be provisioned. We expect this to take some time, and will continue to post updates as we have them.Mar 1, 11:53 UTC Identified - We have traced the partial outage to an intermittent database connection issue, and we're working to resolve it.Mar 1, 09:16 UTC Investigating - We are experiencing a partial API outage on travis-ci.org, which is affecting performance of our web front end.

Last Update: A few months ago

Partial API outage

Mar 2, 01:48 UTC Update - Our database provider has asked to make some changes to the existing primary logs DB that require we stop processing new jobs temporarily. So all builds will be paused and logs display will result in an error from the API or web UI. We'll post an update once we've resumed builds.Mar 2, 01:07 UTC Update - We are currently waiting on a new replica logs database to finish provisioning and we plan to fail over to it once it is ready, which we expect to happen roughly 5 hours. Until then delays in log displays and some errors from the API/web UI should be expected. We are sorry for the extended length of this issue and appreciate your patience while we work through this issue with our database infrastructure provider.Mar 1, 21:41 UTC Update - We are still working on a fix with our infrastructure provider.Mar 1, 20:14 UTC Update - We're currently mostly stable, and we're actively working with our infrastructure provider on a more complete fix. Thanks for hanging in there with us!Mar 1, 15:52 UTC Update - We have found a way to mitigate our degraded API performance in the short term. We continue to monitor performance and wait for the emergency failover database to provision. We are still experiencing a delay of logs in our web front end and will report back as soon as we can.Mar 1, 14:48 UTC Update - Our ongoing database connection issues are due to emergency maintenance following the recent AWS outage. We are working with our upstream provider to rectify a kernel bug and are currently waiting for a new database failover to be provisioned. We expect this to take some time, and will continue to post updates as we have them.Mar 1, 11:53 UTC Identified - We have traced the partial outage to an intermittent database connection issue, and we're working to resolve it.Mar 1, 09:16 UTC Investigating - We are experiencing a partial API outage on travis-ci.org, which is affecting performance of our web front end.

Last Update: A few months ago

Issues related to the S3 outage in AWS.

Mar 1, 03:40 UTC Resolved - Jobs are processing normally. Thank you for your patience.
Mar 1, 02:12 UTC Update - We are processing normally, though there is still a job processing backlog. We are monitoring stability closely while the backlog clears.
Mar 1, 00:43 UTC Monitoring - We are currently processing a large job backlog as part of the fallout from the S3 incident. All services are functioning normally, but you may still notice delays until the backlogs clear.
Feb 28, 23:14 UTC Update - We are seeing some services recover after the S3 outage, and jobs are processing. We are watching recovery closely, and will keep updating if anything goes awry.
Feb 28, 21:11 UTC Update - We are still waiting for Amazon S3 to recover.
Feb 28, 18:10 UTC Identified - AWS has confirmed that S3 is experiencing issues. We've taken some actions to maintain current container-based capacity and are monitoring the S3 status and overall health of our infrastructure closely.
Feb 28, 18:00 UTC Investigating - We are investigating issues related to the S3 outage in AWS. Currently, build logs older than a few hours will fail to load. Build caches for container-based builds are unavailable. Builds that depend on resources like Docker Hub, Quay.io, or other S3-dependent third-party services will fail with errors related to being unable to access those resources.

Last Update: A few months ago
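
One mitigation for builds that fail when Docker Hub, Quay.io, or another S3-backed service is briefly unreachable is to wrap the network-dependent steps in `travis_retry`, the retry helper provided in the Travis CI build environment. The excerpt below is only a minimal `.travis.yml` sketch; the image and commands are placeholders, not taken from this incident.

    # Hypothetical .travis.yml excerpt: retry transient registry/network
    # failures instead of failing the job on the first error.
    sudo: required
    services:
      - docker
    install:
      # travis_retry re-runs the command (up to three attempts) on failure
      - travis_retry docker pull ubuntu:14.04
    script:
      - docker run --rm ubuntu:14.04 /bin/true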

Trusty Linux builds fail with `apt-get update` and 404

Feb 18, 05:39 UTC Resolved - A fix has been deployed for `language: csharp` builds. Please let us know if it still doesn't work for you. We would be happy to have a look. Thank you for your patience and happy building!
Feb 17, 23:11 UTC Update - Most builds are now fixed. Only `language: csharp` builds remain. The fix should be released to production soon. Thank you for your patience.
Feb 17, 15:39 UTC Identified - We are currently working on a resolution for issues with Trusty Linux builds failing with `apt-get update` and 404. This failure is due to an upstream PPA layout change. We'll be posting updates to https://github.com/travis-ci/travis-ci/issues/7332

Last Update: A few months ago
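
While the image fix was being rolled out, a common stopgap for `apt-get update` 404s caused by a stale or relocated PPA was to drop the offending source list before updating. The sketch below assumes you have identified which list is failing; the filename `stale-ppa.list` is a placeholder, not the actual source involved in this incident.

    # Hypothetical .travis.yml excerpt: remove a broken apt source so the
    # remaining package lists still refresh cleanly.
    dist: trusty
    sudo: required
    before_install:
      - sudo rm -f /etc/apt/sources.list.d/stale-ppa.list
      - sudo apt-get update -qq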

Emergency maintenance for .com logs database

Feb 17, 17:46 UTC Resolved - Maintenance work and verification have been completed.
Feb 17, 15:22 UTC Monitoring - At this time we've completed verification and the maintenance work is done. We'll continue to monitor things closely for another 60 minutes, then resolve the incident if nothing is wrong.
Feb 17, 15:15 UTC Update - We are continuing verification at this time. New logs should be visible in the web UI and CLI. Recently finished job logs may be delayed.
Feb 17, 15:03 UTC Update - The database changes are completed. We are beginning testing and verification to see if we're ready to exit maintenance mode. We'll provide another update in 15 minutes.
Feb 17, 14:47 UTC Update - This incident originally mentioned the .org logs database, but this maintenance is for the .com logs databases. Apologies for any confusion.
Feb 17, 14:38 UTC Identified - We've identified that we need to perform emergency maintenance on the .com logs database in order to increase its available resources and ensure overall stability of the logs system. Expected user impact: delays in seeing log updates in the web interface and in viewing logs for recently finished jobs. Our initial expectation is for approximately 60 minutes in this state. We'll provide an update no later than 15 minutes after we begin the needed changes.

Last Update: A few months ago

Brief delays in log processing

Feb 16, 16:42 UTC Resolved - Processing of build logs was delayed for a short period over the last 30 minutes. The queues have now cleared, and logs should be processing as expected.

Last Update: A few months ago

Build log delay on travis-ci.com

Feb 10, 16:07 UTC Resolved - Log processing has recovered and logs are showing normally again.
Feb 10, 15:58 UTC Investigating - We're investigating delays in build logs for travis-ci.com. Builds should be marked as finished on time, but the logs may take longer than normal to show up.

Last Update: A few months ago

Delays for new builds on travis-ci.com and travis-ci.org

Feb 10, 10:12 UTC Resolved - Incoming builds are now processing normally again.
Feb 10, 10:05 UTC Update - We're seeing similar delays for travis-ci.org.
Feb 10, 09:52 UTC Investigating - We're investigating delays in processing incoming builds for travis-ci.com. New commits may take longer than normal to appear on Travis CI, but existing builds should run normally.

Last Update: A few months ago

Mac Builds Network Outage

Feb 9, 19:21 UTC Resolved - All macOS builds have resumed and are processing at full capacity. This appears to have been an upstream provider hiccup. The backlog should be clearing momentarily for .com users.
Feb 9, 19:07 UTC Investigating - We are currently investigating a network outage on our macOS infrastructure.

Last Update: A few months ago

Log Processing Delay on travis-ci.com

Feb 8, 23:44 UTC Resolved - The backlog has drained and logs are being processed normally now. Thank you for your patience!
Feb 8, 23:05 UTC Monitoring - We are currently experiencing delays while processing logs due to a larger than normal backlog.

Last Update: A few months ago

Delays with GitHub syncs on travis-ci.com

Feb 7, 11:38 UTC Resolved - GitHub syncs are now running normally. We had to clear out one of the queues to prevent a component from running out of memory, which means that some syncs that had been scheduled were cancelled. If you've added new repositories in the past day or two that haven't shown up on Travis CI yet, you can click the "Sync account" button on https://travis-ci.com/profile to schedule a new sync.
Feb 7, 10:34 UTC Update - We're continuing to investigate delays in GitHub syncs. At the moment, only the automatic daily syncs are delayed; manual syncs are not.
Feb 7, 09:53 UTC Investigating - We're investigating delays with synchronizing user accounts with GitHub on travis-ci.com. New repositories may take longer than normal to show up.

Last Update: A few months ago
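
If the automatic daily sync is delayed, a manual sync can also be scheduled from the command line with the Travis CLI, assuming the `travis` RubyGem is installed; the `--pro` flag targets travis-ci.com. This is a sketch of the usual workflow, not something prescribed in the incident above.

    # Trigger a manual account sync instead of waiting for the daily one.
    gem install travis
    travis login --pro   # authenticate against travis-ci.com
    travis sync --pro    # schedule a new GitHub sync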

Container-based Linux Precise infrastructure emergency maintenance

Feb 6, 00:31 UTC Resolved - This incident has been resolved.
Feb 6, 00:16 UTC Monitoring - The rollout is nearing completion, with capacity surpassing demand. We don't expect any build start delays for the remainder of the maintenance.
Feb 5, 21:56 UTC Identified - The container-based Linux infrastructure requires emergency maintenance to ensure all instances are running a known working version of the "worker" component. We had started rolling out a newer version this past Thursday, then began rolling back due to reports of mismatched exit codes and job statuses. Please expect partial-capacity behavior similar to times of high load during the maintenance, which we expect will take between 1 and 2 hours.

Last Update: A few months ago

Network switch on Mac infrastructure

Feb 3, 17:57 UTC Completed - We have completed the network switch and restored Mac builds to full capacity!
Feb 3, 16:56 UTC Update - We are increasing the RAM on our pfSense boxes and will perform a failover. Network connections may experience hiccups for a short period of time.
Feb 3, 16:22 UTC Verifying - We have completed the switch and are monitoring closely.
Feb 3, 16:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 3, 15:52 UTC Scheduled - In response to the major outage that we experienced this week (https://www.traviscistatus.com/incidents/k79mjcv403c4), we are performing a switch in our networking setup. In mitigating the issue we bypassed our pfSense boxes and instead let our Cisco ASA handle DHCP. However, this limits the IP pool to 256 IPs, which meant we had to reduce our capacity. We have now rebuilt the pfSense boxes and are bringing them back online, which will allow us to restore full capacity. While we are taking precautions to mitigate any issues during the switch, we may experience some small service disruptions on macOS builds.

Last Update: A few months ago

MacOS queue backup & emergency maintenance

Feb 2, 01:35 UTC Resolved - The private repo backlog is clear. The public repo backlog continues to drop, which is typical for this day/hour. Thanks again for waiting! 👋❤️
Feb 2, 01:18 UTC Update - The backlog for private repos is still dropping; now below 50. Thank you again for your patience!
Feb 2, 00:13 UTC Update - The backlog for private repos is still dropping; now below 150. We will update again in an hour. Thank you for your patience! 💖
Feb 1, 22:49 UTC Update - We're seeing the backlog level off during peak usage hours. We will continue to issue updates as we monitor backlog progress.
Feb 1, 21:36 UTC Update - The private repo backlog has dropped steadily over the past hour, and we expect it will be caught up in less than 90 minutes. Thank you again for your patience!
Feb 1, 20:38 UTC Update - Our Mac infrastructure is processing builds normally for both travis-ci.org and travis-ci.com, albeit at reduced capacity. We are working on fixing our DHCP issues so we can restore full capacity. We cannot thank you enough for your enduring patience.
Feb 1, 19:33 UTC Update - We have increased capacity in production for both public and private repos. Due to ongoing issues with our DHCP setup, we have capped it below full capacity.
Feb 1, 19:07 UTC Monitoring - We are now running at reduced job processing capacity in production for both public and private repos.
Feb 1, 18:22 UTC Update - The patches we're testing need additional work. We expect production job capacity to come online in the next hour. Thank you for your patience through these multiple delays.
Feb 1, 17:47 UTC Update - We are in the process of testing further patches to skip jobs older than 6 hours in order to help with the massive backlog. We expect to see jobs flowing again in production within the next 30 minutes.
Feb 1, 16:45 UTC Update - We are on the verge of resuming processing of Mac builds on travis-ci.org. Thank you for hanging in there with us.
Feb 1, 16:09 UTC Update - We've begun performing the necessary networking changes and will begin testing them as soon as they're completed. We appreciate your continued patience.
Feb 1, 15:27 UTC Update - We have proceeded with limiting the maximum number of concurrent jobs on open source repositories that run jobs on our Mac infrastructure. You can find more details about this setting here: https://docs.travis-ci.com/user/customizing-the-build#sts=Limiting-Concurrent-Builds. This change will help with the throughput of your Linux builds on other repositories while we get our Mac infrastructure back up. We will revert this change once things settle. Thank you for your understanding.
Feb 1, 14:33 UTC Update - We are continuing to work on fixing the connectivity issue that is preventing us from restarting Mac build processing on both travis-ci.com and travis-ci.org. Meanwhile, we are also putting stopgap measures in place in our software platform to prevent disruption of our Linux build throughput. Thank you for your enduring patience.
Feb 1, 14:07 UTC Update - We made the difficult decision to proceed with cancelling all pending Mac builds on travis-ci.org. Doing so should improve Linux build throughput and will hopefully help us get the Mac infrastructure back on its feet. We are sorry for this drastic measure.
Feb 1, 11:03 UTC Update - We're still attempting to resolve the connectivity issues. We appreciate your ongoing patience.
Feb 1, 09:30 UTC Update - We've identified connectivity issues in our macOS workers and we're stopping all Mac builds to further investigate and fix them.
Feb 1, 07:11 UTC Update - Restarting the platform did not resolve all issues, and we are continuing to dig into the sources of instability.
Feb 1, 06:45 UTC Update - The virtualization platform has been fully restarted and we're now bringing job processing capacity back online.
Feb 1, 04:51 UTC Update - The underlying VM infrastructure is still unstable, so we are coordinating with our infrastructure provider to perform a full restart. We will update again once we resume job processing.
Feb 1, 02:52 UTC Identified - Some misbehaving hosts have been restarted thanks to help from our upstream provider. We are bringing job processing capacity back online.
Feb 1, 02:27 UTC Update - We are stopping all job throughput to prevent runaway VM leakage while waiting for further insights from our upstream infrastructure provider.
Feb 1, 01:58 UTC Investigating - macOS queues for both public and private repos are backed up. We are working with our Mac infrastructure provider to identify contributing factors.

Last Update: A few months ago

MacOS queue backup & emergency maintenance

Feb 2, 01:18 UTC Update - The backlog for private repos is still dropping; now below 50. Thank you again for your patience!Feb 2, 00:13 UTC Update - The backlog for private repos is still dropping; now below 150. We will update again in an hour. Thank you for your patience! 💖Feb 1, 22:49 UTC Update - We're seeing the backlog level off during peak usage hours. We will continue to issue updates as we monitor backlog progress.Feb 1, 21:36 UTC Update - The private repo backlog has dropped steadily over the past hour, and we expect it will be caught up in less than 90 minutes. Thank you again for your patience!Feb 1, 20:38 UTC Update - Our Mac infrastructure is processing builds normally for both travis-ci.org and travis-ci.com albeit at a reduced capacity. We are working on fixing our DHCP issues to be able to restore the full capacity. We cannot thank you enough for your enduring patience.Feb 1, 19:33 UTC Update - We have increased capacity in production for both public and private repos. Due to ongoing issues with our DHCP setup, we have limited the cap to less than full capacity.Feb 1, 19:07 UTC Monitoring - We are now running at reduced job processing capacity in production for both public and private repos.Feb 1, 18:22 UTC Update - The patches we're testing need additional work. We expect production job capacity to come online in the next hour. Thank you for your patience through these multiple delays.Feb 1, 17:47 UTC Update - We are in the process of testing further patches to skip jobs older than 6 hours in order to help with the massive backlog. We expect to see jobs flowing again in production within the next 30 minutes.Feb 1, 16:45 UTC Update - We are on the verge of resuming to process Mac builds on travis-ci.org. Thank you for hanging in there with us.Feb 1, 16:09 UTC Update - We’ve begun performing the necessary networking changes and will begin testing them as soon as they’re completed. We appreciate your continued patience.Feb 1, 15:27 UTC Update - We have proceeded with limiting the maximum number of concurrent jobs on open source repositories with jobs on our Mac infrastructure. You can find more details about this setting here: https://docs.travis-ci.com/user/customizing-the-build#sts=Limiting-Concurrent-Builds. This change will help with the throughput of your Linux builds on other repositories while we are getting our Mac infrastructure back up. We will revert this change once things settle. Thank you for your understanding.Feb 1, 14:33 UTC Update - We are continuing to work on fixing the connectivity issue preventing us restarting Mac builds processing on both travis-ci.com and travis-ci.org. Meanwhile, we are also working on putting stopgap measures via our software platform to prevent disruption of our Linux builds throughput. Thank you for your enduring patience.Feb 1, 14:07 UTC Update - We made the difficult decision to proceed with cancelling all pending Mac builds on travis-ci.org. Doing so should improve Linux builds throughput and it will hopefully help us get the Mac infrastructure back on its feet. We are sorry for this drastic measure.Feb 1, 11:03 UTC Update - We’re still attempting to resolve the connectivity issues. 
We appreciate your ongoing patience.Feb 1, 09:30 UTC Update - We’ve identified connectivity issues in our MacOS workers and we’re stopping all Mac builds to further investigate and fix them.Feb 1, 07:11 UTC Update - Restarting the platform did not resolve all issues, and we are continuing to dig into the sources of instability.Feb 1, 06:45 UTC Update - The virtualization platform has been fully restarted and we're now bringing job processing capacity back online.Feb 1, 04:51 UTC Update - The underlying VM infrastructure is still unstable, so we are coordinating with our infrastructure provider to perform a full restart. We will update again once we resume job processing.Feb 1, 02:52 UTC Identified - Some misbehaving hosts have been restarted thanks to help from our upstream provider. We are bringing job processing capacity back online.Feb 1, 02:27 UTC Update - We are stopping all job throughput to prevent runaway VM leakage while waiting for further insights from our upstream infrastructure provider.Feb 1, 01:58 UTC Investigating - MacOS queues for both public and private repos are backed up. We are working with our Mac infrastructure provider to identify contributing factors.

Last Update: A few months ago

MacOS queue backup & emergency maintenance

Feb 2, 00:13 UTC Update - The backlog for private repos is still dropping; now below 150. We will update again in an hour. Thank you for your patience! 💖Feb 1, 22:49 UTC Update - We're seeing the backlog level off during peak usage hours. We will continue to issue updates as we monitor backlog progress.Feb 1, 21:36 UTC Update - The private repo backlog has dropped steadily over the past hour, and we expect it will be caught up in less than 90 minutes. Thank you again for your patience!Feb 1, 20:38 UTC Update - Our Mac infrastructure is processing builds normally for both travis-ci.org and travis-ci.com albeit at a reduced capacity. We are working on fixing our DHCP issues to be able to restore the full capacity. We cannot thank you enough for your enduring patience.Feb 1, 19:33 UTC Update - We have increased capacity in production for both public and private repos. Due to ongoing issues with our DHCP setup, we have limited the cap to less than full capacity.Feb 1, 19:07 UTC Monitoring - We are now running at reduced job processing capacity in production for both public and private repos.Feb 1, 18:22 UTC Update - The patches we're testing need additional work. We expect production job capacity to come online in the next hour. Thank you for your patience through these multiple delays.Feb 1, 17:47 UTC Update - We are in the process of testing further patches to skip jobs older than 6 hours in order to help with the massive backlog. We expect to see jobs flowing again in production within the next 30 minutes.Feb 1, 16:45 UTC Update - We are on the verge of resuming to process Mac builds on travis-ci.org. Thank you for hanging in there with us.Feb 1, 16:09 UTC Update - We’ve begun performing the necessary networking changes and will begin testing them as soon as they’re completed. We appreciate your continued patience.Feb 1, 15:27 UTC Update - We have proceeded with limiting the maximum number of concurrent jobs on open source repositories with jobs on our Mac infrastructure. You can find more details about this setting here: https://docs.travis-ci.com/user/customizing-the-build#sts=Limiting-Concurrent-Builds. This change will help with the throughput of your Linux builds on other repositories while we are getting our Mac infrastructure back up. We will revert this change once things settle. Thank you for your understanding.Feb 1, 14:33 UTC Update - We are continuing to work on fixing the connectivity issue preventing us restarting Mac builds processing on both travis-ci.com and travis-ci.org. Meanwhile, we are also working on putting stopgap measures via our software platform to prevent disruption of our Linux builds throughput. Thank you for your enduring patience.Feb 1, 14:07 UTC Update - We made the difficult decision to proceed with cancelling all pending Mac builds on travis-ci.org. Doing so should improve Linux builds throughput and it will hopefully help us get the Mac infrastructure back on its feet. We are sorry for this drastic measure.Feb 1, 11:03 UTC Update - We’re still attempting to resolve the connectivity issues. 
We appreciate your ongoing patience.Feb 1, 09:30 UTC Update - We’ve identified connectivity issues in our MacOS workers and we’re stopping all Mac builds to further investigate and fix them.Feb 1, 07:11 UTC Update - Restarting the platform did not resolve all issues, and we are continuing to dig into the sources of instability.Feb 1, 06:45 UTC Update - The virtualization platform has been fully restarted and we're now bringing job processing capacity back online.Feb 1, 04:51 UTC Update - The underlying VM infrastructure is still unstable, so we are coordinating with our infrastructure provider to perform a full restart. We will update again once we resume job processing.Feb 1, 02:52 UTC Identified - Some misbehaving hosts have been restarted thanks to help from our upstream provider. We are bringing job processing capacity back online.Feb 1, 02:27 UTC Update - We are stopping all job throughput to prevent runaway VM leakage while waiting for further insights from our upstream infrastructure provider.Feb 1, 01:58 UTC Investigating - MacOS queues for both public and private repos are backed up. We are working with our Mac infrastructure provider to identify contributing factors.

Last Update: A few months ago

MacOS queue backup & emergency maintenance

Feb 1, 22:49 UTC Update - We're seeing the backlog level off during peak usage hours. We will continue to issue updates as we monitor backlog progress.Feb 1, 21:36 UTC Update - The private repo backlog has dropped steadily over the past hour, and we expect it will be caught up in less than 90 minutes. Thank you again for your patience!Feb 1, 20:38 UTC Update - Our Mac infrastructure is processing builds normally for both travis-ci.org and travis-ci.com albeit at a reduced capacity. We are working on fixing our DHCP issues to be able to restore the full capacity. We cannot thank you enough for your enduring patience.Feb 1, 19:33 UTC Update - We have increased capacity in production for both public and private repos. Due to ongoing issues with our DHCP setup, we have limited the cap to less than full capacity.Feb 1, 19:07 UTC Monitoring - We are now running at reduced job processing capacity in production for both public and private repos.Feb 1, 18:22 UTC Update - The patches we're testing need additional work. We expect production job capacity to come online in the next hour. Thank you for your patience through these multiple delays.Feb 1, 17:47 UTC Update - We are in the process of testing further patches to skip jobs older than 6 hours in order to help with the massive backlog. We expect to see jobs flowing again in production within the next 30 minutes.Feb 1, 16:45 UTC Update - We are on the verge of resuming to process Mac builds on travis-ci.org. Thank you for hanging in there with us.Feb 1, 16:09 UTC Update - We’ve begun performing the necessary networking changes and will begin testing them as soon as they’re completed. We appreciate your continued patience.Feb 1, 15:27 UTC Update - We have proceeded with limiting the maximum number of concurrent jobs on open source repositories with jobs on our Mac infrastructure. You can find more details about this setting here: https://docs.travis-ci.com/user/customizing-the-build#sts=Limiting-Concurrent-Builds. This change will help with the throughput of your Linux builds on other repositories while we are getting our Mac infrastructure back up. We will revert this change once things settle. Thank you for your understanding.Feb 1, 14:33 UTC Update - We are continuing to work on fixing the connectivity issue preventing us restarting Mac builds processing on both travis-ci.com and travis-ci.org. Meanwhile, we are also working on putting stopgap measures via our software platform to prevent disruption of our Linux builds throughput. Thank you for your enduring patience.Feb 1, 14:07 UTC Update - We made the difficult decision to proceed with cancelling all pending Mac builds on travis-ci.org. Doing so should improve Linux builds throughput and it will hopefully help us get the Mac infrastructure back on its feet. We are sorry for this drastic measure.Feb 1, 11:03 UTC Update - We’re still attempting to resolve the connectivity issues. 
We appreciate your ongoing patience.Feb 1, 09:30 UTC Update - We’ve identified connectivity issues in our MacOS workers and we’re stopping all Mac builds to further investigate and fix them.Feb 1, 07:11 UTC Update - Restarting the platform did not resolve all issues, and we are continuing to dig into the sources of instability.Feb 1, 06:45 UTC Update - The virtualization platform has been fully restarted and we're now bringing job processing capacity back online.Feb 1, 04:51 UTC Update - The underlying VM infrastructure is still unstable, so we are coordinating with our infrastructure provider to perform a full restart. We will update again once we resume job processing.Feb 1, 02:52 UTC Identified - Some misbehaving hosts have been restarted thanks to help from our upstream provider. We are bringing job processing capacity back online.Feb 1, 02:27 UTC Update - We are stopping all job throughput to prevent runaway VM leakage while waiting for further insights from our upstream infrastructure provider.Feb 1, 01:58 UTC Investigating - MacOS queues for both public and private repos are backed up. We are working with our Mac infrastructure provider to identify contributing factors.

Last Update: A few months ago

MacOS queue backup & emergency maintenance

Feb 1, 21:36 UTC Update - The private repo backlog has dropped steadily over the past hour, and we expect it will be caught up in less than 90 minutes. Thank you again for your patience!Feb 1, 20:38 UTC Update - Our Mac infrastructure is processing builds normally for both travis-ci.org and travis-ci.com albeit at a reduced capacity. We are working on fixing our DHCP issues to be able to restore the full capacity. We cannot thank you enough for your enduring patience.Feb 1, 19:33 UTC Update - We have increased capacity in production for both public and private repos. Due to ongoing issues with our DHCP setup, we have limited the cap to less than full capacity.Feb 1, 19:07 UTC Monitoring - We are now running at reduced job processing capacity in production for both public and private repos.Feb 1, 18:22 UTC Update - The patches we're testing need additional work. We expect production job capacity to come online in the next hour. Thank you for your patience through these multiple delays.Feb 1, 17:47 UTC Update - We are in the process of testing further patches to skip jobs older than 6 hours in order to help with the massive backlog. We expect to see jobs flowing again in production within the next 30 minutes.Feb 1, 16:45 UTC Update - We are on the verge of resuming to process Mac builds on travis-ci.org. Thank you for hanging in there with us.Feb 1, 16:09 UTC Update - We’ve begun performing the necessary networking changes and will begin testing them as soon as they’re completed. We appreciate your continued patience.Feb 1, 15:27 UTC Update - We have proceeded with limiting the maximum number of concurrent jobs on open source repositories with jobs on our Mac infrastructure. You can find more details about this setting here: https://docs.travis-ci.com/user/customizing-the-build#sts=Limiting-Concurrent-Builds. This change will help with the throughput of your Linux builds on other repositories while we are getting our Mac infrastructure back up. We will revert this change once things settle. Thank you for your understanding.Feb 1, 14:33 UTC Update - We are continuing to work on fixing the connectivity issue preventing us restarting Mac builds processing on both travis-ci.com and travis-ci.org. Meanwhile, we are also working on putting stopgap measures via our software platform to prevent disruption of our Linux builds throughput. Thank you for your enduring patience.Feb 1, 14:07 UTC Update - We made the difficult decision to proceed with cancelling all pending Mac builds on travis-ci.org. Doing so should improve Linux builds throughput and it will hopefully help us get the Mac infrastructure back on its feet. We are sorry for this drastic measure.Feb 1, 11:03 UTC Update - We’re still attempting to resolve the connectivity issues. We appreciate your ongoing patience.Feb 1, 09:30 UTC Update - We’ve identified connectivity issues in our MacOS workers and we’re stopping all Mac builds to further investigate and fix them.Feb 1, 07:11 UTC Update - Restarting the platform did not resolve all issues, and we are continuing to dig into the sources of instability.Feb 1, 06:45 UTC Update - The virtualization platform has been fully restarted and we're now bringing job processing capacity back online.Feb 1, 04:51 UTC Update - The underlying VM infrastructure is still unstable, so we are coordinating with our infrastructure provider to perform a full restart. 
We will update again once we resume job processing.Feb 1, 02:52 UTC Identified - Some misbehaving hosts have been restarted thanks to help from our upstream provider. We are bringing job processing capacity back online.Feb 1, 02:27 UTC Update - We are stopping all job throughput to prevent runaway VM leakage while waiting for further insights from our upstream infrastructure provider.Feb 1, 01:58 UTC Investigating - MacOS queues for both public and private repos are backed up. We are working with our Mac infrastructure provider to identify contributing factors.

Last Update: A few months ago

MacOS queue backup & emergency maintenance

Feb 1, 20:38 UTC Update - Our Mac infrastructure is processing builds normally for both travis-ci.org and travis-ci.com, albeit at reduced capacity. We are working on fixing our DHCP issues so that we can restore full capacity. We cannot thank you enough for your enduring patience.
Feb 1, 19:33 UTC Update - We have increased capacity in production for both public and private repos. Due to ongoing issues with our DHCP setup, we have capped it below full capacity.
Feb 1, 19:07 UTC Monitoring - We are now running at reduced job processing capacity in production for both public and private repos.
Feb 1, 18:22 UTC Update - The patches we're testing need additional work. We expect production job capacity to come online in the next hour. Thank you for your patience through these multiple delays.
Feb 1, 17:47 UTC Update - We are testing further patches to skip jobs older than 6 hours in order to help with the massive backlog. We expect to see jobs flowing again in production within the next 30 minutes.
Feb 1, 16:45 UTC Update - We are on the verge of resuming Mac build processing on travis-ci.org. Thank you for hanging in there with us.
Feb 1, 16:09 UTC Update - We’ve begun performing the necessary networking changes and will begin testing them as soon as they’re completed. We appreciate your continued patience.
Feb 1, 15:27 UTC Update - We have limited the maximum number of concurrent jobs for open source repositories that run jobs on our Mac infrastructure. You can find more details about this setting here: https://docs.travis-ci.com/user/customizing-the-build#sts=Limiting-Concurrent-Builds. This change will help with the throughput of your Linux builds on other repositories while we get our Mac infrastructure back up. We will revert this change once things settle. Thank you for your understanding.
Feb 1, 14:33 UTC Update - We are continuing to work on fixing the connectivity issue preventing us from restarting Mac build processing on both travis-ci.com and travis-ci.org. Meanwhile, we are also putting stopgap measures in place via our software platform to prevent disruption of our Linux build throughput. Thank you for your enduring patience.
Feb 1, 14:07 UTC Update - We made the difficult decision to cancel all pending Mac builds on travis-ci.org. Doing so should improve Linux build throughput, and it will hopefully help us get the Mac infrastructure back on its feet. We are sorry for this drastic measure.
Feb 1, 11:03 UTC Update - We’re still attempting to resolve the connectivity issues. We appreciate your ongoing patience.
Feb 1, 09:30 UTC Update - We’ve identified connectivity issues in our MacOS workers and we’re stopping all Mac builds to investigate and fix them.
Feb 1, 07:11 UTC Update - Restarting the platform did not resolve all issues, and we are continuing to dig into the sources of instability.
Feb 1, 06:45 UTC Update - The virtualization platform has been fully restarted and we're now bringing job processing capacity back online.
Feb 1, 04:51 UTC Update - The underlying VM infrastructure is still unstable, so we are coordinating with our infrastructure provider to perform a full restart. We will update again once we resume job processing.
Feb 1, 02:52 UTC Identified - Some misbehaving hosts have been restarted thanks to help from our upstream provider. We are bringing job processing capacity back online.
Feb 1, 02:27 UTC Update - We are stopping all job throughput to prevent runaway VM leakage while waiting for further insights from our upstream infrastructure provider.
Feb 1, 01:58 UTC Investigating - MacOS queues for both public and private repos are backed up. We are working with our Mac infrastructure provider to identify contributing factors.

Last Update: A few months ago
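
A practical note for repository owners: the concurrency cap mentioned in the 15:27 update above is a per-repository setting, not something configured in .travis.yml. As a rough sketch, assuming the `travis` command-line client is installed and authenticated, and treating the setting name `maximum_number_of_builds` and the `owner/repo` slug as illustrative, it could be inspected and adjusted like this:

    # Sketch only: cap an open source repository at 5 concurrent jobs via the Travis CLI.
    # "owner/repo" is a placeholder slug; adjust the number as needed.
    travis login --org
    travis settings maximum_number_of_builds --set 5 -r owner/repo
    # Show the current value of the setting:
    travis settings maximum_number_of_builds -r owner/repo

The same limit is also typically adjustable from the repository's settings page in the web UI, which is what the linked documentation covers.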

Build processing delays

Jan 31, 02:03 UTC Resolved - All queues caught up. Thank you for waiting! 😎
Jan 31, 01:47 UTC Update - Queue backlogs are clear with the exception of sudo-enabled private repos.
Jan 31, 01:45 UTC Monitoring - The blockage in pre-queued jobs has cleared, and has been replaced by backlogs for sudo-enabled and container-based Linux with the exception of container-based Trusty: https://docs.travis-ci.com/user/trusty-ci-environment#Container-based-with-sudo%3A-false 😉
Jan 31, 01:29 UTC Investigating - We are currently investigating delayed build creation on all build queues.

Last Update: A few months ago
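
When build creation is delayed like this, it can be hard to tell from the web interface whether a given build is merely queued or was never created at all. A minimal sketch, assuming the `travis` command-line client is installed and authenticated (the `owner/repo` slug is a placeholder):

    # Sketch only: check the most recent build for a repository from the CLI.
    # "owner/repo" is a placeholder slug.
    travis status -r owner/repo   # one-line state of the most recent build
    travis show -r owner/repo     # more detail, including the individual jobs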

Delays on sudo-enabled Linux

Jan 31, 01:26 UTC Resolved - The private repo backlog has cleared.
Jan 31, 00:28 UTC Update - The private repo backlog continues to drop. Thank you for your patience!
Jan 30, 22:16 UTC Update - The public repo backlog has cleared, and we're continuing to work through the private repo backlog. Thanks for your patience! 💖 🐦
Jan 30, 20:35 UTC Update - Job re-queues have dropped off, but backlogs remain for both public and private repos.
Jan 30, 20:15 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Jan 30, 20:13 UTC Update - Our upstream infrastructure provider has announced that the problem should now be mitigated. We expect another update by 20:40 UTC.
Jan 30, 19:58 UTC Update - We expect an update from our upstream infrastructure provider by 20:10 UTC.
Jan 30, 19:35 UTC Identified - We have been alerted to an ongoing incident with our upstream infrastructure provider. We will update when we know more. Thanks for your patience!
Jan 30, 19:05 UTC Investigating - We are actively investigating heightened requeues on our sudo-enabled Linux infrastructure.

Last Update: A few months ago

travis api is down for .com builds

Jan 28, 01:15 UTC Resolved - API requests for .com builds are processing normally.
Jan 28, 00:52 UTC Monitoring - The Travis API host for .com is up again; we are monitoring the stability of requests.
Jan 28, 00:04 UTC Update - Our host provider is experiencing degraded performance that is affecting our .com API hosts. They are working to resolve the upstream issue.
Jan 27, 23:59 UTC Identified - Our hosts for the Travis API for .com builds have been unresponsive. We have contacted the provider to bring them back up again.

Last Update: A few months ago

Delayed response times from github

Jan 27, 19:02 UTC Resolved - All backlogs have cleared.
Jan 27, 18:04 UTC Monitoring - Queues are processing normally; we are working through the backlog for sudo: required builds on .com. Monitoring further.
Jan 27, 17:38 UTC Investigating - Connectivity to GitHub is showing delayed response times, affecting all builds. We are investigating further.

Last Update: A few months ago

RabbitMQ upgrade on travis-ci.org

Jan 25, 07:24 UTC Completed - The maintenance is now finished.
Jan 25, 07:18 UTC Update - The maintenance is finished for the most part; we're just monitoring some final cleanup tasks.
Jan 25, 03:59 UTC In progress - We've started the maintenance, and we'll keep this incident up to date as we progress through it.
Jan 23, 22:20 UTC Update - We've rescheduled the maintenance window to 4 AM - 6 AM UTC on Wednesday, January 25th.
Jan 17, 22:19 UTC Scheduled - We're planning to upgrade our RabbitMQ instance on travis-ci.org on Wednesday, January 25th at 2 PM UTC. We're not expecting any downtime, apart from some very brief delays in builds starting, log messages appearing, or job state updates (such as a job being marked as "booting", "passed", "failed", etc.) right as we switch over to the new instance.

Last Update: A few months ago

Delayed builds on OS X infrastructure for both public and private repositories

Jan 24, 19:46 UTC Resolved - This incident has been resolved.
Jan 24, 19:05 UTC Update - OS X builds are running properly and we are resolving this incident. We are still seeing a backlog of public OS X builds, but it's at a normal level for this time of the day. Thank you again for your enduring patience, and happy building!
Jan 24, 18:15 UTC Monitoring - We’ve finished rolling back to worker version 2.5.0, and builds running on both travis-ci.com and travis-ci.org are running at full capacity. Build delays are still expected for public repositories. We’re keeping an eye out to make sure.
Jan 24, 16:44 UTC Identified - As part of scheduled improvements to our OS X infrastructure, we updated the worker version to 2.6.1. We’re currently rolling back to a previous worker version due to several reports of instability.

Last Update: A few months ago

Build delays on sudo-enabled builds running on travis-ci.com

Jan 24, 18:55 UTC Resolved - Builds on sudo-enabled private repositories are now running normally. Thanks for your patience!
Jan 24, 18:42 UTC Monitoring - The rollout of the new worker to our `sudo: required` infrastructure has been completed. We’re currently monitoring while builds are processing at full capacity.
Jan 24, 16:50 UTC Identified - We are currently rolling out a new worker in our `sudo: required` infrastructure. This is causing build delays to private repositories.

Last Update: A few months ago

Build failures on container-based builds on travis-ci.org

Jan 21, 05:20 UTC Resolved - We pushed a fix for this issue. If your build failed with "An error occurred while generating the build script" you can restart the build and it should run correctly. We're sorry for the interruption.
Jan 21, 04:52 UTC Investigating - We're currently investigating build errors causing "An error occurred while generating the build script" errors for container-based builds on travis-ci.org.

Last Update: A few months ago
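
If one of your builds failed with the "An error occurred while generating the build script" message described above, the remedy on your side is simply to restart it once the fix is in place. Restarting works from the web UI, or, as a small sketch assuming the `travis` command-line client (the build number and `owner/repo` slug are placeholders):

    # Sketch only: restart a build with the Travis CLI.
    # "owner/repo" and build number 123 are placeholders.
    travis restart 123 -r owner/repo
    # Or restart the most recent build:
    travis restart -r owner/repo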

Build delays on sudo-enabled private and public repositories

Jan 19, 15:39 UTC Resolved - The backlog for .org sudo-required builds has cleared.
Jan 19, 14:58 UTC Monitoring - We've restored the settings for our instance cleanup for sudo-required builds on .org and .com. Builds have resumed at full capacity. Thank you for your patience as we process the .org backlog.
Jan 19, 14:26 UTC Identified - We've identified a Redis configuration malfunction in our sudo-enabled infrastructure, causing instance cleanup errors. We are restoring the config settings and restarting the workers for .org and .com sudo-enabled builds.
Jan 19, 14:17 UTC Investigating - We’re currently investigating an issue affecting `sudo: required` builds for both public and private repositories.

Last Update: A few months ago

Build delays on sudo-enabled private and public repositories

Jan 19, 14:58 UTC Monitoring - We've restored settings to our instance cleanup for sudo-required builds on .org and .com. Builds have resumed full capacity. Thank you for your patience as we process the .org backlog.Jan 19, 14:26 UTC Identified - We've identified a redis configuration malfunction in our sudo-enabled infrastructure, causing instance cleanup errors. We are restoring the config settings and restarting the workers for .org and .com sudo-enabled builds.Jan 19, 14:17 UTC Investigating - We’re currently investigating an issue affecting `sudo: required` builds for both public and private repositories

Last Update: A few months ago

Build delays on sudo-enabled private and public repositories

Jan 19, 14:26 UTC Identified - We've identified a redis configuration malfunction in our sudo-enabled infrastructure, causing instance cleanup errors. We are restoring the config settings and restarting the workers for .org and .com sudo-enabled builds.Jan 19, 14:17 UTC Investigating - We’re currently investigating an issue affecting `sudo: required` builds for both public and private repositories

Last Update: A few months ago

Build delays on sudo-enabled private and public repositories

Jan 19, 14:17 UTC Investigating - We’re currently investigating an issue affecting `sudo: required` builds for both public and private repositories.

Last Update: A few months ago

Syncing user data with GitHub is delayed for some users on travis-ci.org and travis-ci.com

Jan 18, 13:11 UTC Resolved - Syncing user data with GitHub is now working properly. Thanks for your patience.Jan 18, 10:00 UTC Investigating - We're investigating the cause while the queue for syncs is clearing slowly.

Last Update: A few months ago

Slow build processing on .com and .org

Jan 18, 11:26 UTC Resolved - Everything operating normally.Jan 18, 10:23 UTC Update - Builds running on container-based infrastructure for both public and private projects are currently delayed. We’re working on processing the accumulated backlog as quickly as possible.Jan 18, 09:38 UTC Monitoring - Builds are processing normally now and we're monitoring the issue.Jan 18, 08:42 UTC Update - Builds are processing slowly on both travis-ci.com and travis-ci.org. We're investigating the issue.Jan 18, 08:17 UTC Investigating - We are experiencing delays on travis-ci.org in starting builds. We are investigating.

Last Update: A few months ago

Slow build processing on .com and .org

Jan 18, 10:23 UTC Update - Builds running on container-based infrastructure for both public and private projects are currently delayed. We’re working on processing the accumulated backlog as quickly as possible.Jan 18, 09:38 UTC Monitoring - Builds are processing normally now and we're monitoring the issue.Jan 18, 08:42 UTC Update - Builds are processing slowly on both travis-ci.com and travis-ci.org. We're investigating the issue.Jan 18, 08:17 UTC Investigating - We are experiencing delays on travis-ci.org in starting builds. We are investigating.

Last Update: A few months ago

Syncing user data with GitHub is delayed for some users on travis-ci.org and travis-ci.com

Jan 18, 10:00 UTC Investigating - We're investigating the cause while the queue for syncs is clearing slowly.

Last Update: A few months ago

Slow build processing on .com and .org

Jan 18, 09:38 UTC Monitoring - Builds are processing normally now and we're monitoring the issue.Jan 18, 08:42 UTC Update - Builds are processing slowly on both travis-ci.com and travis-ci.org. We're investigating the issue.Jan 18, 08:17 UTC Investigating - We are experiencing delays on travis-ci.org in starting builds. We are investigating.

Last Update: A few months ago

Slow build processing on .com and .org

Jan 18, 08:42 UTC Update - Builds are processing slowly on both travis-ci.com and travis-ci.org. We're investigating the issue.Jan 18, 08:17 UTC Investigating - We are experiencing delays on travis-ci.org in starting builds. We are investigating.

Last Update: A few months ago

Delays with travis-ci.org builds starting

Jan 18, 08:17 UTC Investigating - We are experiencing delays on travis-ci.org in starting builds. We are investigating.

Last Update: A few months ago

RabbitMQ upgrade on travis-ci.org

Jan 17, 22:19 UTC Scheduled - We're planning to upgrade our RabbitMQ instance on travis-ci.org on Wednesday, January 25th at 2 PM UTC. We're not expecting any downtime apart from some very brief delays in builds starting, log messages appearing, or job state updates (such as a job being marked as "booting", "passed", "failed", etc.) right as we're switching over to the new instance.

Last Update: A few months ago

Build delays on sudo-enabled private builds

Jan 11, 20:40 UTC Resolved - Builds have been stable since the last update, so we are resolving this incident. Thank you for your patience and happy building!Jan 11, 19:07 UTC Monitoring - We identified a capacity constraint that was causing this issue. Builds are now processing normally.Jan 11, 17:28 UTC Investigating - We’re looking into what’s causing build delays for private repositories using our `sudo: required` infrastructure.

Last Update: A few months ago

Build delays on sudo-enabled private builds

Jan 11, 19:07 UTC Monitoring - We identified a capacity constraint that was causing this issue. Builds are now processing normally.Jan 11, 17:28 UTC Investigating - We’re looking into what’s causing build delays for private repositories using our `sudo: required` infrastructure.

Last Update: A few months ago

Build delays on sudo-enabled private builds

Jan 11, 17:28 UTC Investigating - We’re looking into what’s causing build delays for private repositories using our `sudo: required` infrastructure.

Last Update: A few months ago

Build delays on sudo-enabled private repositories

Jan 11, 00:40 UTC Resolved - We've cleared the backlog and are processing jobs as expected now. Thank you for patience while we worked through this issue.Jan 10, 20:27 UTC Monitoring - Cause of the delays has been found and removed. The backlog is currently being processed while we monitor the situation closely.Jan 10, 18:32 UTC Investigating - We’re investigating build delays in `sudo: required` private repositories. Will update when we have more information.

Last Update: A few months ago

Build delays on sudo-enabled private repositories

Jan 10, 20:27 UTC Monitoring - Cause of the delays has been found and removed. The backlog is currently being processed while we monitor the situation closely.Jan 10, 18:32 UTC Investigating - We’re investigating build delays in `sudo: required` private repositories. Will update when we have more information.

Last Update: A few months ago

Build delays on sudo-enabled private repositories

Jan 10, 18:32 UTC Investigating - We’re investigating build delays in `sudo: required` private repositories. Will update when we have more information.

Last Update: A few months ago

Boot errors for Xcode 8.2 macOS image

Dec 16, 01:23 UTC Resolved - The Xcode 8.2 image has been restored and is booting correctly.Dec 16, 01:11 UTC Investigating - We're investigating errors booting the Xcode 8.2 image on our macOS infrastructure. All other images are booting correctly.

Last Update: A few months ago
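
Editor's note: the Xcode 8.2 image referenced here is the one a job selects with the `osx_image` key in .travis.yml. A minimal sketch, assuming the image names Travis documented at the time:

    # .travis.yml (sketch)
    os: osx
    # Selects the macOS image whose boot errors this incident describes;
    # other images (e.g. older Xcode releases) were booting correctly per the update above.
    osx_image: xcode8.2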

Boot errors for Xcode 8.2 macOS image

Dec 16, 01:11 UTC Investigating - We're investigating errors booting the Xcode 8.2 image on our macOS infrastructure. All other images are booting correctly.

Last Update: A few months ago

Intermittent slow networking performance

Dec 15, 20:53 UTC Resolved - We've had reports from earlier affected projects that this issue is now fixed. Please contact us at support@travis-ci.com if you are still seeing this issue. We would be happy to have another look. Thank you for your patience and happy building! ✨Dec 15, 18:36 UTC Monitoring - We have identified a potentially affected component with our upstream service provider. We believe that the issue has been resolved. If your jobs were previously affected, please restart them. We continue to monitor the issue closely.Dec 15, 17:27 UTC Investigating - We have received a few reports of intermittent network failures. This can manifest in a variety of ways when the network is involved. Symptoms include (but are not limited to): failure to fetch the APT whitelist, slow connections to NPM, and slow dependency resolution with Bundler.

Last Update: A few months ago
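
Editor's note: for transient network problems like the symptoms listed above, wrapping network-heavy steps in `travis_retry` (a helper available inside the Travis build environment) is one way to make jobs more resilient. A minimal .travis.yml sketch:

    # .travis.yml (sketch)
    install:
      # travis_retry re-runs a command up to a few times if it exits non-zero,
      # which helps ride out brief registry or mirror slowness.
      - travis_retry bundle install
      - travis_retry npm install

This does not fix an upstream outage, but it cuts down on spurious failures caused by short-lived slowness.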

Intermittent slow networking performance

Dec 15, 18:36 UTC Monitoring - We have identified a potentially affected component with our upstream service provider. We believe that the issue has been resolved. If your jobs were previously affected, please restart them. We continue to monitor the issue closely.Dec 15, 17:27 UTC Investigating - We have received a few reports of intermittent network failures. This can manifest in a variety of ways when the network is involved. Symptoms include (but are not limited to): failure to fetch the APT whitelist, slow connections to NPM, and slow dependency resolution with Bundler.

Last Update: A few months ago

Degraded realtime logging performance

Dec 15, 17:28 UTC Resolved - This incident has been resolved.Dec 15, 17:18 UTC Investigating - We are currently investigating performance issues that could manifest in slow UI log updates.

Last Update: A few months ago

Intermittent slow networking performance

Dec 15, 17:27 UTC Investigating - We have received a few reports of intermittent network failures. This can manifest in a variety of ways when the network is involved. Symptoms include (but are not limited to): failure to fetch the APT whitelist, slow connections to NPM, and slow dependency resolution with Bundler.

Last Update: A few months ago

Degraded realtime logging performance

Dec 15, 17:18 UTC Investigating - We are currently investigating performance issues that could manifest in slow UI log updates.

Last Update: A few months ago

Mac build outage

Dec 15, 04:00 UTC Resolved - This incident has been resolved.Dec 15, 03:34 UTC Monitoring - We opted to perform a full restart, which has now completed. We're bringing job capacity back online and watching closely.Dec 15, 02:42 UTC Investigating - We are investigating a critical component in our Mac infrastructure that is unhealthy, resulting in no jobs being processed.

Last Update: A few months ago

Mac build outage

Dec 15, 03:34 UTC Monitoring - We opted to perform a full restart, which has now completed. We're bringing job capacity back online and watching closely.Dec 15, 02:42 UTC Investigating - We are investigating a critical component in our Mac infrastructure that is unhealthy, resulting in no jobs being processed.

Last Update: A few months ago

Mac build outage

Dec 15, 02:42 UTC Investigating - We are investigating a critical component in our Mac infrastructure that is unhealthy, resulting in no jobs being processed.

Last Update: A few months ago

Boot errors on macOS infrastructure

Dec 14, 06:51 UTC Resolved - We've worked through the backlog on travis-ci.com and travis-ci.org and jobs are running normally.Dec 14, 06:42 UTC Update - We're back to running at full capacity.Dec 14, 06:29 UTC Monitoring - The issue with the virtualization infrastructure is fixed and we're working on bringing capacity back online.Dec 14, 05:29 UTC Identified - We've identified an issue with our virtualization infrastructure causing all macOS boots to fail and we're working with our infrastructure provider to restore service.Dec 14, 04:56 UTC Investigating - We're investigating reports of boot errors on our macOS infrastructure.

Last Update: A few months ago

Boot errors on macOS infrastructure

Dec 14, 06:42 UTC Update - We're back to running at full capacity.Dec 14, 06:29 UTC Monitoring - The issue with the virtualization infrastructure is fixed and we're working on bringing capacity back online.Dec 14, 05:29 UTC Identified - We've identified an issue with our virtualization infrastructure causing all macOS boots to fail and we're working with our infrastructure provider to restore service.Dec 14, 04:56 UTC Investigating - We're investigating reports of boot errors on our macOS infrastructure.

Last Update: A few months ago

Boot errors on macOS infrastructure

Dec 14, 06:29 UTC Monitoring - The issue with the virtualization infrastructure is fixed and we're working on bringing capacity back online.Dec 14, 05:29 UTC Identified - We've identified an issue with our virtualization infrastructure causing all macOS boots to fail and we're working with our infrastructure provider to restore service.Dec 14, 04:56 UTC Investigating - We're investigating reports of boot errors on our macOS infrastructure.

Last Update: A few months ago

Boot errors on macOS infrastructure

Dec 14, 05:29 UTC Identified - We've identified an issue with our virtualization infrastructure causing all macOS boots to fail and we're working with our infrastructure provider to restore service.Dec 14, 04:56 UTC Investigating - We're investigating reports of boot errors on our macOS infrastructure.

Last Update: A few months ago

Boot errors on macOS infrastructure

Dec 14, 04:56 UTC Investigating - We're investigating reports of boot errors on our macOS infrastructure.

Last Update: A few months ago

Node.js build failures

Dec 9, 18:18 UTC Resolved - Node.js builds are working again, as per https://twitter.com/nodejs/status/807284428774998018. Thanks for your patience!Dec 9, 17:02 UTC Monitoring - Node.js team reports that the issue should be resolved. We are monitoring the situation closely.Dec 9, 16:36 UTC Update - The Node.js team has identified the issue and is working with the service provider to resolve the issue. See https://github.com/nodejs/build/issues/562 for updates.Dec 9, 16:32 UTC Identified - We are experiencing Node.js build failures due to slow connection to https://nodejs.org.

Last Update: A few months ago

Node.js build failures

Dec 9, 17:02 UTC Monitoring - Node.js team reports that the issue should be resolved. We are monitoring the situation closely.Dec 9, 16:36 UTC Update - The Node.js team has identified the issue and is working with the service provider to resolve the issue. See https://github.com/nodejs/build/issues/562 for updates.Dec 9, 16:32 UTC Identified - We are experiencing Node.js build failures due to slow connection to https://nodejs.org.

Last Update: A few months ago

Node.js build failures

Dec 9, 16:36 UTC Update - The Node.js team has identified the issue and is working with the service provider to resolve the issue. See https://github.com/nodejs/build/issues/562 for updates.Dec 9, 16:32 UTC Identified - We are experiencing Node.js build failures due to slow connection to https://nodejs.org.

Last Update: A few months ago

Node.js build failures

Dec 9, 16:32 UTC Identified - We are experiencing Node.js build failures due to slow connection to https://nodejs.org. The Node.js team has identified the issue and is working with the service provider to resolve the issue. See https://github.com/nodejs/build/issues/562 for updates.

Last Update: A few months ago

Sudo-enabled builds requeueing on travis-ci.com

Dec 8, 23:57 UTC Resolved - The backlog of sudo-enabled builds has cleared on travis-ci.com. We are sorry for the interruption.Dec 8, 23:25 UTC Monitoring - We have resolved the requeuing problem and are working through the accumulated backlog.Dec 8, 22:28 UTC Update - We are still resolving the job requeuing delays.Dec 8, 21:09 UTC Update - We are rolling out changes to our travis-ci.com sudo-enabled infrastructure to resolve stability issues. Expect some builds to be requeued, causing further delays.Dec 8, 19:49 UTC Identified - We are currently working to resolve an issue with jobs being requeued on sudo-enabled builds on travis-ci.com.

Last Update: A few months ago

Sudo-enabled builds requeueing on travis-ci.com

Dec 8, 23:25 UTC Monitoring - We have resolved the requeuing problem and are working through the accumulated backlog.Dec 8, 22:28 UTC Update - We are still resolving the job requeuing delays.Dec 8, 21:09 UTC Update - We are rolling out changes to our travis-ci.com sudo-enabled infrastructure to resolve stability issues. Expect some builds to be requeued, causing further delays.Dec 8, 19:49 UTC Identified - We are currently working to resolve an issue with jobs being requeued on sudo-enabled builds on travis-ci.com.

Last Update: A few months ago

Sudo-enabled builds requeueing on travis-ci.com

Dec 8, 22:28 UTC Update - We are still resolving the job requeuing delays.Dec 8, 21:09 UTC Update - We are rolling out changes to our travis-ci.com sudo-enabled infrastructure to resolve stability issues. Expect some builds to be requeued, causing further delays.Dec 8, 19:49 UTC Identified - We are currently working to resolve an issue with jobs being requeued on sudo-enabled builds on travis-ci.com.

Last Update: A few months ago

Sudo-enabled builds requeueing on travis-ci.com

Dec 8, 21:09 UTC Update - We are rolling out changes to our travis-ci.com sudo-enabled infrastructure to resolve stability issues. Expect some builds to be requeued, causing further delays.Dec 8, 19:49 UTC Identified - We are currently working to resolve an issue with jobs being requeued on sudo-enabled builds on travis-ci.com.

Last Update: A few months ago

Sudo-enabled builds requeueing on travis-ci.com

Dec 8, 19:49 UTC Identified - We are currently working to resolve an issue with jobs being requeued on sudo-enabled builds on travis-ci.com.

Last Update: A few months ago

Delayed builds on all infrastructures because of a GitHub outage

Dec 6, 21:20 UTC Resolved - Builds are running smoothly on all infrastructures. Happy building!Dec 6, 20:55 UTC Monitoring - The GitHub outage has been resolved. We are seeing builds being processed at a higher rate. We are continually monitoring the situation.Dec 6, 20:20 UTC Identified - Your builds will be delayed while we are waiting for GitHub to fix the current outage (see https://status.github.com/messages). Thank you for your patience and we will provide an update as soon as we know more.

Last Update: A few months ago

Delayed builds on all infrastructures because of a GitHub outage

Dec 6, 20:55 UTC Monitoring - The GitHub outage has been resolved. We are seeing builds being processed at a higher rate. We are continually monitoring the situation.Dec 6, 20:20 UTC Identified - Your builds will be delayed while we are waiting for GitHub to fix the current outage (see https://status.github.com/messages). Thank you for your patience and we will provide an update as soon as we know more.

Last Update: A few months ago

Delayed builds on all infrastructures because of a GitHub outage

Dec 6, 20:20 UTC Identified - Your builds will be delayed while we are waiting for GitHub to fix the current outage (see https://status.github.com/messages). Thank you for your patience and we will provide an update as soon as we know more.

Last Update: A few months ago

Build delays in our OS X infrastructure

Dec 3, 05:31 UTC Resolved - The backlog for .org OS X builds has cleared. 👍🎉 Resolving incident.Dec 2, 18:18 UTC Update - The backlog on travis-ci.com has cleared; we're still monitoring the backlog on travis-ci.org.Dec 2, 17:11 UTC Update - We're continuing to process through the backlog without issues, but due to the size of the backlog there are still long delays for builds to run. We're continuously monitoring the situation in order to maintain the best possible throughput. We're truly sorry for this lengthy interruption to your builds and we will be publishing a postmortem next week.Dec 2, 12:03 UTC Update - We are beginning to process the backlog at full capacity and continue to monitor closely.Dec 2, 11:29 UTC Monitoring - We are slowly starting up capacity and resuming builds.Dec 2, 11:14 UTC Identified - We identified an issue that is preventing build VMs from booting and are working on a fix.Dec 2, 10:48 UTC Investigating - We are experiencing some issues while restoring OS X capacity and are investigating.Dec 2, 09:46 UTC Monitoring - We are restarting the workers for our .org open source OS X builds and monitoring to ensure the system is stable.Dec 2, 06:46 UTC Update - Our infrastructure provider is placing our hypervisor host under maintenance while they perform additional clean-ups.Dec 2, 06:09 UTC Update - Our infrastructure provider is doing a rolling restart of all our physical hardware.Dec 2, 05:04 UTC Investigating - Our infrastructure provider is resolving an issue with hypervisor hosts. All OS X builds are stopped while they take down hosts, reload, and migrate.Dec 2, 04:45 UTC Monitoring - OS X builds are running at full capacity for .com and .org. Monitoring.Dec 2, 04:27 UTC Update - Host hypervisors have been restarted, and OS X builds are resuming first for .com, then .org.Dec 2, 03:21 UTC Identified - We've stopped all OS X workers processing jobs for .com and .org while we stabilize our hypervisor hosts.Dec 2, 02:40 UTC Update - OS X .org workers have restarted and are resuming jobs at full capacity.Dec 2, 02:06 UTC Update - We are restarting the workers for our .org open source OS X builds. Please stand by as jobs will resume shortly after.Nov 30, 23:54 UTC Update - The OS X .com backlog has cleared. Still processing open source OS X builds.Nov 30, 18:15 UTC Update - Resumed OS X builds at full capacity, monitoring performance.Nov 30, 17:37 UTC Monitoring - We have resumed partial OS X builds at reduced capacity, and are placing some of our build VM hosts in maintenance mode while we continue to monitor performance issues.Nov 30, 16:45 UTC Update - Temporary stoppage on .org and .com OS X builds to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity.Nov 30, 15:50 UTC Investigating - We are re-escalating this incident as we are experiencing high load on our hypervisor server, which is affecting all OS X build VMs for .org and .com.Nov 30, 06:12 UTC Update - Public repos are back to full capacity. We are working through the accumulated backlog.Nov 30, 04:21 UTC Update - The private repo backlog has cleared. We are continuing to run public repos at reduced capacity.Nov 30, 03:28 UTC Update - We have returned to full capacity for private repos.Nov 30, 02:44 UTC Update - We are still observing VM leakage, albeit very slight.
We are going to remain at half capacity and reassess in 1 hour.Nov 30, 01:45 UTC Monitoring - We have restarted jobs at half capacity and are monitoring VM lifecycle metrics.Nov 30, 00:54 UTC Update - We have stopped all jobs in order to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity while monitoring VM leakage.Nov 30, 00:37 UTC Identified - We are re-escalating this incident given ongoing issues with VM lifecycle management.Nov 29, 20:04 UTC Monitoring - We have resumed all OS X builds on .org and .com at full capacity, and will start to work through the backlog while we monitor the recent changes.Nov 29, 19:40 UTC Update - Temporary stoppage on .org and .com OS X builds. We're pushing fixes to our VM cloud manager and hypervisor client, and will need to restart these services.Nov 29, 17:49 UTC Identified - OS X builds for .org and .com are processing at 50% capacity while we continue to debug our VM cloud manager issues.Nov 29, 17:23 UTC Monitoring - All the OS X workers are back online. Jobs will be delayed until we’re able to catch up through the backlog of jobs waiting to be built.Nov 29, 17:02 UTC Identified - We found an issue with our virtual machine cloud manager. We’ve now brought back up part of the workers and we’re slowly resuming OS X jobs.Nov 29, 16:39 UTC Update - We’ve stopped all OS X builds to investigate further and are working on restoring them as quickly as possible.Nov 29, 16:30 UTC Investigating - We’re experiencing issues booting OS X jobs. This is causing severe build delays in our OS X infrastructure. We are currently investigating and will post an update as soon as we know some more.

Last Update: A few months ago

Build delays in our OS X infrastructure

Dec 2, 18:18 UTC Update - The backlog on travis-ci.com has cleared, we're still monitoring the backlog on travis-ci.org.Dec 2, 17:11 UTC Update - We're continuing to process through the backlog without issues, but due to the size of the backlog there are still long delays for builds to run. We're continuously monitoring the situation in order to maintain the best possible throughput. We're truly sorry for this lengthy interruption to your builds and we will be publishing a postmortem next week.Dec 2, 12:03 UTC Update - We are beginning to process backlog at full capacity and continue to monitor closely.Dec 2, 11:29 UTC Monitoring - We are slowly starting up capacity and resuming builds.Dec 2, 11:14 UTC Identified - We identified an issue that is preventing build VMs form booting and are working on a fix.Dec 2, 10:48 UTC Investigating - We are experiencing some issues while restoring OSX capacity and are investigating.Dec 2, 09:46 UTC Monitoring - We are restarting the workers for our .org open source OS X builds and monitoring to ensure the system is stable.Dec 2, 06:46 UTC Update - Our Infrastructure provider is placing our hypervisor host under maintenance while they perform additional clean-ups.Dec 2, 06:09 UTC Update - Our infrastructure provider is doing a rolling restart of all our physical hardware.Dec 2, 05:04 UTC Investigating - Our infrastructure provider is resolving an issue with Hypervisor hosts. All OS X Builds are stopped while they take down hosts, reload, and migrate.Dec 2, 04:45 UTC Monitoring - OS X builds are running at full capacity for .com and .org. Monitoring.Dec 2, 04:27 UTC Update - Host hypervisors have been restarted, OS X builds are resuming first for .com, then .orgDec 2, 03:21 UTC Identified - We've stopped all OS X workers processing jobs for .com and .org while we stabilize our hypervisor hostsDec 2, 02:40 UTC Update - OS X .org workers have restarted and resuming jobs at full capacity.Dec 2, 02:06 UTC Update - We are restarting the workers for our .org open source OS X builds. Please stand by as jobs will resume shortly after.Nov 30, 23:54 UTC Update - OS X .com backlog has cleared. Still processing open source OS X builds.Nov 30, 18:15 UTC Update - Resumed OS X builds at full capacity, monitoring performance.Nov 30, 17:37 UTC Monitoring - We have resumed partial OS X builds at reduced capacity, and placing some of our build vm hosts in maintenance mode while we continue to monitor performance issues.Nov 30, 16:45 UTC Update - Temporary stoppage on .org and .com OS X builds to drain VM pool again, after which we will restart the flow of jobs at reduced capacityNov 30, 15:50 UTC Investigating - We are re-escalating this incident as we are experiencing high load on our hypervisor server, which is affecting all OS X build vms for .org and .comNov 30, 06:12 UTC Update - Public repos are back to full capacity. We are working through the accumulated backlog.Nov 30, 04:21 UTC Update - The private repo backlog has cleared. We are continuing to run public repos at reduced capacity.Nov 30, 03:28 UTC Update - We have returned to full capacity for private repos.Nov 30, 02:44 UTC Update - We are still observing VM leakage, albeit very slight. 
We are going to remain at half capacity and reassess in 1 hour.Nov 30, 01:45 UTC Monitoring - We have restarted jobs at half capacity and are monitoring VM lifecycle metrics.Nov 30, 00:54 UTC Update - We have stopped all jobs in order to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity while monitoring VM leakage.Nov 30, 00:37 UTC Identified - We are re-escalating this incident given ongoing issues with VM lifecycle management.Nov 29, 20:04 UTC Monitoring - We have resumed all OS X builds on .org and .com at full capacity, and will start to work through the backlog while we monitor the recent changes.Nov 29, 19:40 UTC Update - Temporary stoppage on .org and .com OS X builds. We're pushing fixes to our VM cloud manager and hypervisor client, and will need to restart these services.Nov 29, 17:49 UTC Identified - OS X builds for .org and .com are processing at 50% capacity while we continue to debug our VM cloud manager issues.Nov 29, 17:23 UTC Monitoring - All the OS X workers are back online. Jobs will be delayed until we’re able to catch up through the backlog of jobs waiting to be built.Nov 29, 17:02 UTC Identified - We found an issue with our virtual machine cloud manager. We’ve now brought back up part of the workers and we’re slowly resuming OS X jobs.Nov 29, 16:39 UTC Update - We’ve stopped all OS X builds to investigate further and are working on restoring them as quickly as possible.Nov 29, 16:30 UTC Investigating - We’re experiencing issues booting OS X jobs. This is causing severe build delays in our OS X infrastructure. We are currently investigating and will post an update as soon as we know some more.

Last Update: A few months ago

Build delays in our OS X infrastructure

Dec 2, 17:11 UTC Update - We're continuing to process through the backlog without issues, but due to the size of the backlog there are still long delays for builds to run. We're continuously monitoring the situation in order to maintain the best possible throughput. We're truly sorry for this lengthy interruption to your builds and we will be publishing a postmortem next week.Dec 2, 12:03 UTC Update - We are beginning to process backlog at full capacity and continue to monitor closely.Dec 2, 11:29 UTC Monitoring - We are slowly starting up capacity and resuming builds.Dec 2, 11:14 UTC Identified - We identified an issue that is preventing build VMs form booting and are working on a fix.Dec 2, 10:48 UTC Investigating - We are experiencing some issues while restoring OSX capacity and are investigating.Dec 2, 09:46 UTC Monitoring - We are restarting the workers for our .org open source OS X builds and monitoring to ensure the system is stable.Dec 2, 06:46 UTC Update - Our Infrastructure provider is placing our hypervisor host under maintenance while they perform additional clean-ups.Dec 2, 06:09 UTC Update - Our infrastructure provider is doing a rolling restart of all our physical hardware.Dec 2, 05:04 UTC Investigating - Our infrastructure provider is resolving an issue with Hypervisor hosts. All OS X Builds are stopped while they take down hosts, reload, and migrate.Dec 2, 04:45 UTC Monitoring - OS X builds are running at full capacity for .com and .org. Monitoring.Dec 2, 04:27 UTC Update - Host hypervisors have been restarted, OS X builds are resuming first for .com, then .orgDec 2, 03:21 UTC Identified - We've stopped all OS X workers processing jobs for .com and .org while we stabilize our hypervisor hostsDec 2, 02:40 UTC Update - OS X .org workers have restarted and resuming jobs at full capacity.Dec 2, 02:06 UTC Update - We are restarting the workers for our .org open source OS X builds. Please stand by as jobs will resume shortly after.Nov 30, 23:54 UTC Update - OS X .com backlog has cleared. Still processing open source OS X builds.Nov 30, 18:15 UTC Update - Resumed OS X builds at full capacity, monitoring performance.Nov 30, 17:37 UTC Monitoring - We have resumed partial OS X builds at reduced capacity, and placing some of our build vm hosts in maintenance mode while we continue to monitor performance issues.Nov 30, 16:45 UTC Update - Temporary stoppage on .org and .com OS X builds to drain VM pool again, after which we will restart the flow of jobs at reduced capacityNov 30, 15:50 UTC Investigating - We are re-escalating this incident as we are experiencing high load on our hypervisor server, which is affecting all OS X build vms for .org and .comNov 30, 06:12 UTC Update - Public repos are back to full capacity. We are working through the accumulated backlog.Nov 30, 04:21 UTC Update - The private repo backlog has cleared. We are continuing to run public repos at reduced capacity.Nov 30, 03:28 UTC Update - We have returned to full capacity for private repos.Nov 30, 02:44 UTC Update - We are still observing VM leakage, albeit very slight. 
We are going to remain at half capacity and reassess in 1 hour.Nov 30, 01:45 UTC Monitoring - We have restarted jobs at half capacity and are monitoring VM lifecycle metrics.Nov 30, 00:54 UTC Update - We have stopped all jobs in order to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity while monitoring VM leakage.Nov 30, 00:37 UTC Identified - We are re-escalating this incident given ongoing issues with VM lifecycle management.Nov 29, 20:04 UTC Monitoring - We have resumed all OS X builds on .org and .com at full capacity, and will start to work through the backlog while we monitor the recent changes.Nov 29, 19:40 UTC Update - Temporary stoppage on .org and .com OS X builds. We're pushing fixes to our VM cloud manager and hypervisor client, and will need to restart these services.Nov 29, 17:49 UTC Identified - OS X builds for .org and .com are processing at 50% capacity while we continue to debug our VM cloud manager issues.Nov 29, 17:23 UTC Monitoring - All the OS X workers are back online. Jobs will be delayed until we’re able to catch up through the backlog of jobs waiting to be built.Nov 29, 17:02 UTC Identified - We found an issue with our virtual machine cloud manager. We’ve now brought back up part of the workers and we’re slowly resuming OS X jobs.Nov 29, 16:39 UTC Update - We’ve stopped all OS X builds to investigate further and are working on restoring them as quickly as possible.Nov 29, 16:30 UTC Investigating - We’re experiencing issues booting OS X jobs. This is causing severe build delays in our OS X infrastructure. We are currently investigating and will post an update as soon as we know some more.

Last Update: A few months ago

Build delays in our OS X infrastructure

Dec 2, 12:03 UTC Update - We are beginning to process backlog at full capacity and continue to monitor closely.Dec 2, 11:29 UTC Monitoring - We are slowly starting up capacity and resuming builds.Dec 2, 11:14 UTC Identified - We identified an issue that is preventing build VMs form booting and are working on a fix.Dec 2, 10:48 UTC Investigating - We are experiencing some issues while restoring OSX capacity and are investigating.Dec 2, 09:46 UTC Monitoring - We are restarting the workers for our .org open source OS X builds and monitoring to ensure the system is stable.Dec 2, 06:46 UTC Update - Our Infrastructure provider is placing our hypervisor host under maintenance while they perform additional clean-ups.Dec 2, 06:09 UTC Update - Our infrastructure provider is doing a rolling restart of all our physical hardware.Dec 2, 05:04 UTC Investigating - Our infrastructure provider is resolving an issue with Hypervisor hosts. All OS X Builds are stopped while they take down hosts, reload, and migrate.Dec 2, 04:45 UTC Monitoring - OS X builds are running at full capacity for .com and .org. Monitoring.Dec 2, 04:27 UTC Update - Host hypervisors have been restarted, OS X builds are resuming first for .com, then .orgDec 2, 03:21 UTC Identified - We've stopped all OS X workers processing jobs for .com and .org while we stabilize our hypervisor hostsDec 2, 02:40 UTC Update - OS X .org workers have restarted and resuming jobs at full capacity.Dec 2, 02:06 UTC Update - We are restarting the workers for our .org open source OS X builds. Please stand by as jobs will resume shortly after.Nov 30, 23:54 UTC Update - OS X .com backlog has cleared. Still processing open source OS X builds.Nov 30, 18:15 UTC Update - Resumed OS X builds at full capacity, monitoring performance.Nov 30, 17:37 UTC Monitoring - We have resumed partial OS X builds at reduced capacity, and placing some of our build vm hosts in maintenance mode while we continue to monitor performance issues.Nov 30, 16:45 UTC Update - Temporary stoppage on .org and .com OS X builds to drain VM pool again, after which we will restart the flow of jobs at reduced capacityNov 30, 15:50 UTC Investigating - We are re-escalating this incident as we are experiencing high load on our hypervisor server, which is affecting all OS X build vms for .org and .comNov 30, 06:12 UTC Update - Public repos are back to full capacity. We are working through the accumulated backlog.Nov 30, 04:21 UTC Update - The private repo backlog has cleared. We are continuing to run public repos at reduced capacity.Nov 30, 03:28 UTC Update - We have returned to full capacity for private repos.Nov 30, 02:44 UTC Update - We are still observing VM leakage, albeit very slight. We are going to remain at half capacity and reassess in 1 hour.Nov 30, 01:45 UTC Monitoring - We have restarted jobs at half capacity and are monitoring VM lifecycle metrics.Nov 30, 00:54 UTC Update - We have stopped all jobs in order to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity while monitoring VM leakage.Nov 30, 00:37 UTC Identified - We are re-escalating this incident given ongoing issues with VM lifecycle management.Nov 29, 20:04 UTC Monitoring - We have resumed all OS X builds on .org and .com at full capacity, and will start to work through the backlog while we monitor the recent changes.Nov 29, 19:40 UTC Update - Temporary stoppage on .org and .com OS X builds. 
We're pushing fixes to our VM cloud manager and hypervisor client, and will need to restart these services.Nov 29, 17:49 UTC Identified - OS X builds for .org and .com are processing at 50% capacity while we continue to debug our VM cloud manager issues.Nov 29, 17:23 UTC Monitoring - All the OS X workers are back online. Jobs will be delayed until we’re able to catch up through the backlog of jobs waiting to be built.Nov 29, 17:02 UTC Identified - We found an issue with our virtual machine cloud manager. We’ve now brought back up part of the workers and we’re slowly resuming OS X jobs.Nov 29, 16:39 UTC Update - We’ve stopped all OS X builds to investigate further and are working on restoring them as quickly as possible.Nov 29, 16:30 UTC Investigating - We’re experiencing issues booting OS X jobs. This is causing severe build delays in our OS X infrastructure. We are currently investigating and will post an update as soon as we know some more.

Last Update: A few months ago

Build delays in our OS X infrastructure

Dec 2, 11:29 UTC Monitoring - We are slowly starting up capacity and resuming builds.Dec 2, 11:14 UTC Identified - We identified an issue that is preventing build VMs form booting and are working on a fix.Dec 2, 10:48 UTC Investigating - We are experiencing some issues while restoring OSX capacity and are investigating.Dec 2, 09:46 UTC Monitoring - We are restarting the workers for our .org open source OS X builds and monitoring to ensure the system is stable.Dec 2, 06:46 UTC Update - Our Infrastructure provider is placing our hypervisor host under maintenance while they perform additional clean-ups.Dec 2, 06:09 UTC Update - Our infrastructure provider is doing a rolling restart of all our physical hardware.Dec 2, 05:04 UTC Investigating - Our infrastructure provider is resolving an issue with Hypervisor hosts. All OS X Builds are stopped while they take down hosts, reload, and migrate.Dec 2, 04:45 UTC Monitoring - OS X builds are running at full capacity for .com and .org. Monitoring.Dec 2, 04:27 UTC Update - Host hypervisors have been restarted, OS X builds are resuming first for .com, then .orgDec 2, 03:21 UTC Identified - We've stopped all OS X workers processing jobs for .com and .org while we stabilize our hypervisor hostsDec 2, 02:40 UTC Update - OS X .org workers have restarted and resuming jobs at full capacity.Dec 2, 02:06 UTC Update - We are restarting the workers for our .org open source OS X builds. Please stand by as jobs will resume shortly after.Nov 30, 23:54 UTC Update - OS X .com backlog has cleared. Still processing open source OS X builds.Nov 30, 18:15 UTC Update - Resumed OS X builds at full capacity, monitoring performance.Nov 30, 17:37 UTC Monitoring - We have resumed partial OS X builds at reduced capacity, and placing some of our build vm hosts in maintenance mode while we continue to monitor performance issues.Nov 30, 16:45 UTC Update - Temporary stoppage on .org and .com OS X builds to drain VM pool again, after which we will restart the flow of jobs at reduced capacityNov 30, 15:50 UTC Investigating - We are re-escalating this incident as we are experiencing high load on our hypervisor server, which is affecting all OS X build vms for .org and .comNov 30, 06:12 UTC Update - Public repos are back to full capacity. We are working through the accumulated backlog.Nov 30, 04:21 UTC Update - The private repo backlog has cleared. We are continuing to run public repos at reduced capacity.Nov 30, 03:28 UTC Update - We have returned to full capacity for private repos.Nov 30, 02:44 UTC Update - We are still observing VM leakage, albeit very slight. We are going to remain at half capacity and reassess in 1 hour.Nov 30, 01:45 UTC Monitoring - We have restarted jobs at half capacity and are monitoring VM lifecycle metrics.Nov 30, 00:54 UTC Update - We have stopped all jobs in order to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity while monitoring VM leakage.Nov 30, 00:37 UTC Identified - We are re-escalating this incident given ongoing issues with VM lifecycle management.Nov 29, 20:04 UTC Monitoring - We have resumed all OS X builds on .org and .com at full capacity, and will start to work through the backlog while we monitor the recent changes.Nov 29, 19:40 UTC Update - Temporary stoppage on .org and .com OS X builds. 
We're pushing fixes to our VM cloud manager and hypervisor client, and will need to restart these services.Nov 29, 17:49 UTC Identified - OS X builds for .org and .com are processing at 50% capacity while we continue to debug our VM cloud manager issues.Nov 29, 17:23 UTC Monitoring - All the OS X workers are back online. Jobs will be delayed until we’re able to catch up through the backlog of jobs waiting to be built.Nov 29, 17:02 UTC Identified - We found an issue with our virtual machine cloud manager. We’ve now brought back up part of the workers and we’re slowly resuming OS X jobs.Nov 29, 16:39 UTC Update - We’ve stopped all OS X builds to investigate further and are working on restoring them as quickly as possible.Nov 29, 16:30 UTC Investigating - We’re experiencing issues booting OS X jobs. This is causing severe build delays in our OS X infrastructure. We are currently investigating and will post an update as soon as we know some more.

Last Update: A few months ago

Build delays in our OS X infrastructure

Dec 2, 11:14 UTC Identified - We identified an issue that is preventing build VMs form booting and are working on a fix.Dec 2, 10:48 UTC Investigating - We are experiencing some issues while restoring OSX capacity and are investigating.Dec 2, 09:46 UTC Monitoring - We are restarting the workers for our .org open source OS X builds and monitoring to ensure the system is stable.Dec 2, 06:46 UTC Update - Our Infrastructure provider is placing our hypervisor host under maintenance while they perform additional clean-ups.Dec 2, 06:09 UTC Update - Our infrastructure provider is doing a rolling restart of all our physical hardware.Dec 2, 05:04 UTC Investigating - Our infrastructure provider is resolving an issue with Hypervisor hosts. All OS X Builds are stopped while they take down hosts, reload, and migrate.Dec 2, 04:45 UTC Monitoring - OS X builds are running at full capacity for .com and .org. Monitoring.Dec 2, 04:27 UTC Update - Host hypervisors have been restarted, OS X builds are resuming first for .com, then .orgDec 2, 03:21 UTC Identified - We've stopped all OS X workers processing jobs for .com and .org while we stabilize our hypervisor hostsDec 2, 02:40 UTC Update - OS X .org workers have restarted and resuming jobs at full capacity.Dec 2, 02:06 UTC Update - We are restarting the workers for our .org open source OS X builds. Please stand by as jobs will resume shortly after.Nov 30, 23:54 UTC Update - OS X .com backlog has cleared. Still processing open source OS X builds.Nov 30, 18:15 UTC Update - Resumed OS X builds at full capacity, monitoring performance.Nov 30, 17:37 UTC Monitoring - We have resumed partial OS X builds at reduced capacity, and placing some of our build vm hosts in maintenance mode while we continue to monitor performance issues.Nov 30, 16:45 UTC Update - Temporary stoppage on .org and .com OS X builds to drain VM pool again, after which we will restart the flow of jobs at reduced capacityNov 30, 15:50 UTC Investigating - We are re-escalating this incident as we are experiencing high load on our hypervisor server, which is affecting all OS X build vms for .org and .comNov 30, 06:12 UTC Update - Public repos are back to full capacity. We are working through the accumulated backlog.Nov 30, 04:21 UTC Update - The private repo backlog has cleared. We are continuing to run public repos at reduced capacity.Nov 30, 03:28 UTC Update - We have returned to full capacity for private repos.Nov 30, 02:44 UTC Update - We are still observing VM leakage, albeit very slight. We are going to remain at half capacity and reassess in 1 hour.Nov 30, 01:45 UTC Monitoring - We have restarted jobs at half capacity and are monitoring VM lifecycle metrics.Nov 30, 00:54 UTC Update - We have stopped all jobs in order to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity while monitoring VM leakage.Nov 30, 00:37 UTC Identified - We are re-escalating this incident given ongoing issues with VM lifecycle management.Nov 29, 20:04 UTC Monitoring - We have resumed all OS X builds on .org and .com at full capacity, and will start to work through the backlog while we monitor the recent changes.Nov 29, 19:40 UTC Update - Temporary stoppage on .org and .com OS X builds. 
We're pushing fixes to our VM cloud manager and hypervisor client, and will need to restart these services.Nov 29, 17:49 UTC Identified - OS X builds for .org and .com are processing at 50% capacity while we continue to debug our VM cloud manager issues.Nov 29, 17:23 UTC Monitoring - All the OS X workers are back online. Jobs will be delayed until we’re able to catch up through the backlog of jobs waiting to be built.Nov 29, 17:02 UTC Identified - We found an issue with our virtual machine cloud manager. We’ve now brought back up part of the workers and we’re slowly resuming OS X jobs.Nov 29, 16:39 UTC Update - We’ve stopped all OS X builds to investigate further and are working on restoring them as quickly as possible.Nov 29, 16:30 UTC Investigating - We’re experiencing issues booting OS X jobs. This is causing severe build delays in our OS X infrastructure. We are currently investigating and will post an update as soon as we know some more.

Last Update: A few months ago

Build delays in our OS X infrastructure

Dec 2, 10:48 UTC Investigating - We are experiencing some issues while restoring OSX capacity and are investigating.Dec 2, 09:46 UTC Monitoring - We are restarting the workers for our .org open source OS X builds and monitoring to ensure the system is stable.Dec 2, 06:46 UTC Update - Our Infrastructure provider is placing our hypervisor host under maintenance while they perform additional clean-ups.Dec 2, 06:09 UTC Update - Our infrastructure provider is doing a rolling restart of all our physical hardware.Dec 2, 05:04 UTC Investigating - Our infrastructure provider is resolving an issue with Hypervisor hosts. All OS X Builds are stopped while they take down hosts, reload, and migrate.Dec 2, 04:45 UTC Monitoring - OS X builds are running at full capacity for .com and .org. Monitoring.Dec 2, 04:27 UTC Update - Host hypervisors have been restarted, OS X builds are resuming first for .com, then .orgDec 2, 03:21 UTC Identified - We've stopped all OS X workers processing jobs for .com and .org while we stabilize our hypervisor hostsDec 2, 02:40 UTC Update - OS X .org workers have restarted and resuming jobs at full capacity.Dec 2, 02:06 UTC Update - We are restarting the workers for our .org open source OS X builds. Please stand by as jobs will resume shortly after.Nov 30, 23:54 UTC Update - OS X .com backlog has cleared. Still processing open source OS X builds.Nov 30, 18:15 UTC Update - Resumed OS X builds at full capacity, monitoring performance.Nov 30, 17:37 UTC Monitoring - We have resumed partial OS X builds at reduced capacity, and placing some of our build vm hosts in maintenance mode while we continue to monitor performance issues.Nov 30, 16:45 UTC Update - Temporary stoppage on .org and .com OS X builds to drain VM pool again, after which we will restart the flow of jobs at reduced capacityNov 30, 15:50 UTC Investigating - We are re-escalating this incident as we are experiencing high load on our hypervisor server, which is affecting all OS X build vms for .org and .comNov 30, 06:12 UTC Update - Public repos are back to full capacity. We are working through the accumulated backlog.Nov 30, 04:21 UTC Update - The private repo backlog has cleared. We are continuing to run public repos at reduced capacity.Nov 30, 03:28 UTC Update - We have returned to full capacity for private repos.Nov 30, 02:44 UTC Update - We are still observing VM leakage, albeit very slight. We are going to remain at half capacity and reassess in 1 hour.Nov 30, 01:45 UTC Monitoring - We have restarted jobs at half capacity and are monitoring VM lifecycle metrics.Nov 30, 00:54 UTC Update - We have stopped all jobs in order to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity while monitoring VM leakage.Nov 30, 00:37 UTC Identified - We are re-escalating this incident given ongoing issues with VM lifecycle management.Nov 29, 20:04 UTC Monitoring - We have resumed all OS X builds on .org and .com at full capacity, and will start to work through the backlog while we monitor the recent changes.Nov 29, 19:40 UTC Update - Temporary stoppage on .org and .com OS X builds. We're pushing fixes to our VM cloud manager and hypervisor client, and will need to restart these services.Nov 29, 17:49 UTC Identified - OS X builds for .org and .com are processing at 50% capacity while we continue to debug our VM cloud manager issues.Nov 29, 17:23 UTC Monitoring - All the OS X workers are back online. 
Jobs will be delayed until we’re able to catch up through the backlog of jobs waiting to be built.Nov 29, 17:02 UTC Identified - We found an issue with our virtual machine cloud manager. We’ve now brought back up part of the workers and we’re slowly resuming OS X jobs.Nov 29, 16:39 UTC Update - We’ve stopped all OS X builds to investigate further and are working on restoring them as quickly as possible.Nov 29, 16:30 UTC Investigating - We’re experiencing issues booting OS X jobs. This is causing severe build delays in our OS X infrastructure. We are currently investigating and will post an update as soon as we know some more.

Last Update: A few months ago

Build delays in our OS X infrastructure

Dec 2, 09:46 UTC Monitoring - We are restarting the workers for our .org open source OS X builds and monitoring to ensure the system is stable.Dec 2, 06:46 UTC Update - Our Infrastructure provider is placing our hypervisor host under maintenance while they perform additional clean-ups.Dec 2, 06:09 UTC Update - Our infrastructure provider is doing a rolling restart of all our physical hardware.Dec 2, 05:04 UTC Investigating - Our infrastructure provider is resolving an issue with Hypervisor hosts. All OS X Builds are stopped while they take down hosts, reload, and migrate.Dec 2, 04:45 UTC Monitoring - OS X builds are running at full capacity for .com and .org. Monitoring.Dec 2, 04:27 UTC Update - Host hypervisors have been restarted, OS X builds are resuming first for .com, then .orgDec 2, 03:21 UTC Identified - We've stopped all OS X workers processing jobs for .com and .org while we stabilize our hypervisor hostsDec 2, 02:40 UTC Update - OS X .org workers have restarted and resuming jobs at full capacity.Dec 2, 02:06 UTC Update - We are restarting the workers for our .org open source OS X builds. Please stand by as jobs will resume shortly after.Nov 30, 23:54 UTC Update - OS X .com backlog has cleared. Still processing open source OS X builds.Nov 30, 18:15 UTC Update - Resumed OS X builds at full capacity, monitoring performance.Nov 30, 17:37 UTC Monitoring - We have resumed partial OS X builds at reduced capacity, and placing some of our build vm hosts in maintenance mode while we continue to monitor performance issues.Nov 30, 16:45 UTC Update - Temporary stoppage on .org and .com OS X builds to drain VM pool again, after which we will restart the flow of jobs at reduced capacityNov 30, 15:50 UTC Investigating - We are re-escalating this incident as we are experiencing high load on our hypervisor server, which is affecting all OS X build vms for .org and .comNov 30, 06:12 UTC Update - Public repos are back to full capacity. We are working through the accumulated backlog.Nov 30, 04:21 UTC Update - The private repo backlog has cleared. We are continuing to run public repos at reduced capacity.Nov 30, 03:28 UTC Update - We have returned to full capacity for private repos.Nov 30, 02:44 UTC Update - We are still observing VM leakage, albeit very slight. We are going to remain at half capacity and reassess in 1 hour.Nov 30, 01:45 UTC Monitoring - We have restarted jobs at half capacity and are monitoring VM lifecycle metrics.Nov 30, 00:54 UTC Update - We have stopped all jobs in order to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity while monitoring VM leakage.Nov 30, 00:37 UTC Identified - We are re-escalating this incident given ongoing issues with VM lifecycle management.Nov 29, 20:04 UTC Monitoring - We have resumed all OS X builds on .org and .com at full capacity, and will start to work through the backlog while we monitor the recent changes.Nov 29, 19:40 UTC Update - Temporary stoppage on .org and .com OS X builds. We're pushing fixes to our VM cloud manager and hypervisor client, and will need to restart these services.Nov 29, 17:49 UTC Identified - OS X builds for .org and .com are processing at 50% capacity while we continue to debug our VM cloud manager issues.Nov 29, 17:23 UTC Monitoring - All the OS X workers are back online. Jobs will be delayed until we’re able to catch up through the backlog of jobs waiting to be built.Nov 29, 17:02 UTC Identified - We found an issue with our virtual machine cloud manager. 
We’ve now brought back up part of the workers and we’re slowly resuming OS X jobs.Nov 29, 16:39 UTC Update - We’ve stopped all OS X builds to investigate further and are working on restoring them as quickly as possible.Nov 29, 16:30 UTC Investigating - We’re experiencing issues booting OS X jobs. This is causing severe build delays in our OS X infrastructure. We are currently investigating and will post an update as soon as we know some more.

Last Update: A few months ago

OS X VMs cannot resolve github.com

Nov 24, 04:57 UTC Resolved - We've caught up on the backlog from this incident and are processing builds as expected now. We apologize for the delay in recognizing this issue; it was not caught by our current monitoring for this piece of our infrastructure. We'll be reviewing system logs and identifying additional monitoring we can add to better detect this in the future. Thank you for your patience while we resolved this issue.
Nov 24, 04:50 UTC Update - We are beginning to resume OS X VMs for both public and private builds.
Nov 24, 04:41 UTC Update - We've restored the cluster to service and are doing some additional checks before we resume builds. Thank you for your patience.
Nov 24, 04:35 UTC Identified - We've determined that one of our clustered VM firewalls was in an inconsistent state, causing outbound network traffic to fail in an unexpected way that the rest of the cluster did not handle properly. We're working to restore this cluster to service.
Nov 24, 04:22 UTC Investigating - We are currently investigating why our OS X VMs cannot resolve github.com. This is preventing `git clone` from working and causes OS X builds to fail.

Last Update: A few months ago

OS X VMs cannot resolve github.com

Nov 24,