CloudStatus

Npmjs Status Alerts

npm
private package issues.

Dec 13, 18:32 UTC Resolved - This incident has been resolved.
Dec 13, 02:18 UTC Update - Things should be back to normal. We are monitoring private package access and billing. A few more cleanup actions will follow in the morning to ensure that accounts are as they should be.
Dec 12, 23:44 UTC Monitoring - Billing information should now be correct for most customers. We are restoring some missing changes from within the last 24 hours.
Dec 12, 23:21 UTC Update - The mitigation has been deployed and access to private packages has been restored. The website may still show that there is no payment information.
Dec 12, 23:06 UTC Identified - The issue has been identified but we are still working on the mitigation.
Dec 12, 22:40 UTC Update - We're deploying a potential mitigation to the issue.
Dec 12, 22:26 UTC Investigating - Some customers are unable to access private packages. We are investigating.

npm
Increased rate of 503s on packages

Nov 8, 23:23 UTC Resolved - We were seeing an increased rate of 503s due to a bug in our failover logic. This bug is being addressed to prevent these timeouts in the future.
Nov 8, 19:17 UTC Investigating - We're currently seeing an increased rate of 503s accessing some packages.

npm
503s on npmjs website and package pages

Oct 25, 21:23 UTC Resolved - This incident has been resolved.
Oct 25, 20:54 UTC Monitoring - We're currently monitoring a spike in 503s on the npmjs.com website.

npm
Registry and Website reads

Oct 23, 00:57 UTC Resolved - This incident has been resolved.
Oct 22, 23:40 UTC Investigating - Seeing an increased rate of 503s on website package pages as well as during installs in multiple regions.

npm
increased rate of 503s in European and East Coast POPs

Oct 18, 21:39 UTC Resolved - The publishing and installation issues were related to upstream network connectivity issues with AWS.
Oct 18, 21:16 UTC Investigating - We are seeing an increased rate of 503s in npm's European and East Coast servers. The issue is affecting these users' ability to publish modules and to install scoped modules. Our ops team is currently investigating.

npm
503s on packages

Sep 25, 13:37 UTC Resolved - Between 11:10 UTC and 12:30 UTC today we saw an elevated rate of 503s on requests for packages and package data. This should now be resolved.

npm
Increased 503 Rate in SE Asia

Sep 22, 08:05 UTC Resolved - We are not seeing increased error rates anymore. We'll continue monitoring this incident internally.
Sep 22, 06:48 UTC Monitoring - The error rate has decreased. We are monitoring and preparing to reroute if the error rate increases.
Sep 22, 06:45 UTC Investigating - We're seeing a high error rate from our SE Asia point of presence; the registry and website are affected. Investigating!

npm
503s for scoped packages

Sep 20, 21:24 UTC Resolved - This incident has been resolved.
Sep 20, 20:03 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Sep 20, 16:24 UTC Identified - The issue has been identified and a fix is being implemented.
Sep 20, 15:59 UTC Investigating - We are currently investigating an uptick in 503 responses for scoped package requests.

npm
500s on package servers and npmjs.com

Sep 6, 23:38 UTC Resolved - We've resolved an issue regarding 500s on package servers and npmjs.com. Package installs and npmjs.com page views should have returned to normal. We identified the culprit as a bot making an unusually high rate of requests on a route that was not cached. We've blocked the offending IP and intend to implement additional measures to prevent this from occurring again in the future.
Sep 6, 21:36 UTC Investigating - We are currently investigating an issue where we are seeing intermittent 500s on package installs and pages from npmjs.com. Package servers are experiencing delays, and you may notice website and installation failures.

npm
Delays in ACL & Payment Status changes

Aug 23, 03:46 UTC Resolved - The queue has caught up. Package ownership & scope payment changes should be reflected immediately.
Aug 22, 23:22 UTC Monitoring - We've rolled out a fix for the lagging queue, and are monitoring its progress. Delays are expected.
Aug 22, 21:30 UTC Identified - We have identified the slowdown as coming from an unusually high number of incoming invalidation events. The backlog is being slowly worked through, but not at an acceptable pace. We're now working on mitigations so we can process the backlog more rapidly and keep up with the incoming volume.
Aug 22, 19:30 UTC Investigating - An invalidation queue is running behind. We are investigating the root cause. Users upgrading an unpaid org or user to a paid user or org may experience errors when accessing existing private packages. New private packages are unaffected. Adding or removing maintainers from a package will not be reflected in "npm owner ls".
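
For affected package owners, a quick way to see whether an ownership change has propagated yet is the standard owner listing command (the package name below is a placeholder):

    # list the maintainers the registry currently reports for a package
    $ npm owner ls some-package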

npm
increased timeouts on registry and website

Jul 13, 23:18 UTC Resolved - Between 3:55 and 4:10 (PDT) an increased rate of 503s was observed on the registry and the npmjs.com website. We believe we have isolated the underlying issue and corrected it. We will be looking into adding additional monitoring to identify this specific category of failure faster.

npm
Increased 503 rates for EU users requesting scoped packages

Jun 29, 16:41 UTC Resolved - At around 14:10 UTC our monitoring alerted us of increased registry 503 rates. We've isolated the issue to our EU region and identified the cause to be a misconfiguration of our deployment stack on one of the servers in that region. The issue was resolved around 14:55 UTC. We've since taken additional steps to ensure consistency of our deployment configuration in other regions and will be taking steps to maintain that consistency.

npm
Increased error rates due to a CDN provider outage

Jun 28, 14:41 UTC Resolved - Starting at around 13:50 UTC our monitoring started reporting increased error rates, which we shortly after identified as related to an outage of our CDN provider. The situation was resolved by that provider at around 14:03 UTC. We are continuing to pay close attention to our metrics.

npm
Experiencing problems with scoped package downloads

Jun 13, 21:46 UTC Resolved - A service had fallen over and wasn't coming back up. We brought it back up.
Jun 13, 21:40 UTC Investigating - We're experiencing an increased 503 rate for scoped package JSON documents as well as tarballs.

npm
Increased 503 rates for public packages

Jun 9, 14:30 UTC Resolved - We've identified the issue as a case of a service's process getting stuck and resolved it by rolling the affected service.
Jun 9, 14:25 UTC Investigating - We're currently investigating issues related to accessing scoped packages.

npm
elevated level of 503s on /search endpoint

May 9, 21:10 UTC Resolved - This incident has been resolved.
May 9, 19:38 UTC Monitoring - A fix has been implemented and we are monitoring the results.
May 9, 19:24 UTC Investigating - We are currently seeing an elevated level of 503s on all requests to our /search endpoint. The team is currently investigating.

npm
Tarball and package pages outage

May 8, 21:18 UTC Resolved - We have identified the outage as being caused by one of our package servers' TLS terminator shutting down due to a misconfiguration. In addition, that service did not restart properly, requiring human intervention. We are taking steps to fix the configuration that caused the incident and to add more detailed monitoring.
May 8, 20:50 UTC Investigating - We are currently investigating an outage causing unavailability of the registry's tarballs and the website's package pages.

npm
503s for scoped packages

May 4, 13:52 UTC Resolved - This incident has been resolved.
May 4, 12:38 UTC Monitoring - A fix has been implemented and we are monitoring the results.
May 4, 12:27 UTC Investigating - We are currently experiencing 503s for scoped packages; the team is investigating.

npm
elevated level of 503s in Europe

May 2, 15:19 UTC Resolved - This incident has been resolved.
May 2, 15:08 UTC Monitoring - A fix has been implemented and we are monitoring the results.
May 2, 15:00 UTC Investigating - We are currently seeing an elevated level of 503s for our Points of Presence in Europe, affecting both the registry and the website.

npm
package tarball issues in us-east.

Apr 21, 02:36 UTC Resolved - Appears to have been a network flicker. Apologies for the trouble!
Apr 21, 02:35 UTC Investigating - We are investigating an issue with a tarball server in us-east.

npm
forced unpublishes currently failing

Mar 30, 02:00 UTC Resolved - We have verified that "unpublish -f" is back to working normally.
Mar 30, 01:14 UTC Monitoring - We have pushed and tested a fix for forced package unpublishes and are watching logs to verify.
Mar 29, 19:22 UTC Identified - A bug has been identified related to "npm unpublish -f", resulting in packages only partially being removed from the registry. This results in the packages continuing to be installable, but in users being unable to publish updates. We are currently working on a fix, and will keep this issue updated.
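
For reference, the command affected by this bug is the CLI's forced unpublish (the package name below is a placeholder):

    # force-remove a package you own from the registry
    $ npm unpublish some-package --force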

npm
increased rate of 503s in eu

Mar 28, 14:52 UTC Resolved - This incident has been resolved.
Mar 28, 14:40 UTC Monitoring - The original configuration has been restored and we are keeping an eye on things.
Mar 28, 14:02 UTC Identified - A networking issue caused many authentication requests to our EU servers to time out. We redirected this traffic to us_east for the time being and error rates have dropped.
Mar 28, 13:50 UTC Investigating - We are currently investigating this issue.

npm
replication lag on publications

Mar 25, 03:31 UTC Resolved - Everything looks clear.
Mar 24, 23:17 UTC Monitoring - The bug that was causing too many CouchDB document updates has been fixed, and changes should now be flowing through the system at a normal rate again. We're monitoring to make sure the backlog drains as expected.
Mar 24, 21:18 UTC Identified - We've identified the root cause of the replication delays. Publications have been accepted & stored, but replication on our CouchDB instances is behind because of unexpected load caused by a bug elsewhere that creates many document updates when certain package data is changed. We're working on a fix.
Mar 24, 20:28 UTC Investigating - We are investigating lag on replicating publications to our leaf nodes as well as downstream to registry mirrors.

npm
Delayed Package Publishes

Mar 17, 16:33 UTC Resolved - We've resolved the problem that was causing delayed package publishes. All previous and current new package publishes should now work as expected.
Mar 17, 15:32 UTC Monitoring - A fix has been implemented for the delayed package publishes and we are monitoring the results.
Mar 17, 15:21 UTC Identified - We are currently investigating an issue where newly published package versions are delayed and therefore are inaccessible for installation. They appear updated on the website and the CLI does not throw an error on publish, yet an `npm install` of the newly published package version may fail. We've identified the source of the issue and are actively fixing it.

npm
increased rate of 503s.

Mar 15, 16:27 UTC Resolved - An increased request rate caused by bot behavior led to slower than usual response times and increased error rates. It lasted from 15:00 to 15:45.

npm
www search 503

Feb 16, 16:00 UTC Resolved - We found some invalid search queries causing increased load on our search cluster. We are adding additional filtering and should be out of the woods.
Feb 16, 15:20 UTC Monitoring - We've drained a backlog of failing requests on our search cluster. Search seems to be working again. We are monitoring.
Feb 16, 14:35 UTC Investigating - We are investigating an increased rate of 503s returned from website search.

npm
some readmes are unavailable on npmjs.com

Jan 26, 16:50 UTC Resolved - README rendering of new package publications was lagging behind, due to a production database running out of space. We have corrected this issue, and will be adding monitoring to prevent similar issues in the future. The database that ran out of space stored only README information, which has now caught up. No data was lost.
Jan 26, 13:34 UTC Monitoring - The service is back up and running and we are monitoring.
Jan 26, 13:09 UTC Investigating - We are looking into an issue serving readmes on the website.

npm
Increased 503 rates for users in Los Angeles region

Jan 25, 13:44 UTC Resolved - Our monitoring indicated increased 503 rates originating from our CDN provider's LAX Point of Presence, from ~13:03 UTC until ~13:40 UTC.

npm
increased origin latency resulting in burst of 503s

Jan 13, 23:03 UTC Resolved - Briefly, between 2:54 and 2:58 PM (PDT), we saw an increase in origin latency, resulting in increased 503s from the registry; the problem has been resolved and we are investigating the underlying cause.

npm
website not responding

Jan 5, 05:23 UTC Resolved - All services back to normal.
Jan 5, 04:51 UTC Monitoring - We have successfully failed over to our secondary & removed the temporary workaround.
Jan 5, 04:32 UTC Identified - We indeed lost a database primary host. We are failing over to the secondary now. In the meantime, we've deployed a workaround that restores website functionality, albeit with no billing data. The website will temporarily report placeholder billing data; this workaround will be backed out shortly.
Jan 5, 04:24 UTC Investigating - Our website is not responding to requests right now. An instance providing a backing service has gone down. We are in the process of restarting and/or replacing the downed host.

npm
Website unavailable

Dec 25, 14:13 UTC Resolved - Website is back up and we're working on resolving the root cause of the issue.
Dec 25, 14:03 UTC Monitoring - We've restored Redis service and are monitoring website health.
Dec 25, 13:42 UTC Investigating - We are investigating website unavailability caused by a failure of one of our Redis services.

npm
Downloads API unavailability

Nov 22, 11:12 UTC Resolved - The Downloads API is now back online. The unavailability was caused by an outdated SSL certificate being deployed to the machine handling api.npmjs.org traffic, which, due to a misconfiguration, our monitoring did not catch in time.
Nov 22, 10:38 UTC Investigating - We are investigating unavailability of the downloads API (api.npmjs.org).

npm
increased incidence of 404s for recently-published packages

Nov 17, 23:39 UTC Resolved - We believe everything is back to normal.
Nov 17, 21:27 UTC Monitoring - The configuration fix seems to have addressed the problem with some tarballs 404ing shortly after publication. Continuing to monitor.
Nov 17, 18:29 UTC Investigating - We're looking into reports of recent publications 404ing after seeming to work initially. A misconfiguration has been identified & is being corrected, but we are investigating further.

npm
Elevated 503s from SIN POP

Nov 7, 12:05 UTC Resolved - The issue seems to be resolved.
Nov 7, 11:07 UTC Monitoring - Error rates have dropped to normal levels. We will continue monitoring.
Nov 7, 10:51 UTC Update - We made some changes to our CDN configuration to route more requests from SIN away from their previous backends. It looks like the 503 rate is dropping but the issue is not resolved.
Nov 7, 10:35 UTC Investigating - We are seeing elevated 503s from our CDN's Singapore POP.

npm
download counts API server replaced

Oct 28, 17:57 UTC Resolved - We have replaced and tuned up https://api.npmjs.org/downloads/point/last-month, such that weekly and monthly statistics will again return results.

npm
replicate.npmjs.com replaced

Oct 21, 21:53 UTC Resolved - Today we replaced replicate.npmjs.com, the server that provides a public replication endpoint for npm packages. The new server is running on faster hardware, which should help solve stability issues that we have seen over the past several weeks. Users consuming the CouchDB _changes feed should reset their sequence number.
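
As a sketch of what "resetting the sequence number" can look like for a follower of the CouchDB _changes feed (the file name is illustrative, and the database path should be whichever endpoint your follower already targets):

    # forget the sequence recorded against the old replicate.npmjs.com host
    $ rm -f last_seq.txt
    # resume the _changes feed from the beginning; CouchDB accepts since=0
    $ curl -s 'https://replicate.npmjs.com/registry/_changes?since=0&limit=10'

Followers that persist their last seen "seq" value should start again from 0 (or from the new server's current update_seq) rather than reusing a sequence number issued by the replaced host.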

npm
Connectivity disruption for users in certain areas

Oct 21, 20:47 UTC Resolved - Our systems are back to full operation. Local DNS issues are still lingering for some users.
Oct 21, 18:24 UTC Update - Mitigation has significantly improved internal registry error rates. Users may still be unable to connect to npm due to DNS issues local to them.
Oct 21, 16:40 UTC Update - For updates on how this global DNS incident is affecting our CDN, watch their status incident: https://status.fastly.com/incidents/50qkgsyvk9s4
Oct 21, 14:10 UTC Monitoring - We have rolled out our workaround for the DDoS on our DNS provider. Error rates have dropped. Continuing to monitor to ensure that all services are working again.
Oct 21, 13:03 UTC Update - We are acting on a plan to give ourselves backup DNS for our internal services. For more information on the DDoS that is causing this outage, you can read Dyn's status page: https://www.dynstatus.com/incidents/nlr4yrr162t8
Oct 21, 12:14 UTC Identified - Our DNS provider is currently experiencing a DDoS. We are investigating potential solutions on our part.
Oct 21, 12:10 UTC Investigating - We are investigating connectivity problems reported by our monitoring. We suspect that they might be disrupting service for users in certain geographical areas, but have no information about what the areas are exactly or what the root cause for the disruption is.

npm
skimdb.npmjs.com has been replaced

Oct 18, 20:56 UTC Resolved - Today we replaced skimdb.npmjs.com, the server that provides a public replication endpoint for un-scoped npm packages. The new server is running on new hardware, which should help solve stability issues that we have seen over the past several weeks. Users consuming the CouchDB _changes feed should reset their sequence number.

npm
Increased 503s for the npmjs.com website

Oct 10, 05:41 UTC Resolved - A blip on a package metadata box left zombie connections on a DB box, consuming needed connections & causing 503s on the website. The old connections have been flushed and the service appears to have recovered.
Oct 10, 04:47 UTC Monitoring - The server that failed came back online and we were able to reconstitute the services backing the npmjs.com website to mitigate the cause of the 503s. We are currently monitoring the situation.
Oct 10, 04:31 UTC Investigating - We're currently seeing an increased number of 503 responses from the npmjs.com website due to an unanticipated server failure. We are currently investigating the situation.

npm
replicate.npmjs.com outage

Oct 7, 23:17 UTC Resolved - Friday afternoon, between 2 PM and 4 PM PDT, we experienced a partial outage of the registry's scoped replication endpoint. All systems are back online, and we have added additional monitoring to detect this category of failure faster in the future.

npm
Increased 503 rate on tarball downloads served from Sydney

Sep 28, 05:45 UTC Resolved - The 503 rate has returned to nominal levels.
Sep 28, 05:24 UTC Monitoring - 503 rate decreased, continuing to monitor.
Sep 28, 04:59 UTC Investigating - Increased 503 rate on tarball downloads served from Sydney.

npm
Increased 503 rates for European users

Sep 14, 08:08 UTC Resolved - Our monitoring reports that error rates returned to base levels and both the website and the registry should be accessible to European users as usual.
Sep 14, 07:46 UTC Investigating - We're seeing increased 503 rates for both the website and the registry for European users. We suspect the root cause to be routing-related and are following up with our CDN provider.

npm
elevated rates of 500s from the registry

Sep 2, 18:49 UTC Resolved - 50x error rates for the registry appear to be back to normal.
Sep 2, 18:37 UTC Monitoring - We've identified some wedged processes & restarted them. Monitoring response rates to confirm the fix.
Sep 2, 18:12 UTC Investigating - We are investigating elevated rates of 503s and 504s on registry requests right now.

npm
Increased 503 rates for European users

Aug 28, 17:51 UTC Resolved - We've replaced the affected hardware and European users should be going through our European infrastructure again.
Aug 28, 15:05 UTC Monitoring - We redirected European traffic to other, healthy regions, as we continue to restore our European infrastructure.
Aug 28, 14:58 UTC Identified - We're currently experiencing a hardware failure in our European region.

npm
public-skimdb is offline while we investigate a configuration error

Aug 24, 23:35 UTC Resolved - skimdb.npmjs.com is back in service. The sequence number advertised by CouchDB has changed. Your follow scripts might need to be updated.
Aug 24, 21:08 UTC Monitoring - A brand new registry public replication point has been provisioned & is catching up. It'll be online at the usual URL & IP address once its database is caught up.
Aug 24, 20:22 UTC Investigating - It will be back online as soon as we have repaired the error.

npm
"fs" unpublished and restored

Aug 23, 20:34 UTC Resolved - For a few minutes today the package "fs" was unpublished from the registry in response to a user report that it was spam. It has been restored. This was a human error on my (@seldo's) part; I failed to properly follow our written internal process for checking if an unpublish is safe. My apologies to the users and builds we disrupted.
More detail: the "fs" package is a non-functional package. It simply logs the words "I am fs" and exits. There is no reason it should be included in any modules. However, something like 1000 packages *do* mistakenly depend on "fs", probably because they were trying to use the built-in Node module called "fs". Given this, we should have deprecated the module instead of unpublishing it, and this is what our existing process says we should do. If any of your modules depend on "fs", you can safely remove it from your dependencies, and you should. But if you don't, things will continue to work indefinitely.
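
For maintainers cleaning this up, removing the spurious dependency is a one-line change; the built-in Node.js fs module is always available via require('fs') and needs no entry in package.json:

    # drop "fs" from dependencies and update package.json
    $ npm uninstall fs --save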

npm
Increased 502 and 503 rates for the website

Aug 19, 10:20 UTC Resolved - At 7:55 UTC our monitoring alerted us to increased 502 and 503 rates for the website. We've determined the culprit to be a stuck Node.js process and restarted it at 8:12 UTC, which fixed the issue immediately. We will continue investigating the root cause of this problem.

npm
Intermittent website timeouts and 503s

Aug 15, 22:52 UTC Resolved - Starting at 22:06 UTC our monitoring started reporting an increased 503 and timeout rate originating from the website. We detected what we think was an accidental misuse of an endpoint that was creating extraordinary load. We have blocked the IP responsible and are contacting the user involved. This was resolved at 22:30 UTC.

npm
skimdb.npmjs.com IP address change & sequence number jump

Aug 6, 02:13 UTC Completed - Cutover is complete. DNS will take time to propagate, so the older hardware is still serving the unscoped public registry database and will continue to do so until Monday.
Aug 6, 02:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Aug 6, 01:07 UTC Scheduled - The hardware underlying the skimdb.npmjs.com instance turned out to be under-provisioned. To cope with observed load we're replacing it with a faster instance. When we do this cutover, we'll also take the opportunity to assign a permanent IP address to skimdb.npmjs.com to help make cutting over to hot spares much smoother in the future. You will observe two changes: skimdb.npmjs.com will begin resolving to 52.205.193.105, and the advertised sequence number of the registry database in the CouchDB changes feed will change.
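
Replica operators can tell when the cutover has reached them by checking DNS and the advertised sequence number; a rough sketch (assuming the database is exposed at /registry, as on the public skimdb endpoint):

    # confirm the new A record has propagated locally
    $ dig +short skimdb.npmjs.com
    # a GET on a CouchDB database returns its metadata, including update_seq
    $ curl -s https://skimdb.npmjs.com/registry | grep update_seq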

npm
skimdb.npmjs.com unavailable

Aug 5, 15:04 UTC Resolved - The new skimdb.npmjs.com should now be fully accessible.
Aug 5, 14:23 UTC Monitoring - We replaced the failing host and repointed the DNS record. Please note that since the underlying host for skimdb.npmjs.com was replaced, the sequence number has changed. If you're following skimdb.npmjs.com for replication purposes, it is advisable to restart your followers from sequence 0.
Aug 5, 11:39 UTC Identified - The underlying virtual host handling skimdb.npmjs.com traffic became unavailable. We are working on bringing up a replacement host.

npm
Intermittent website timeouts and 503s

Aug 3, 13:19 UTC Resolved - Starting at 11:13 UTC our monitoring started reporting an increased 503 and timeout rate originating from the website, caused by increased crawler traffic. While investigating the issue we discovered that some of our services weren't configured to load balance across database servers, which caused one of them to become overloaded due to the increased traffic. We reconfigured our services to alleviate that and saw error and timeout rates drop to base levels around 12:46 UTC.

npm
Increased 503 rates

Jul 26, 16:44 UTC Resolved - Our monitoring indicates that no more bursts were registered after 14:23 UTC. We are still investigating the root cause.
Jul 26, 14:45 UTC Monitoring - Our monitoring registered a short burst of increased 503 rates, starting at 14:03 UTC and ending at 14:23 UTC. We are monitoring the situation and increasing monitoring in order to determine the root cause.

npm
Increased 503 rates

Jul 25, 17:08 UTC Resolved - We confirmed that 503 rates have dropped down to base levels for an extended period of time. No root cause has been determined yet, but we will continue investigating.
Jul 25, 14:58 UTC Monitoring - 503 rates have dropped down. We are continuing to monitor the incident.
Jul 25, 14:05 UTC Investigating - We've confirmed that the situation is an ongoing, albeit intermittent, problem and are investigating.
Jul 25, 13:19 UTC Monitoring - Starting at 12:30 UTC we saw an increase in 503 responses from the registry. The situation seemed to have resolved at 13:10 UTC, but we will continue monitoring and investigating the reasons for this outage.

npm
Slew of 503s on npmjs.com

Jul 20, 18:10 UTC Resolved - The npmjs.com website was down for up to 15 minutes today, between 10:30 AM PT and 10:45 AM PT. This was due to a web spider that slammed the website with a 500% spike in requests over that span. We have identified the bot and banned its user agent.

npm
Reported 502s serving tarballs to some regions

Jul 6, 17:01 UTC Resolved - Issues with tarball serving should be resolved now. A writeup of the outage will be made available on the npmjs.org blog.
Jul 6, 14:54 UTC Identified - We have identified the root cause of the issue serving some tarballs and are currently executing a fix.
Jul 6, 13:47 UTC Update - We have confirmed the issue and are working on identification.
Jul 6, 13:38 UTC Investigating - We are currently investigating this issue.

npm
Some couchdb hosts are behind

Jun 16, 22:46 UTC Resolved - The CouchDB in question rebooted; registry package metadata should be up to date.
Jun 16, 22:38 UTC Investigating - Some registry requests will have stale data.

npm
Sydney EC2 outage disrupting registry traffic

Jun 5, 07:12 UTC Resolved - We were seeing an increased number of 503s in Sydney POPs as a result of an ongoing EC2 outage.

npm
Access cache issues preventing access to private modules

Jun 1, 00:10 UTC Resolved - Flushing the cache appears to have resolved the issue.
May 31, 21:45 UTC Monitoring - The cache has been flushed as of 2:44pm PST and we are monitoring for more issues. If you continue to encounter issues publishing or installing private packages, please contact support with the name(s) of the affected packages.

npm
Errors with authenticated actions

Apr 26, 01:12 UTC Resolved - This incident has been resolved.
Apr 26, 01:03 UTC Monitoring - Issues with www are now also resolved. Our apologies for the disruption.
Apr 26, 00:40 UTC Update - Registry installs and logins are now corrected. The www.npmjs.com website is still seeing some issues.
Apr 26, 00:34 UTC Update - Actions requiring authentication, including user profile pages, user logins, and publishes of packages, are affected. Engineers are working on the issue.
Apr 26, 00:22 UTC Identified - We have identified a configuration problem in a deploy and engineers are correcting it.
Apr 26, 00:12 UTC Investigating - We are investigating an elevated rate of 500 responses returned during private package installation.

npm
package tarballs availability in us-west

Apr 16, 02:00 UTC Resolved - Metadata has been cleaned and failed versions have been removed where possible.
Apr 15, 22:01 UTC Update - From 13:49 to 13:55 Pacific Time, one of our package servers was in a "wedged" state. Publishes that relied on this host during that period (about half of all publishes) appeared to succeed but in fact failed. As a result, approximately 50 packages have recently published versions that are missing their corresponding tarball file. We are identifying the failed publishes and removing them, so the affected packages will return to their last good version. Once cleanup is complete we will update this incident. If your package was affected, simply increment your patch version and re-publish. This will work even before cleanup is complete. Our apologies for the disruption.
Apr 15, 20:58 UTC Monitoring - All tarball hosts are back online and functioning. We're continuing to monitor.
Apr 15, 20:54 UTC Identified - One of our package tarball hosts is experiencing difficulties. We're failing over to a hot spare but some users in the western US might observe difficulties installing during the cutover.
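
For affected publishers, "increment your patch version and re-publish" amounts to the usual two commands, run in the package's working directory:

    # bump the patch version in package.json (and tag the commit if in a git repo)
    $ npm version patch
    # publish the new, intact version to the registry
    $ npm publish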

npm
garbled package metadata delivered to old npm clients

Apr 7, 03:36 UTC Resolved - We believe all package data with incorrect caching headers has been replaced in cache.
Apr 6, 20:15 UTC Monitoring - A change made to the npm registry on 2016-04-04 unexpectedly triggered build failures for some users of older versions of the npm CLI (including those packaged by LTS OS distributions). A fix that the npm operational team rolled out was incomplete, but has since been correctly deployed across our infrastructure. Please see https://github.com/npm/npm/issues/12196#issuecomment-206525005 for more details.

npm
Partially-failed publications earlier today

Mar 25, 22:57 UTC Resolved - Failed versions have been cleaned up from metadata for nearly all of the packages affected. We were unable to repair 23 newly-published packages and will be in contact with the publishers.
Mar 25, 21:41 UTC Monitoring - We are currently in the process of removing failed versions from package metadata. Most authors have already published newer versions to negate the severity of the failed publications. We appreciate your understanding and patience in this matter.
Mar 25, 19:43 UTC Identified - We have identified 74 packages (with 83 total publications) affected by the misconfiguration. We are diligently working to fix the metadata for the failed versions so that they can be republished by their authors.
Mar 25, 18:39 UTC Investigating - A bad configuration released at 17:39 UTC today caused package publications to fail partially, leaving an uninstallable version recorded in the package metadata. The visible symptom is that the registry will report that the version exists, but respond with a 404 when the tarball is requested. As of 18:15 UTC, the misconfiguration has been fixed, but we are investigating now to identify the packages affected & clean up the metadata.

npm
Increased publish error rates

Mar 25, 16:48 UTC Resolved - We've removed affected hardware from the publish path, and are working on adding additional capacity. Publishes should be back to normal. Our apologies.Mar 25, 16:43 UTC Identified - We're seeing increased error rates on publishes due to a hardware failure. We're working on replacing affected instances.

npm
Increased 502 rates for publishes

Mar 24, 19:20 UTC Resolved - We've confirmed that the issue is now fixed, and that publishes work across the board again. Our apologies.Mar 24, 19:00 UTC Monitoring - We've updated configuration on affected servers and are now monitoring the situation further.Mar 24, 18:40 UTC Identified - We've identified the problem to be a configuration file pointing at a server that no longer exists. We're reconfiguring our servers in order to fix the issue.Mar 24, 18:33 UTC Investigating - We're investigating increased rates of 502 responses to publishes from our servers in the US East datacenter.

npm
increased publish latency

Mar 24, 17:46 UTC Resolved - We've confirmed that the issue is now fixed, and that publishes are back to normal. Our apologies.Mar 24, 17:36 UTC Monitoring - We've fixed the misconfiguration and are now monitoring publishes.Mar 24, 17:29 UTC Identified - We've identified the issue to be caused by a misconfiguration of some of the hosts which handle storing tarballs. We are working on resolving that misconfiguration.Mar 24, 17:16 UTC Investigating - We have noticed an increase in publish delay and failure rate. We are investigating.

npm
Website and private package installation issues

Mar 24, 02:52 UTC Resolved - The failed hardware has been replaced and services have been switched back into redundant mode.Mar 24, 01:07 UTC Monitoring - All services have been failed over and service should be restored. We're double-checking our work before closing this incident.Mar 24, 00:51 UTC Update - We are failing over to redundant hardware.Mar 24, 00:43 UTC Identified - A hardware failure is causing website downtime and problems with installing private packages. Engineers are taking mitigation steps.

npm
increased 503 rate from our Australia-based tarball servers

Mar 17, 22:04 UTC Resolved - Error rates have returned to normal.Mar 17, 21:47 UTC Investigating - We are currently investigating this issue.

npm
Increased 504 rates for tarballs

Mar 17, 20:11 UTC Resolved - The servers in question have been removed from rotation. Access has been confirmed.Mar 17, 19:06 UTC Update - Some recently-published packages are temporarily unavailable due to capacity issues on older package servers. The older servers are being taken out of rotation, which will restore access to these packages.Mar 17, 18:58 UTC Identified - We are seeing increased error rates due to faulty tarball servers. We are working on removing those servers from rotation.

npm
elevated 503s for json from the SJC PoP

Mar 11, 16:16 UTC Resolved - This incident has been resolved.Mar 11, 15:24 UTC Investigating - We are currently investigating this issue.

npm
503s on installs

Mar 10, 21:35 UTC Resolved - The problem has been resolved.Mar 10, 21:17 UTC Investigating - There is some disruption in registry traffic. The registry team is looking into it.

npm
elevated 404 rates for scoped modules

Mar 9, 19:16 UTC Resolved - From 10:17am PST to 10:39am PST today the registry was returning 404s for some scoped module tarballs. We rolled out a configuration change to our CDN that caused an incorrect version format to be served in package.json files. The incident was resolved by rolling back to the previous configuration. 404 rates are now back to normal.

npm
Increased registry 503 rates

Mar 2, 04:33 UTC Resolved - We've now resolved this issue for all the packages. We will be publishing a post-mortem regarding this issue. Our apologies!Mar 2, 02:26 UTC Identified - We've resolved the issue for most packages and are currently working on full resolution.Mar 2, 02:15 UTC Investigating - We are seeing increased rates of 503 responses from the registry and we're investigating.

npm
Investigating 503 rate on registry

Feb 22, 22:16 UTC Resolved - The issues with the POP appear to be resolved.Feb 22, 21:32 UTC Update - The POP outage is affecting www traffic as well.Feb 22, 21:29 UTC Investigating - Investigations currently point at problems with one of our CDN's POPs.

npm
Difficulties with our CDN

Feb 18, 21:15 UTC Resolved - Registry 503 responses were elevated because of errors with one of the points of presence of our CDN provider. They identified & fixed the incident. (You can read more details in their incident report here: https://status.fastly.com/incidents/gjgcmfljjdpk .) Registry responses should be back to normal.Feb 18, 20:22 UTC Investigating - Registry services are responding with 503 errors at an elevated rate possibly because of issues with our CDN. We are investigating.

npm
Increased WWW 503 rates for some users

Feb 18, 03:13 UTC Resolved - We observed short bursts of increased 503 rates for some WWW users from around 10:30 UTC to 15:00 UTC today. This was identified as a misconfiguration and has since been fixed, with additional steps being taken to prevent this class of problem in the future. Our apologies.

npm
Publish failures

Feb 5, 00:33 UTC Resolved - Publications are back to normal.Feb 5, 00:25 UTC Monitoring - Replication has been repaired and all endpoints are now up to date with reality.Feb 5, 00:15 UTC Identified - There is an ongoing delay with replication of new package publishes. Publishes are successful but are not visible everywhere. Engineers are working on the issue and we will update shortly.Feb 4, 23:43 UTC Monitoring - Publications were failing because of a bug in our new publication flow that stacked up open connections to the registry write primary. We've rolled back to earlier code that is a little slower but doesn't leak connections while we debug the problem in our testing environment. Publications should now be succeeding.Feb 4, 23:09 UTC Investigating - We are investigating failing writes to the registry.

npm
503s on www and auth failures on CLI

Jan 30, 08:10 UTC Resolved - Our authentication services were briefly overloaded by an external event, leading to auth failures on package pages and the command line. Monitoring detected the incident and it was resolved after approximately 30 minutes of instability.

npm
Elevated 503 rates for users in Europe

Jan 28, 23:16 UTC Resolved - The registry was returning elevated 503 rates to our users in Europe from 7:00 UTC to 13:00 UTC due to a misconfiguration of one of the package servers. Our apologies!

npm
Intermittent scoped module installation failures from US East

Jan 27, 17:31 UTC Resolved - The registry was returning elevated rates of 503s from a specific origin host that US East users were likely to be routed to. This was due to a misconfiguration that caused requests to an internal service to be rejected. This configuration has been corrected. 503 rates are back down to normal.

npm
Intermittent publish failures

Jan 22, 04:38 UTC Resolved - This incident has been resolved.Jan 22, 03:48 UTC Monitoring - Timeouts while publishing have been resolved, but we continue to monitor the situation.Jan 22, 03:06 UTC Investigating - We're investigating intermittent publish failures.

npm
Elevated 503 rates for users in Europe

Jan 17, 14:07 UTC Resolved - The configuration problem that caused this issue is now solved.Jan 17, 13:46 UTC Investigating - Our users in Europe are currently experiencing elevated 503 rates. We're investigating.

npm
elevated 503 and error rates on registry and website

Dec 31, 22:56 UTC Resolved - Registry and website services should now be back to normal.Dec 31, 22:27 UTC Update - Our CDN's points of presence are returning to normal operating status. The npm registry and website are also returning to normal responsiveness.Dec 31, 21:47 UTC Monitoring - We are seeing an elevated level of failures (503s and network timeouts) affecting all users of both the registry and the npm website. This issue is due to an ongoing degradation of service at our CDN. This issue is known to them and they are actively working to address it. We are monitoring the incident. You can monitor their status directly at their incident report: https://status.fastly.com/incidents/hbllcfkkmkpg

npm
Networking issues in AWS us-east-1

Nov 5, 23:10 UTC Resolved - This incident has been resolved.Nov 5, 23:05 UTC Investigating - We have identified some intermittent networking issues affecting our servers in Amazon's us-east-1 data center. Only a small subset of our capacity lives in us-east, but users on the east coast may be seeing degraded performance. We are monitoring.

npm
503s on www

Oct 21, 23:13 UTC Resolved - Our primary cache server for the website failed. The site is supposed to fail over to the spare cache server automatically, but due to a bug this didn't happen. Instead, we manually failed over to the spare server. The website was totally unavailable for 15 minutes from 15:47 to 16:02 Pacific Time. Our apologies!

npm
503s to European users

Oct 18, 16:11 UTC Resolved - Networking issues caused users in Europe to receive higher numbers of HTTP 503 responses (caused by network timeouts to servers in the US) from 15:31 to 16:02 GMT today. These issues are now resolved.

npm
Elevated 503 rates on WWW

Oct 16, 07:40 UTC Resolved - From 06:46 to 07:31 UTC the website was returning elevated rates of 503 errors. The issue was resolved by a configuration fix and the website is now operating correctly.

npm
Public replication endpoint under heavy load

Oct 2, 21:09 UTC Resolved - This migration is now complete and all is well.Oct 2, 20:15 UTC Monitoring - skimdb.npmjs.com is again operational. If your DNS is resolving to 52.23.180.84, then you are communicating with the new host. We're continuing to monitor.Oct 2, 19:54 UTC Update - The public skimdb registry database is briefly offline while we cut over to the newer hardware.Oct 2, 18:00 UTC Identified - We have identified that our public replication endpoint at skimdb.npmjs.com is under heavy load due to organic traffic growth. We are moving to bigger hardware in the next 24 hours, which will result in a brief interruption in replication but all followers should recover automatically with no need for intervention. We will update this incident once the migration is complete.
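
To confirm which skimdb host a replica is reaching, a quick DNS check with standard tooling is enough (a sketch):

    dig +short skimdb.npmjs.com    # 52.23.180.84 indicates the new host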

npm
Empty tarball responses

Sep 16, 16:29 UTC Resolved - Cache invalidation is complete.Sep 16, 15:55 UTC Monitoring - Cache invalidation is ongoing and should be complete within 15 minutes.Sep 16, 15:25 UTC Identified - A bug in code we rolled out to our canary server at 10:52 UTC today, which our automated checks hadn't caught, caused us to return empty tarball responses to some of our users. We rolled the deployment back at 14:29 UTC and are working on invalidating caches affected by this outage.

npm
Elevated 503 rates in Australia

Sep 14, 21:42 UTC Resolved - Network issues have been resolved. We continue to work on longer-term architectural changes to be more resilient to trans-pacific network problems.Sep 14, 02:19 UTC Identified - We are working with our CDN to address the issue with our Australian PoP. In the meantime, Australian users can temporarily bypass the local PoP and go directly to a US PoP by setting their hosts file to point registry.npmjs.com to 23.235.39.162 -- one of our US POP IPs. If you use this workaround, please remember to reset your hosts file once the incident is resolved, as PoP IPs change periodically.Sep 14, 00:19 UTC Investigating - We are seeing an elevated level of failures (503s and network timeouts) affecting users near our CDN PoP in Australia. We are actively investigating.
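
On Linux or macOS, the temporary override described in the Sep 14 update is a one-line addition to /etc/hosts (a sketch; remember to remove the line once the incident is resolved):

    echo "23.235.39.162 registry.npmjs.com" | sudo tee -a /etc/hosts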

npm
Older packages temporarily unavailable

Aug 5, 18:50 UTC Resolved - A security upgrade released yesterday accidentally prevented the registry from serving ~200 very old packages whose package data was incomplete. These packages combined account for less than 0.02% of registry downloads, so their errors did not trigger our monitoring overnight. Following user reports, we corrected the error this morning. The packages were unavailable for a period of ~14 hours, from 5pm PT 2015-08-04 to 7am PT 2015-08-05. Even though the absolute numbers involved are small, the absence of these packages severely disrupted some users and we apologize. We are putting additional auditing into place around releases to ensure this kind of accident cannot occur in the future.

npm
External replication endpoint is unavailable

Jul 31, 20:53 UTC Resolved - All views are now up to date. Please contact support@npmjs.com if you have any issues getting your mirrors back in sync. We have initiated some engineering work to make recovery from this kind of hardware failure much faster in the future.Jul 31, 16:49 UTC Update - The replication endpoint at skimdb.npmjs.com is now back up for most use-cases. Some views are still being populated.Jul 31, 04:03 UTC Monitoring - Restoring data is going to take several hours. Our apologies.Jul 31, 00:51 UTC Identified - A hardware failure has taken our external replication endpoint at skimdb.npmjs.com offline. Engineers are working to replace this endpoint.