Increased latency for API and website response times
Incident Report for Atlassian Bitbucket
Postmortem

SUMMARY

From 13:57 UTC on November 25th, 2020 to 19:50 UTC on November 27th, 2020, a portion of data synchronization within Atlassian systems was delayed by up to 54 hours, and a subset of real-time customer functionality was down for the first 15 hours of that window. The incident was caused by an outage of an AWS service that Atlassian cloud infrastructure depends on. Customers of Atlassian’s Cloud Platform observed the following impact:

Across multiple products, users experienced delays in completion of new sign-ups, user deletions, authentication and authorization policy changes, updating of search results, propagation of product-emitted triggers to Forge apps, and activity and in-app notifications; features behind a personalized rollout flag were served incorrectly, and newly signed-up users could not be at-mentioned. In addition, service was degraded for the following product capabilities:

  • Jira - Automation rules for Jira were delayed in being enacted, and activity details did not propagate accurately to Jira’s Your Work page and start.atlassian.com for the duration of the outage
  • Confluence - Search results and analytics functionality like page views were not updated; user permission changes were also lagging for the duration of the outage
  • Trello - Search results were not updated and user permission changes were lagging for the duration of the outage
  • Opsgenie - Logging out, user invites, and user access post-onboarding were delayed for the duration of the outage
  • Bitbucket - Delays in push and merge operations for the duration of the outage
  • Statuspage - User invites, new sign-up completion, and user permission changes were delayed for the duration of the outage

The incident was detected within 8 minutes by our automated monitoring systems. We mitigated the impact by redirecting our internal asynchronous communication traffic from the US East region to the US West region, which put our systems into a known good state. We restored all product functionality for customers within 15 hours; the total time to resolution, including clearing the backlog of data synchronization, was about 54 hours and 19 minutes.

ROOT CAUSE

The event was triggered by a significant 14-hour AWS outage (https://aws.amazon.com/fr/message/11201/) in the US East region. Atlassian’s Enterprise Service Bus (ESB) is the backbone for asynchronous communication between our services and systems. The ESB has a hard dependency on AWS Kinesis, which was affected by the AWS outage. As a result, a significant portion of the data flow within Atlassian systems was delayed or failed outright, because the data pipeline that carries messages triggered by user activity was down. This outage impacted customers across the globe.

TECHNICAL REASONS

Atlassian has many internal systems that perform follow-up actions after a user interacts with our products. Examples of such follow-up actions include propagating authentication and authorization policy updates, updating our search indexes, provisioning access for new users after sign-up, and triggering automation after a data update. All of these systems rely on being informed asynchronously, via our Enterprise Service Bus (ESB), about the action a user has taken or the data change that has occurred. The ESB in turn depends on AWS Kinesis, a data streaming platform that routes messages from producer systems to consumer systems, each of which subscribes to the subset of messages relevant to its follow-up functionality. A total outage of AWS Kinesis in one of our major geographic regions, US East, therefore became a significant outage for Atlassian: no information could be propagated within our systems via the ESB.
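The fan-out pattern described above can be modeled as a minimal in-process sketch (this is illustrative only, not Atlassian's actual implementation; all names are hypothetical): producers publish events to a bus, and each subscribed consumer receives its own copy to drive its follow-up action.

```python
# Minimal sketch of an event-bus fan-out: one bus, many consumers,
# each subscribed to the message types it cares about.
# Illustrative only; not Atlassian's actual implementation.
from collections import defaultdict


class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # event type -> handlers

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every interested consumer receives its own copy of the event.
        for handler in self._subscribers[event_type]:
            handler(payload)


bus = EventBus()
search_index, provisioned = [], []

# Two downstream systems react to the same user event independently.
bus.subscribe("user.signed_up", lambda e: search_index.append(e["user"]))
bus.subscribe("user.signed_up", lambda e: provisioned.append(e["user"]))

bus.publish("user.signed_up", {"user": "alice"})
```

The hard-dependency failure mode follows directly from this shape: if the bus itself is unavailable, every downstream follow-up action stalls at once, rather than any single product feature failing in isolation.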

REMEDIAL ACTIONS PLAN & NEXT STEPS

During the post-incident review, we identified enhancements to our technical architecture and resilience measures to counter failures of our Enterprise Service Bus and AWS Kinesis. Moving forward, to minimize the hard dependency on AWS Kinesis, we will implement automated migration of customer traffic to a Kinesis instance in another geographic region during an outage, along with better retention of data at key stages of our data flow so that data synchronization can be recovered after an outage.
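The two planned measures above can be sketched together in a hedged, simplified form (hypothetical names throughout; the real system's publish API and failover policy are not described in this report): traffic goes to the primary region, is automatically redirected to a standby region when the primary fails, and every record is retained locally so it can be replayed after an outage.

```python
# Illustrative sketch of automated regional failover with local retention,
# per the remediation plan. Hypothetical names; not the real system.
class RegionDown(Exception):
    pass


class Region:
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.received = name, healthy, []

    def put_record(self, record):
        if not self.healthy:
            raise RegionDown(self.name)
        self.received.append(record)


class FailoverPublisher:
    def __init__(self, primary, standby):
        self.regions = [primary, standby]
        self.retained = []  # copies kept for replay after an outage

    def publish(self, record):
        self.retained.append(record)  # retain before attempting delivery
        for region in self.regions:
            try:
                region.put_record(record)
                return region.name  # delivered; report which region served it
            except RegionDown:
                continue  # fall through to the standby region
        raise RegionDown("all regions")


us_east = Region("us-east-1", healthy=False)  # simulate the US East outage
us_west = Region("us-west-2")
publisher = FailoverPublisher(us_east, us_west)
served_by = publisher.publish({"event": "user.signed_up"})
```

Retaining before delivery means that even if every region rejects the record, a copy survives locally, which is what allows a synchronization backlog to be cleared once a region recovers.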

Posted Dec 08, 2020 - 23:14 UTC

Resolved
Between 4:39 PM and 5:45 PM UTC, we experienced increased latency and errors for Atlassian Bitbucket's website and APIs. Pipelines and webhooks also experienced errors during the incident window.

The issue was caused by a failure in our analytics processing infrastructure. It has been resolved, and all Bitbucket services are now operating normally.
Posted Nov 26, 2020 - 19:17 UTC
Monitoring
We have identified the root cause of the increased website and API latency and have mitigated the problem.

Operations such as Pipelines, Webhooks, and creating and merging pull requests are working again.

We are now monitoring closely.
Posted Nov 26, 2020 - 18:41 UTC
Identified
We are investigating cases of degraded performance for Bitbucket's website and APIs. We narrowed down the issue to our analytics event processing infrastructure. We are working on a fix and will provide more details within the next hour.
Posted Nov 26, 2020 - 17:20 UTC
This incident affected: Marketplace Apps (AWS CodeDeploy App) and Website, API, SSH, Authentication and user management, Git via HTTPS, Webhooks, Pipelines, Signup.