How to Export Commit and Pull Request Data from Bitbucket to CSV

June 18, 2024
#Reporting#How To#Bitbucket
13 min

As a universal file type, CSV is a go-to format for integrations between applications. It lets you transfer large amounts of data across systems, blend it, and build custom reports. To export commit and pull request data from Bitbucket Data Center to a CSV file, you can use the Awesome Graphs for Bitbucket app.

In this article, we’ll show you two ways to use the app to export engineering data to CSV for further integration, organization, and processing in analytics tools and custom solutions.

What commit and pull request data you will get

The methods described below produce one of two kinds of CSV files, depending on whether you export data about commits or pull requests.

In the case of commit data, you’ll get a list of commits with the following details:

  • creation date
  • author’s email, name, and username
  • repository and project name
  • commit hash
  • whether it is a merge commit or not
  • number of lines of code added and deleted

commits export

The resulting CSV with a list of pull requests will contain the following information:

  • pull request creation and last updated date 
  • author’s name and username
  • repository and project name
  • PR state and its ID
  • reviewers’ names and usernames
  • PR cycle time and its phases: time to open, pickup time, review time, time to resolve

pull request (PR) export

Exporting from the People page

You can export raw commit and pull request data to CSV directly from Bitbucket. When you click All users in the People dropdown menu in the header, you’ll get to the People page with a global overview of developers’ activity based on the number of commits or pull requests.

At the top-right corner, you’ll notice the Export menu, where you can choose CSV.

Export Commit and Pull Request Data from Bitbucket

By default, the page shows developers’ contributions made within the last month, but you can choose a longer period of up to a quarter. The filtering applies not only to the GUI but also to the exported data, so if you don’t change the timespan, you’ll get a list of commits or pull requests for the last 30 days.

Exporting via the REST API resources

The Awesome Graphs REST API allows you to retrieve and export commit and pull request data to a CSV file at the global, project, repository, and user levels. This functionality is designed to automate processes you used to handle manually and to streamline existing workflows.

To find the REST API resources, you can go to the in-app documentation by choosing Export → REST API on the People page (accessible to Awesome Graphs’ users), or to our documentation website.

In this article, we’ll show you two examples of these resources and how they work: one for exporting commits and another for pull requests. The rest of the resources follow the same model, so you can apply these principles to get the data you need.

Export commits to CSV

This resource exports a list of commits with their details from all Bitbucket projects and repositories to a CSV file.

Here is the curl request example:

curl -X GET -u username:password "https://bitbucket.your-company-name.com/rest/awesome-graphs-api/latest/commits/export/csv"

Alternatively, you can use any REST API client like Postman or put the URL directly into your browser’s address bar (you need to be authenticated in Bitbucket in this browser), and you’ll get a generated CSV file.
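If you prefer scripting the export, a minimal Python sketch along these lines would fetch the same file (the endpoint is the one shown above; the output file name and plain-text credentials are just an illustration):

import requests

bitbucket_url = 'https://bitbucket.your-company-name.com'
export_url = bitbucket_url + '/rest/awesome-graphs-api/latest/commits/export/csv'

# request the generated CSV and stream it to a local file
response = requests.get(export_url, auth=('username', 'password'), stream=True)
response.raise_for_status()

with open('commits_export.csv', 'wb') as csv_file:
    for chunk in response.iter_content(chunk_size=8192):
        csv_file.write(chunk)

print('Saved commits_export.csv')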

By default, it exports the data for the last 30 days. However, you can set a timeframe of up to one year (366 days) with the sinceDate / untilDate parameters:

curl -X GET -u username:password "https://bitbucket.your-company-name.com/rest/awesome-graphs-api/latest/commits/export/csv?sinceDate=2024-05-01&untilDate=2024-05-15"

For commit resources, you can also use query parameters such as merges to filter merge/non-merge commits or order to specify the order in which commits are returned.
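For example, a request along these lines would exclude merge commits from the export. Note that the merges value shown here mirrors the convention of Bitbucket’s own commits resource and is an assumption, so check the in-app documentation for the exact accepted values:

curl -X GET -u username:password "https://bitbucket.your-company-name.com/rest/awesome-graphs-api/latest/commits/export/csv?merges=exclude&sinceDate=2024-05-01&untilDate=2024-05-15"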

You can read more about this resource and its parameters in our documentation, as well as find other resources for exporting commit data on project, repository, and user levels.

Export pull requests to CSV

The pull request resources work similarly, so to export a list of pull requests with their details from all Bitbucket projects and repositories to a CSV file, you need to make the following curl request:

curl -X GET -u username:password "https://bitbucket.your-company-name.com/rest/awesome-graphs-api/latest/pull-requests/export/csv"

The sinceDate / untilDate parameters can also be applied to set a timespan of up to a year, but here you have an additional parameter, dateType, which lets you choose either the creation date or the date of the last update as the filtering criterion. So, if you set dateType to created, only the pull requests created during the stated period will be returned, while dateType set to updated will include the pull requests that were updated within the time frame.

Another pull request specific parameter is state, which allows you to filter the response to only include open, merged, or declined pull requests.

For example, the following request will return a list of open pull requests, which were updated between May 1st and May 15th:

curl -X GET -u username:password "https://bitbucket.your-company-name.com/rest/awesome-graphs-api/latest/pull-requests/export/csv?dateType=updated&state=open&sinceDate=2024-05-01&untilDate=2024-05-15"

Refer to our documentation for more details about this resource and its parameters. Additionally, you can find information on other resources for exporting pull request data at the project, repository, and user levels.

Integrate intelligently

While CSV is supported by many systems and convenient to work with, it is not the only integration option the Awesome Graphs for Bitbucket app offers. Using the REST API, you can make data flow between applications, automate workflows, and get custom reports tailored to your needs.

For Awesome Graphs Data Center clients, we offer a Premium Support subscription aimed at enhancing efficiency and saving time while working with our app and its REST API resources. It includes personalized assistance from our experts who will create scripts in Python, Bash, Java, or Kotlin to interact with our REST APIs and build reports tailored to your specific requirements. By using these advanced capabilities, our clients optimize their workflows, gain deeper insights into their engineering activities, and drive informed decision-making.

How to Get the Number of Commits and Lines of Code in Pull Requests

May 16, 2024
#How To#Bitbucket#Reporting
10 min

Counting lines of code manually for each pull request to analyze your current Bitbucket database could take years. As a solution, we suggest automating this process with Awesome Graphs for Bitbucket and Python. This article will show you how to get the number of commits and lines of code in pull requests from Bitbucket Data Center and build a pull request size report on the repository level.

Why pull request size matters

According to research conducted by a Cisco Systems programming team to determine best practices for code review, a pull request should include no more than 200 to 400 lines of code. Maintaining the size of pull requests within these limits is helpful for:

  • speeding up the review process 
  • enhancing code readability
  • keeping reviewers focused and attentive to details, as this amount of information is optimal for the brain to process effectively at a time
  • contributing to more thorough and efficient reviews. 

All this, in turn, enhances overall code quality and streamlines delivery.

What pull request report you will get

With the help of Awesome Graphs for Bitbucket and Python, you can get a CSV file containing a list of pull requests created during the specified period, along with the number of commits and lines of code added and deleted in them. The report will also contain the authors’ emails and the date of creation and closure of each pull request.

how to get the number of commits and lines of code in pull requests from Bitbucket Data Center and build a pull request size report

How to get a pull request size report

To get the report described above, we’ll run the following script, which makes requests to the REST API and does all the calculations and aggregation for us.

import requests
import csv
import sys

bitbucket_url = sys.argv[1]
login = sys.argv[2]
password = sys.argv[3]
project = sys.argv[4]
repository = sys.argv[5]
since = sys.argv[6]
until = sys.argv[7]

get_prs_url = bitbucket_url + '/rest/awesome-graphs-api/latest/projects/' + project + '/repos/' + repository \
            + '/pull-requests'

s = requests.Session()
s.auth = (login, password)


class PullRequest:

    def __init__(self, title, pr_id, author, created, closed):
        self.title = title
        self.pr_id = pr_id
        self.author = author
        self.created = created
        self.closed = closed


class PullRequestWithCommits:

    def __init__(self, title, pr_id, author, created, closed, commits, loc_added, loc_deleted):
        self.title = title
        self.pr_id = pr_id
        self.author = author
        self.created = created
        self.closed = closed
        self.commits = commits
        self.loc_added = loc_added
        self.loc_deleted = loc_deleted


def get_pull_requests():

    pull_request_list = []

    is_last_page = False

    while not is_last_page:

        response = s.get(get_prs_url, params={'start': len(pull_request_list), 'limit': 1000,
                                              'sinceDate': since, 'untilDate': until}).json()

        for pr_details in response['values']:

            title = pr_details['title']
            pr_id = pr_details['id']
            author = pr_details['author']['user']['emailAddress']
            created = pr_details['createdDate']
            # closedDate may be missing for pull requests that are still open
            closed = pr_details.get('closedDate')

            pull_request_list.append(PullRequest(title, pr_id, author, created, closed))

        is_last_page = response['isLastPage']

    return pull_request_list


def get_commit_statistics(pull_request_list):

    pr_list_with_commits = []

    for pull_request in pull_request_list:

        print('Processing Pull Request', pull_request.pr_id)

        commit_ids = []

        is_last_page = False

        while not is_last_page:

            url = bitbucket_url + '/rest/api/latest/projects/' + project + '/repos/' + repository \
                + '/pull-requests/' + str(pull_request.pr_id) + '/commits'
            response = s.get(url, params={'start': len(commit_ids), 'limit': 25}).json()

            for commit in response['values']:
                commit_ids.append(commit['id'])

            is_last_page = response['isLastPage']

        commits = 0
        loc_added = 0
        loc_deleted = 0

        for commit_id in commit_ids:

            commits += 1

            url = bitbucket_url + '/rest/awesome-graphs-api/latest/projects/' + project + '/repos/' + repository \
                + '/commits/' + commit_id
            response = s.get(url).json()

            # skip commits for which the app returns an error instead of statistics
            if 'errors' not in response:
                loc_added += response['linesOfCode']['added']
                loc_deleted += response['linesOfCode']['deleted']

        pr_list_with_commits.append(PullRequestWithCommits(pull_request.title, pull_request.pr_id, pull_request.author,
                                                           pull_request.created, pull_request.closed, commits,
                                                           loc_added, loc_deleted))

    return pr_list_with_commits


with open('{}_{}_pr_size_stats_{}_{}.csv'.format(project, repository, since, until), mode='a', newline='') as report_file:

    report_writer = csv.writer(report_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    report_writer.writerow(['title', 'id', 'author', 'created', 'closed', 'commits', 'loc_added', 'loc_deleted'])

    for pr in get_commit_statistics(get_pull_requests()):
        report_writer.writerow([pr.title, pr.pr_id, pr.author, pr.created, pr.closed, pr.commits, pr.loc_added, pr.loc_deleted])

print('The resulting CSV file is saved to the current folder.')

To make this script work, you’ll need to install the requests module in advance; the csv and sys modules are available in Python out of the box. Then, you need to pass seven arguments to the script when executing it: the URL of your Bitbucket instance, login, password, project key, repository name, since date, and until date. Here’s an example:

py script.py https://bitbucket.your-company-name.com login password PRKEY repo-name 2023-11-30 2024-02-01

At the end of the execution, the resulting file will be saved to the same folder next to the script.
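If you want a quick summary without opening the file in a spreadsheet, an optional follow-up snippet like the one below (a sketch, not part of the report script) reads the generated CSV and prints the average pull request size, using the column names written by the script above:

import csv
import sys

# path to the CSV produced by the report script, e.g. PRKEY_repo-name_pr_size_stats_2023-11-30_2024-02-01.csv
report_path = sys.argv[1]

sizes = []
with open(report_path, newline='') as report_file:
    for row in csv.DictReader(report_file):
        # pull request size = lines of code added plus lines of code deleted
        sizes.append(int(row['loc_added']) + int(row['loc_deleted']))

if sizes:
    print('Pull requests analyzed:', len(sizes))
    print('Average size (lines changed):', round(sum(sizes) / len(sizes), 1))
    print('Pull requests over 400 lines:', sum(1 for size in sizes if size > 400))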

Want more?

The Awesome Graphs for Bitbucket app and its REST API, in particular, allow you to get much more than described here, and we want to help you get the most out of it. Exclusively for our Data Center clients, we offer a Premium Support Subscription where our tech team will help you write custom scripts for our REST API to get the data you need. Contact us if you have an issue you’d like to solve, and we will assist you.

Here are a few how-tos that may be of help right now:

Related posts

    Pull Request Analytics: How to Get Pull Request Cycle Time / Lead Time for Bitbucket

    May 2, 2024
    #How To#Bitbucket#Reporting
    14 min

    In this article, we’ll describe two ways to get pull request Cycle Time / Lead Time for Bitbucket Data Center using the Awesome Graphs for Bitbucket app.

    What Pull Request Cycle Time is and why it is important

    Pull Request Cycle Time / Lead Time is a powerful metric to look at when evaluating engineering teams’ productivity. It helps track the development process from the moment the first code is written in a developer’s IDE up to the time it’s deployed to production.

    Pull Request Analytics

    Please note that we define Cycle Time / Lead Time as the time between the developer’s first commit and the time it’s merged and will refer to it as Cycle Time throughout the article.

    The Cycle Time is commonly composed of four metrics:

    • Time to open (from the first commit to open)
    • Pickup time (from open to the first non-author comment)
    • Review time (from the first comment to approval)
    • Time to resolve (from approved to merge or decline)

    With this information, you can get an unbiased view of the engineering department’s speed and capacity and find the points to drive improvement. It can also be an indicator of business success, as controlling the pull request Cycle Time can increase output and efficiency and deliver products faster.

    How to find Cycle Time in Bitbucket

    Using Awesome Graphs for Bitbucket, you can track the average Cycle Time of pull requests at the project and repository levels. Additionally, you can find the specific Cycle Time of a single pull request.

    For each Bitbucket project and repository, the app displays the average time it takes to resolve pull requests as well as the breakdown of the average time by stage. You can also configure the report to see the average Cycle Time of a particular team or user in a chosen project or repo.

    pull request cycle time report in Bitbucket

    Below the report, you can find the list of all pull requests included in it, along with their Cycle Time.

    find Cycle Time in Bitbucket

    Clicking on a value in the Cycle Time column against a particular pull request allows you to see the breakdown of the Cycle Time and analyze each metric.

    cycle time metrics

    How to export Time to Open, Time to Review, Time to Approve, and Time to Merge metrics

    Another way to track Cycle Time is to export pull request data from Bitbucket and build a custom report. You can get all the necessary statistics from the Awesome Graphs for Bitbucket app in two different ways: via REST API and into a CSV file from Bitbucket UI.

    Export Cycle Time data from Bitbucket UI

    Using Awesome Graphs, you can obtain data directly from Bitbucket. Here is an example of a file you’ll get:  

    How to export Cycle time, Time to Open, Time to Review, Time to Approve, and Time to Merge metrics from Bitbucket

    To get this report, go to the People page and select All users from the People dropdown menu in the header. 

    how to find Awesome Graphs People page to export commit and pull request data from Bitbucket

    By default, you’ll see an overview of developers’ activity based on the number of commits. To check pull request contributions, select pull requests as the activity type in the configuration. It not only filters the data visible on the GUI but also determines what is exported, ensuring you receive a CSV file containing the needed data. To export pull requests, choose the Export menu at the top-right corner of the page and select CSV.

    Export Cycle Time data from Bitbucket UI

    Get Time to Open, Pickup time, Review time, and Time to resolve via REST API

    Another option to get Cycle Time metrics from Bitbucket is to use Awesome Graphs REST API. It allows exporting pull request statistics on user, repository, project, and global levels in JSON format as well as in a preformatted CSV file.

    To find all available REST API resources, visit our documentation website or select REST API from the Export menu at the top-right corner of the People page in Bitbucket.

    Get Time to Open, Pickup time, Review time, and Time to resolve via REST API

    To retrieve Cycle Time and its four phases in JSON format, simply add the withCycleTime=true parameter to any of the Get pull requests resources.

    Here is an example of a curl request to export pull requests of a specific repository:

    curl -X GET -u username:password "https://%bitbucket-host%/rest/awesome-graphs-api/latest/projects/{projectKey}/repos/{repositorySlug}/pull-requests?withCycleTime=true"

    When exporting to a CSV file, you don’t need to add extra parameters to get the pull request Cycle Time. An example of a curl request to retrieve data on the user level is as follows:

    curl -X GET -u username:password "https://%bitbucket-host%/rest/awesome-graphs-api/latest/users/{userSlug}/pull-requests/export/csv"

    After running the request, the report you’ll get will look the same as the one shown above in the section on exporting directly from the Bitbucket UI. By default, it exports the data for the last 30 days. However, you can set a timeframe of up to one year for the exported data with the sinceDate / untilDate parameters.

    How to build a Cycle Time report

    After you have generated a CSV file, you can process it in analytics tools such as Tableau, Power BI, Qlik, or Looker, visualize the data on your Confluence pages with the Table Filter, Charts & Spreadsheets for Confluence app, or integrate it into any custom solution of your choice for further analysis.
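    If you’d like to sanity-check the export before loading it into a BI tool, a short Python sketch like the one below can compute an average straight from the CSV. The file name and the cycleTime column name are assumptions made for illustration, so inspect the headers of your actual export and adjust them:

    import csv

    # the file name below is hypothetical; point it at the CSV you exported
    with open('pull_requests_export.csv', newline='') as export_file:
        rows = list(csv.DictReader(export_file))

    print('Columns in the export:', list(rows[0].keys()) if rows else 'file is empty')

    cycle_times = []
    for row in rows:
        value = row.get('cycleTime')  # hypothetical column name; replace with the real header
        try:
            cycle_times.append(float(value))
        except (TypeError, ValueError):
            continue  # skip empty or non-numeric values

    if cycle_times:
        print('Average cycle time:', sum(cycle_times) / len(cycle_times))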

    Cycle Time report in Confluence

    An example of the data visualized with Table Filter, Charts & Spreadsheets for Confluence.

    In this article, you will find more details on how to build a Cycle Time report in Confluence using the Table Filter, Charts & Spreadsheets for Confluence app.

    By measuring Cycle Time, you can:

    • See objectively whether the development process is getting faster or slower.
    • Analyze the correlation of the specific metrics with the overall cycle time.
    • Compare the results of the particular teams and users within the organization or across the industry.

    With Awesome Graphs for Bitbucket, you can gain more visibility into the development process and facilitate project management. Using the app as a data provider will help you build tailored reports and address your particular needs. Plus, exclusively for our Data Center clients, we now offer a Premium Support subscription where our technical team, which has in-depth knowledge of Bitbucket data, is on hand to write custom scripts and swiftly resolve your specific use cases.

    Feel free to contact us if you’d like to discover whether our app can address your specific needs.

    Related posts

      How to Count Lines of Code in Bitbucket to Decide what SonarQube License You Need

      April 30, 2024
      #Reporting#How To#Bitbucket
      12 min

      SonarQube is a popular automatic code review tool used to detect bugs and vulnerabilities in the source code through static analysis. While the Community Edition is free and open-source, the Developer, Enterprise, and Data Center editions are priced per instance per year and based on the number of lines of code (LOC). So if you are considering buying a license for SonarQube, you need to count lines of code in Bitbucket for all projects and repositories you want to analyze.

      In this post, we’ll show how you can count LOC for your Bitbucket Data Center instance, as well as for each project or repository using the Awesome Graphs’ REST API resources and Python.


      Awesome Graphs for Bitbucket is a data-providing and reporting tool that allows you to export commits, lines of code, and pull requests statistics on global, project, repository, and user levels. It also offers out-of-the-box graphs and reports to deliver instant answers to your questions.


      How to count lines of code for the whole Bitbucket instance

      Getting lines of code statistics for the whole Bitbucket instance is pretty straightforward and will only require making one call to the Awesome Graphs’ REST API. Here is an example of the curl command:

      curl -X GET -u username:password "https://bitbucket.your-company-name.com/rest/awesome-graphs-api/latest/commits/statistics"

      And the response will look like this:

      {
          "linesOfCode":{
              "added":5958278,
              "deleted":2970874
          },
          "commits":61387
      }
      

      It returns the number of lines added and deleted as well as the total number of commits in all Bitbucket projects and repositories. To get the total LOC, you simply need to subtract the number of deleted lines from the number of added lines.

      Please note that blank lines are also counted in lines of code statistics in this and the following cases.
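      If you are calling this resource from a script rather than curl, the subtraction looks like this (a minimal sketch reusing the documented response fields; the URL and credentials are placeholders):

      import requests

      statistics_url = 'https://bitbucket.your-company-name.com/rest/awesome-graphs-api/latest/commits/statistics'

      response = requests.get(statistics_url, auth=('username', 'password')).json()

      # total LOC = lines added minus lines deleted, as returned by the statistics resource
      total_loc = response['linesOfCode']['added'] - response['linesOfCode']['deleted']
      print('Total lines of code:', total_loc)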

      How to count lines of code for each project in the instance

      You can also use the REST API resource to get the LOC for a particular project, but doing this for each project in your Bitbucket instance would definitely take a while. That’s why we are going to automate this process with a simple Python script that runs through all of your projects, counts the total LOC for each one, and then saves the list of project keys with their total LOC to a CSV file.

      The resulting CSV will look like this:

      count lines of code for each project in the whole Bitbucket instance

      Here is the script to get it:

      import requests
      import csv
      import sys
      
      bitbucket_url = sys.argv[1]
      bb_api_url = bitbucket_url + '/rest/api/latest'
      ag_api_url = bitbucket_url + '/rest/awesome-graphs-api/latest'
      
      s = requests.Session()
      s.auth = (sys.argv[2], sys.argv[3])
      
      def get_project_keys():
      
          projects = list()
      
          is_last_page = False
      
          while not is_last_page:
              request_url = bb_api_url + '/projects'
              response = s.get(request_url, params={'start': len(projects), 'limit': 25}).json()
      
              for project in response['values']:
                  projects.append(project['key'])
              is_last_page = response['isLastPage']
      
          return projects
      
      def get_total_loc(project_key):
      
          url = ag_api_url + '/projects/' + project_key + '/commits/statistics'
          response = s.get(url).json()
          total_loc = response['linesOfCode']['added'] - response['linesOfCode']['deleted']
      
          return total_loc
      
      
      with open('total_loc_per_project.csv', mode='a', newline='') as report_file:
      
          report_writer = csv.writer(report_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
          report_writer.writerow(['project_key', 'total_loc'])
      
          for project_key in get_project_keys():
              print('Processing project', project_key)
              report_writer.writerow([project_key, get_total_loc(project_key)])
      

      To make this script work, you’ll need to install the requests module in advance; the csv and sys modules are available in Python out of the box. You need to pass three arguments to the script when executing it: the URL of your Bitbucket instance, login, and password. Here’s an example:

      py script.py https://bitbucket.your-company-name.com login password

      The resulting file will be saved in the same folder as the script after the execution.

      How to count lines of code for each repository in the project

      If you need statistics on a particular repository, you can make a single call to the Awesome Graphs REST API. If you need to get the total LOC for each repository in a specified project, a simple Python script will help again. Here, the resulting CSV file will include the list of repo slugs in the specified project and their LOC totals:

      count lines of code for each repository in Bitbucket project

      The script that will make all calculations:

      import requests
      import csv
      import sys
      
      bitbucket_url = sys.argv[1]
      bb_api_url = bitbucket_url + '/rest/api/latest'
      ag_api_url = bitbucket_url + '/rest/awesome-graphs-api/latest'
      
      s = requests.Session()
      s.auth = (sys.argv[2], sys.argv[3])
      
      project_key = sys.argv[4]
      
      
      def get_repos(project_key):
          
          repos = list()
      
          is_last_page = False
      
          while not is_last_page:
              request_url = bb_api_url + '/projects/' + project_key + '/repos'
              response = s.get(request_url, params={'start': len(repos), 'limit': 25}).json()
              for repo in response['values']:
                  repos.append(repo['slug'])
              is_last_page =  response['isLastPage']
      
          return repos
      
      
      def get_total_loc(repo_slug):
      
          url = ag_api_url + '/projects/' + project_key + \
                '/repos/' + repo_slug + '/commits/statistics'
          response = s.get(url).json()
          total_loc = response['linesOfCode']['added'] - response['linesOfCode']['deleted']
      
          return total_loc
      
      
      with open('total_loc_per_repo.csv', mode='a', newline='') as report_file:
          report_writer = csv.writer(report_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
          report_writer.writerow(['repo_slug', 'total_loc'])
      
          for repo_slug in get_repos(project_key):
              print('Processing repository', repo_slug)
              report_writer.writerow([repo_slug, get_total_loc(repo_slug)])
      

      To make it work, you need to pass the URL of your Bitbucket, login, password, and project key. Here’s an example:

      py script.py https://bitbucket.your-company-name.com login password PROJECTKEY

      Once the execution is finished, the resulting file will be saved in the same folder as the script.

      Want to learn more?

      We should note that the total LOC we get in each case shows the number of lines added minus lines deleted for all branches. Due to these peculiarities, some repos may have negative LOC numbers, so it might be useful to look at the LOC for a default branch and compare it to the LOC for all branches.

      If you would like to learn how to get this information with the help of Awesome Graphs for Bitbucket, write here in the comments or create a request in our Help Center, and we’ll assist you.

      If you are also looking to search for commits in Bitbucket, our blog post suggests three different ways to do this.

      How to Find Pull Request Metrics in Bitbucket

      December 21, 2023
      #How To#Project management#Bitbucket#Reporting#Analytics
      9 min

      Measuring pull request metrics in Bitbucket can be tricky and time-consuming, as it encompasses a wide range of data scattered around different repositories, teams, and projects. This article will explore what PR metrics are worth tracking, why, and how to find them using an on-prem app called Awesome Graphs for Bitbucket.


      Awesome Graphs for Bitbucket is an app for engineering managers and their teams that transforms invisible developers’ activities into easy-to-understand graphs and reports. It helps over 1,600 customers, including Apple, Oracle, and MasterCard, manage engineering productivity, remove process blockers, and improve delivery.


      Let’s dive into tips for measuring pull request data.

      Number of Pull Requests

      Tracking the number of pull requests gives an overview of the development activity within a project. However, simply counting pull requests provides superficial information and does not reflect actual team progress. That’s why it is important to look at the number of created, merged, and declined PRs, as well as their ratio.

      Creating more pull requests may indicate more active development and a fast-paced project. However, it could also mean a higher cycle time if no reviewers are available, increased code complexity, or an irrational distribution of efforts if not managed effectively.

      Therefore, it is better to look at this metric together with the number of merged pull requests as they generally indicate progress and contributions accepted into the project. They are also one of the indicators of an effective code review process.

      Using Awesome Graphs for Bitbucket, visualized statistics of these metrics and their ratio can be found in the Created vs Merged Pull Requests Report. It displays how many pull requests were created and merged during a particular time.

      pull requests in bitbucket

      One more metric to look at is the number of declined pull requests. While a certain number of them is normal practice due to bugs or outdated changes, high rates might indicate misalignment with project goals, a lack of code quality, or poor communication.

      The Awesome Graphs’ Pie Chart Report allows you to see the number of opened, merged and declined pull requests and their ratio to assess the health of the development process.

      Number of Pull Requests

      This report can also be reconfigured to show the number of PRs merged without reviewers, as code merged without review could lead to poor quality, vulnerabilities, or misalignment with project standards, impacting the overall quality of the software.

      bitbucket pull request metrics

      Pull Request Cycle Time

      One more metric that is worth tracking is cycle time. It provides insights into how quickly PRs are reviewed, approved, and merged and helps evaluate a code review process and speed of delivery. Usually, a shorter cycle time means a more efficient development process.

      It is also beneficial to examine the four phases of cycle time to see more accurately where bottlenecks and inefficiencies are in the process. Here they are:

      1. Time to Open reflects how quickly the code review process is initiated.
      2. Pickup Time indicates the reviewers’ workload, priorities, and communication efficiency.
      3. Review Time provides insights into the speed of giving feedback, implementation of changes, and reaching agreements.
      4. Time to Deploy shows the speed of delivery to end users.

      By using Awesome Graphs, these metrics can be found on the Bitbucket Pull Requests page.

      Pull Request Cycle Time

      The app also lets you see the most frequent resolution times to make more accurate project estimations and spot pull requests that take an unusually long or short time to resolve.

      bitbucket pull request statistics

      Activities in Pull Requests 

      The number of comments and the level of engagement from reviewers contain valuable insights into team collaboration, code quality, and the effectiveness of the code review process. On the Awesome Graphs team, we always consider the number of activities in each pull request during code review, as a large number of comments might indicate issues in the workflow. At the same time, a reasonable amount of discussion leads to higher-quality code and creates a collaborative environment within the team.

      The activities in pull requests can be found in the Contributions Report. It shows how many comments each reviewer made in pull requests as well as how many comments each author got from reviewers.

      Activities in Pull Requests

      Apart from out-of-the-box reports, Awesome Graphs for Bitbucket provides the capability to export pull request data to a CSV file or via the REST API to build custom reports and gain insights tailored to specific needs.

      To sum up, pull request metrics provide an overall view of what is happening in projects and within teams. Choosing the most important ones for you and looking at them holistically will help you make informed decisions based on well-rounded insights, evaluate the code review process, and improve product delivery. Incorporating Awesome Graphs for Bitbucket into your software development workflow will make tracking these metrics much easier and more efficient.

      Related posts

        How to Do a Code Review in Bitbucket: Best Practices from Our Team

        November 28, 2023
        #Bitbucket#Collaboration#How To#Project management
        21 min

        In this article, we will spill the beans and share how our team conducts code review in Bitbucket to ensure smooth feature delivery and mitigate risks.

        At Stiltsoft, we have several development teams dedicated to creating apps that enhance the functionality of Atlassian products. Our development team consists of 4 full-stack engineers with accumulated software development experience of over 30 years and focuses on several apps for Jira and Bitbucket. One of them is Awesome Graphs for Bitbucket, which is used by over 1,600 companies, including Apple, Oracle, and Mastercard.

        Software Development Workflow

        An illustration of the code review process and its main principles would not be complete without delving into the software development workflow. In our team, this process is structured around the following key milestones:

        1. Solution Proposal
        2. Code writing
        3. Code review and Testing
        4. Release and Deployment

        software development workflow

        Let’s look closer at the first three stages and their peculiarities.

        Solution Proposal

        Before making significant changes to the code, we prepare a Solution Proposal to outline the planned changes and their implementation to our teammates. Significant changes include:

        • adding a completely new feature
        • making substantial changes to UI or changes that impact many files
        • changing backend components
        • integrating new tools.

        Our solution proposal consists of 4 main parts:

        1. Motivation – Why did we decide to work on this issue?
        2. Context of the problem – What is happening now?
        3. Suggested Solution – How exactly are we going to implement this?
        4. Consequences – What changes will this solution entail?

        This is extremely useful, particularly for newcomers, who could otherwise spend a lot of time implementing a solution that either does not meet our internal standards or could be achieved much more easily and quickly because we have already resolved similar issues. Apart from this, it helps the whole team to:

        • validate the solution with colleagues and avoid “tunnel vision” or reinventing the wheel
        • discuss the implementation of complex issues where it is not entirely clear how to solve them
        • facilitate the later code review process as teammates are already familiar with the proposed solution
        • check if the suggested solution adheres to the business requirements.

        When the Solution Proposal is ready, the author shares it with the product and development teams and organizes a call in Slack to discuss questions if necessary. These documents are stored in the Confluence product space, ensuring that our team or colleagues can access them whenever needed.

        Code Writing

        Here, we won’t dwell on how to write the code but will describe how the repositories are structured, using the example of one of our apps, Awesome Graphs for Bitbucket. Since the apps for Bitbucket Cloud and Data Center differ in functionality, we have two separate repositories for them. However, unlike the backend, the frontend for both app editions is almost identical. To avoid duplicating features, we allocated the shared part of the code into a separate third repository and connected it as a Git submodule.

        code writing

        It allows us to:

        • maintain consistency across both versions of the app
        • avoid code duplication and implementing shared features twice
        • reduce redundancy and development efforts.

        However, this solution has its drawbacks, such as:

        1. Complexity in reviewing changes as reviewers might need to navigate between the parent repository and the submodule’s repository to understand the full context of the changes.
        2. Merge conflicts as adding a new feature to the shared code for one parent repository blocks changes for the second repo until we update the submodule.
        3. Submodule update challenges, as we don’t create a separate pull request in the subdirectory. Thus, after merging the pull request into the main repository, the author needs to remember to merge the feature branch of the shared repo into the dev branch.

        How the team tackles these challenges:

        1. We use Submodule Changes for Bitbucket to avoid complexity in reviewing changes. This tool allows us to review code changes in the subdirectory as a part of the main repository. We can leave comments and tasks on the submodule’s code in the pull request of the parent repo.
        2. Possible merge conflicts and the impact of a feature on another parent repo are considered within the Solution Proposal.
        3. We use the internal app to automate the process of merging the feature branch to the dev branch of the shared repository.

        Code Review and Testing 

        Since the details of the code review will be given in the next section, the only thing to mention here is that we create pull requests to facilitate this process in Bitbucket Data Center.

        As for testing, an important point to note is that we don’t have QA engineers. The development team carries out testing to check the code for bugs and errors, and the product team verifies compliance with business requirements. We can use tools like Playwright to automate the process if necessary. The main idea behind this approach is that if a developer writes code conscientiously and pays attention to quality, there is no need to involve testers. The code owner will understand its behavior and find deviations much better than anyone else. The developer creates tests during code writing to cover at least 80% of the code in the pull request. This is tracked for each commit using SonarQube.

        Now that you know the main milestones of our development process, let’s move on to the main principles of the team’s code review process.

        Code Review Guidelines of Our Team

        When to Create a Pull Request

        We agreed to create a pull request only if it contains code ready for deployment so the app will remain working after its merge. When an author needs feedback on functionality that is not ready for production, alternative ways, such as comparing two branches or requesting assistance in Slack, are used.

        When there are dependent pull requests, the subsequent pull request is created only when the previous one was merged. For example, we recently enhanced one of the reports for the Awesome Graphs app. To avoid making one large pull request, we split the code changes into two parts: frontend and backend. Initially, the author created a pull request to review changes in the backend. Only after its merge did they submit the second one with frontend modifications. This is done to ensure that reviewers don’t get puzzled over determining which pull request to review first and have a clear understanding of priorities.

        In this case, two versions of one functionality may appear in the master branch. As in the example above, we had two backend versions for the same report. That’s why, to indicate our intention to remove the old functionality soon and ensure that another developer won’t start using it, we annotate methods and classes slated for removal in future pull requests with “Deprecated”. This annotation should contain a link to the feature, after which completion, the annotated element should be deleted.

        Pull Request Title and Description

        Creating a pull request is always accompanied by giving it a title and adding a description. The title should be clear and reflect the essence of code changes, e.g., Pie Report Optimization: Backend.

        The description is added so that each reviewer understands what changes will be made and why. It is similar to a short summary of the Solution Proposal and consists of 3 parts:

        • Context. An overview of the initial problem and associated constraints.
        • Solution. Key points of how we resolve the issue.
        • Results. A brief description of what we will get after implementing this solution.

        code review guidelines

        Pull Request Reviewers

        As our development team is relatively small, we add all members to review a pull request by default. This is done to share knowledge and ensure everyone is aware of code changes. However, if a reviewer cannot participate in code review for any reason, e.g., vacation, illness, or heavy workload, they remove themselves from the list of reviewers. Later, they can rejoin code review if circumstances have changed.

        The main principle is that at least one reviewer should always be available to assess the pull request. Typically, code reviews involve 2-3 developers.

        Once all developers have approved the pull request, the author adds a product team member. This step is essential as the product team tests changes and checks their compliance with the requirements and expected results.

        PR Pickup Time & Review Rounds

        During code review, one of the crucial aspects our team considers is pull request pickup time. We define it as the time from when a reviewer is added to the pull request until their first action.

        The Service Level Agreement (SLA) for pickup time is one working day. The pull request author monitors this, and if no response is received within this time frame, they notify the team in the Slack channel. The author sends second and subsequent reminders once every 4 hours.

        We clearly define these time frames to avoid making the author guess when they will receive feedback and to enable them to make plans and work on other tasks.

        The same rules are applied to reviewers’ pickup time after changes are made to the pull request. In this case, the author notifies reviewers by adding a comment such as “All comments addressed, please review again.”

        Reviewers, in turn, notify the author of the pull request that they finished the code review by adding “Approved” or “Needs work” status.

        We do not limit the number of review rounds but track the number of activities in each pull request, as many comments might indicate issues in the workflow. To track this, we have an internal app for Bitbucket that counts the comments and tasks left in PRs. If this number exceeds 30, the tool asks the author and the reviewers to answer four questions:

        1. What went well in this pull request?
        2. Why do you think there are more than 30 comments and tasks?
        3. What can be done next time to avoid this?
        4. What caused the most trouble in this pull request personally to you?

        how to do a code review

        We review these answers every two months to reflect on the processes, find bottlenecks, and optimize the workflow.

        Pull Request Size

        In pull requests, we look not only at the number of comments but also at the size of the pull requests themselves. Our team believes that a great pull request should strike a balance between being small enough to be reviewed quickly by teammates and comprehensive enough to cover the planned changes or feature additions. That’s why the author of a pull request always strives to make it as simple and clean as possible.

        Moreover, we are considering working on a new report for the Awesome Graphs app. It will enable users to see the pull request size distribution and track changes in PR size over time. This functionality is not currently available to our clients, but we’ve built a prototype using our team’s data. Pretty good, right?

        If this report sparked your interest, feel free to contact us. We will be glad to share the details and hear your thoughts.

        pull request size

        Code Refactoring

        When developing a new feature, the author sometimes comes across parts of the code that they’d like to rewrite and update. This is inevitable as technologies evolve, new tools emerge, and our expertise expands. However, we agreed that refactoring is permitted within the scope of a pull request only if it affects the feature we are working on. Besides, reviewers can request that refactoring-related changes be moved to a separate pull request or rolled back if they distract from the review of the main functionality. This is extremely useful for large-scale refactoring, as the developer can work on a particular task without investing time in non-priority ones.

        Refactoring unrelated to the current feature is postponed or scheduled for R&D Day.


        Every Friday, instead of working on regular tasks, we focus on enhancing the Developer Experience (DX) and exploring new tools to streamline and improve our processes. Here also come tasks related to refactoring and technical debt.


        Tools for Code Review in Bitbucket

        We’ve already mentioned some of these tools, but let’s put them together.

        Submodule Changes for Bitbucket – a tool that helps review the changes made to Git submodules in the Diff tab in Bitbucket. We use it for two primary purposes:

        1. To review changes in the submodule as if they were part of the parent repository. We can leave comments and tasks on the submodule’s code within the code review process of the parent repository.
        2. To check a new merge and ensure that the submodule commit can be merged into the dev branch in fast-forward mode.

        tools for code review

        SonarQube – a tool for static code analysis and calculating the percentage of code covered by tests. It helps us analyze the code of a single pull request. We have installed the Include Code Quality for Bitbucket plugin, which displays the SonarQube code quality check results within the pull request. It prevents us from merging a pull request if the code quality does not meet our team’s standards:

        • There are no errors or vulnerabilities in the new code identified by SonarQube.
        • Test coverage for new code is at least 80%. This requirement encourages us to gradually increase the test coverage of the codebase, as when modifying previously written code, we also write tests for it if none exist.

        TeamCity Integration for Bitbucket – a tool to run TeamCity builds from Bitbucket. We use it for:

        • Running tests and initiating SonarQube checks after each commit pushed to Bitbucket.
        • Managing the deployment of our projects to the Atlassian Marketplace and production environments. Although this is not directly related to code review in Bitbucket, it is essential to the overall development process.

        Following these code review guidelines allows us to maintain high-quality code and foster efficient collaboration, leading to better, more reliable software and a more productive development environment. Apart from this, we always leave room for improvement and discuss the processes every two months to make the development of our products even more robust and effective.
