Pull Request Analytics: How to Visualize Cycle Time / Lead Time and Get Insights for Improvement

March 30, 2021
#How To#Bitbucket#Reporting
12 min

Cycle Time / Lead Time is one of the most important metrics for software development. It can tell a lot about the efficiency of the development process and the teams’ speed and capacity. In the previous article, we showed you how to get a detailed report with the pull request statistics and Cycle Time / Lead Time calculated on the repository level. 

Today we’ll tell you how to use this report:

  • How to visualize the pull request data.
  • What things to pay attention to.
  • What insights you can get to improve performance.

Please note that we define Cycle Time / Lead Time as the time between the developer’s first commit and the moment the pull request is merged, and we’ll refer to it as Cycle Time throughout the article.

Analyzing your codebase

First, you need to understand the current state of affairs and how it compares to industry standards. According to Code Climate’s research, the industry-wide median for Cycle Time is 3.4 days, with only the top 25% of teams managing to keep it as low as 1.8 days and the bottom 25% having a Cycle Time of 6.2 days.

© Code Climate

To get a better understanding of the development process, it might be helpful to look at the teams’ dynamics and monitor the changes over time. The following chart shows how the average Cycle Time changes month after month with a trend line, so you can see objectively whether the development process is getting faster or slower and check how your rates compare to the industry average. Follow the instructions to build this chart.
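If you export the pull request report to CSV, a monthly average like the one in this chart can be sketched in a few lines of Python. Everything below is illustrative: the merge dates and Cycle Time values are made up, and the field layout is an assumption, not the report’s exact format.

```python
from collections import defaultdict
from datetime import date

# Made-up sample: (merge date, Cycle Time in days) taken from a PR report
merged_prs = [
    (date(2021, 1, 5), 2.1), (date(2021, 1, 20), 4.3),
    (date(2021, 2, 2), 3.0), (date(2021, 2, 17), 5.4), (date(2021, 2, 25), 2.4),
]

# Group Cycle Time values by merge month and average each group
by_month = defaultdict(list)
for merged, cycle_time in merged_prs:
    by_month[(merged.year, merged.month)].append(cycle_time)

monthly_avg = {month: sum(values) / len(values) for month, values in sorted(by_month.items())}
for (year, month), avg in monthly_avg.items():
    print(f'{year}-{month:02d}: {avg:.1f} days')
```

Plotting these monthly averages with a trend line then shows at a glance whether the process is speeding up or slowing down.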

For a more precise analysis and evaluation of the current code base, you can also use the Cycle Time distribution chart that provides pull request statistics aggregated by their Cycle time value, making it easy to spot the outliers for further investigation. Learn how to build this chart.
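Under the hood, a distribution chart like this just buckets pull requests by their Cycle Time. A minimal sketch, with made-up values and bucket boundaries chosen purely for illustration:

```python
from collections import Counter

# Made-up Cycle Time values (in days) for recently merged pull requests
cycle_times = [0.5, 1.2, 2.8, 3.1, 3.4, 6.9, 14.0]

def bucket(days):
    # Map a Cycle Time value to the range a distribution chart would show
    if days < 1:
        return '<1 day'
    if days < 4:
        return '1-3 days'
    if days < 8:
        return '4-7 days'
    return '8+ days'

distribution = Counter(bucket(days) for days in cycle_times)
print(dict(distribution))
```

Pull requests falling into the long tail (here, 8+ days) are the outliers worth investigating.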

In addition to the Cycle Time, Awesome Graphs for Bitbucket lets you analyze the pull request resolution time out-of-the-box. Using the Resolution Time Distribution report, you can see how long it takes pull requests to merge or decline, find the shortest and longest pull requests, and predict the resolution time of future pull requests with the historical data.

While Cycle Time serves as a great indicator of success, and keeping it low helps increase the output and efficiency of your teams, it isn’t diagnostic by itself and can’t tell you what you are doing right or wrong. To understand why it is high or low, you’ll need to dig deeper into the metrics it consists of. The chart below gives you a general overview of the pull requests on the repository level and shows the Cycle Time with the percentage contributed by each of its stages (which we’ll discuss in detail in the following paragraphs). You can build a chart like this using the Chart from Table macro, available in the Table Filter and Charts app.

Breaking down the Cycle Time

We break down Cycle Time into four stages:

  • Time to open (from the first commit to open)
  • Time waiting for review (from open to the first comment)
  • Time to approve (from the first comment to approved)
  • Time to merge (from approved to merge)
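To make the arithmetic concrete, here is a minimal sketch that derives the four stages from the event timestamps of a single pull request. The timestamps and event names are made up for illustration; they don’t come from any particular API.

```python
from datetime import datetime, timedelta

# Made-up event timestamps for one pull request
events = {
    'first_commit':  datetime(2021, 3, 1, 9, 0),
    'opened':        datetime(2021, 3, 1, 15, 0),
    'first_comment': datetime(2021, 3, 2, 10, 0),
    'approved':      datetime(2021, 3, 2, 16, 0),
    'merged':        datetime(2021, 3, 3, 11, 0),
}

# Each stage is the gap between two consecutive events
stages = {
    'time_to_open':    events['opened'] - events['first_commit'],
    'time_to_review':  events['first_comment'] - events['opened'],
    'time_to_approve': events['approved'] - events['first_comment'],
    'time_to_merge':   events['merged'] - events['approved'],
}

# The four stages add up to the full Cycle Time (first commit to merge)
cycle_time = events['merged'] - events['first_commit']
for name, delta in stages.items():
    print(f'{name}: {delta} ({delta / cycle_time:.0%})')
```

Summing the four stages always reproduces the full Cycle Time, which is what makes the percentage breakdown in the chart above possible.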

Now we’ll go through each of these stages, discussing the things to pay attention to.

Time to Open

This metric is arguably the most important of all, as it influences all the later stages, and, according to research, pull requests that open faster tend to merge faster.

Long Time to Open might indicate that the developer had to switch tasks and/or that code was rewritten, which might also result in large batch sizes. In one of the previous articles, we described how you can check the size of your pull requests in Bitbucket, so you can also use it for a deeper analysis.

One of the things you can do to improve your Time to Open is to decrease the pull request size to no more than 200 to 400 lines of code. This will influence every stage of the cycle, as smaller pull requests are more likely to be reviewed thoroughly and approved sooner.
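If you already have per-PR size data, for example from the report described in the linked article, flagging oversized pull requests takes one line. The field names below are hypothetical:

```python
# Made-up rows from a pull request size report
prs = [
    {'id': 101, 'loc_added': 120, 'loc_deleted': 40},
    {'id': 102, 'loc_added': 480, 'loc_deleted': 150},
    {'id': 103, 'loc_added': 90, 'loc_deleted': 10},
]

# Flag pull requests whose total changed lines exceed the 400-line guideline
oversized = [pr['id'] for pr in prs if pr['loc_added'] + pr['loc_deleted'] > 400]
print(oversized)  # [102]
```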

Time to Review

Time to Review is a great metric for understanding whether your teams have adopted Code Review as part of their daily routine. If it’s high, reviewing might not have become a habit yet, and you’ll need to foster this culture. Another reason might be that the pull requests are not review-friendly, so the reviewers put off dealing with them. You can change this, once again, by keeping pull requests small and by writing a reasonable description, which makes it easier to get started with them. If a long Time to Review is caused by organizational issues, it might require reprioritization.

Time to Approve

This is a stage you don’t really want to minimize but rather make consistent by reducing inefficiencies in the code review process. While there are many strategies for Code Review, there is hardly any industry standard for Code Review metrics, so you’ll need to focus on organizing the process and finding a way to get constructive feedback.

Time to Merge

Long Time to Merge might be an indicator that there are obstacles in the delivery workflow. To improve it, you need to find out if there are any blockers in the process, including manual deployment, and check if your tooling satisfies your current needs.

Wrapping up

Cycle Time’s importance is difficult to overestimate: this metric can tell a lot about the way you work, and by controlling it, you can optimize the development process and deliver faster.

Once again, we built the initial pull request report with the help of the Awesome Graphs for Bitbucket app as a data provider and used the Table Filter and Charts for Confluence app to aggregate and visualize the data.

These are just a few examples, but you can get much more even from this one report. Check out the other guides for charts based on data from Bitbucket. Share your feedback and ideas in the comments, and we’ll try to cover them in future posts.

Related posts

    Announcing New Stiltsoft Partner Program

    March 22, 2021
    #News
    2 min

    At Stiltsoft, we recognize how important both our partners and customers are, so we decided to launch a new Partner Program, which for now covers only the Awesome Graphs for Bitbucket app.

    We are building the new Partner Program to provide you with comprehensive training materials and resources, free app licenses, promo codes, and more. If you apply, you’ll save your sales team time and effort with the help of our training course and demos on demand.

    We want to inform you that Awesome Graphs for Bitbucket had the Standard Atlassian Partner Discount until March 21st, 2021. From March 22nd, the 20% discount is available only to Stiltsoft Partners.

    Other Stiltsoft apps participate in the Standard Discount scheme for Atlassian Marketplace Products.

    To get a 20% discount, you will need to become our partner. Moreover, you will get other partnership perks:

    • Free licenses
    • Demo on demand
    • Promo codes
    • Enablement materials
    • App training
    • Co-marketing activities

    To join, simply drop an email at partner@stiltsoft.com, and we will get you set up as soon as possible. For more details, please visit our website.

    We look forward to continued collaboration!

    Related posts

      How to Get the Number of Commits and Lines of Code in Pull Requests

      February 4, 2021
      #How To#Bitbucket#Reporting
      9 min

      According to research conducted by the Cisco Systems programming team to determine the best practices for code review, a pull request should include no more than 200 to 400 lines of code. Keeping your pull requests within these limits not only speeds up the review, but this amount of information is also optimal for the brain to process effectively at one time.

      In case you’d like to analyze your current codebase, counting lines of code manually for each pull request could take years, so we suggest automating this process with the help of Awesome Graphs for Bitbucket and Python. This article will show you how to build a report with pull request size statistics in terms of lines of code and commits on the repository level.

      What you will get

      As a result, you’ll get a CSV file containing a detailed list of pull requests created during the specified period, with the number of commits and the lines of code added and deleted in each of them.

      How to get it

      To get the report described above, we’ll run the following script, which makes requests to the REST API and does all the calculations and aggregation for us.

      import requests
      import csv
      import sys
      
      bitbucket_url = sys.argv[1]
      login = sys.argv[2]
      password = sys.argv[3]
      project = sys.argv[4]
      repository = sys.argv[5]
      since = sys.argv[6]
      until = sys.argv[7]
      
      get_prs_url = bitbucket_url + '/rest/awesome-graphs-api/latest/projects/' + project + '/repos/' + repository \
                  + '/pull-requests'
      
      s = requests.Session()
      s.auth = (login, password)
      
      
      class PullRequest:
      
          def __init__(self, title, pr_id, author, created, closed):
              self.title = title
              self.pr_id = pr_id
              self.author = author
              self.created = created
              self.closed = closed
      
      
      class PullRequestWithCommits:
      
          def __init__(self, title, pr_id, author, created, closed, commits, loc_added, loc_deleted):
              self.title = title
              self.pr_id = pr_id
              self.author = author
              self.created = created
              self.closed = closed
              self.commits = commits
              self.loc_added = loc_added
              self.loc_deleted = loc_deleted
      
      
      def get_pull_requests():
      
          pull_request_list = []
      
          is_last_page = False
      
          while not is_last_page:
      
              response = s.get(get_prs_url, params={'start': len(pull_request_list), 'limit': 1000,
                                            'sinceDate': since, 'untilDate': until}).json()
      
              for pr_details in response['values']:

                  title = pr_details['title']
                  pr_id = pr_details['id']
                  author = pr_details['author']['user']['emailAddress']
                  created = pr_details['createdDate']
                  closed = pr_details['closedDate']

                  pull_request_list.append(PullRequest(title, pr_id, author, created, closed))
      
              is_last_page = response['isLastPage']
      
          return pull_request_list
      
      
      def get_commit_statistics(pull_request_list):
      
          pr_list_with_commits = []
      
          for pull_request in pull_request_list:
      
              print('Processing Pull Request', pull_request.pr_id)
      
              commit_ids = []
      
              is_last_page = False
      
              while not is_last_page:
      
                  url = bitbucket_url + '/rest/api/latest/projects/' + project + '/repos/' + repository \
                      + '/pull-requests/' + str(pull_request.pr_id) + '/commits'
                  response = s.get(url, params={'start': len(commit_ids), 'limit': 25}).json()
      
                  for commit in response['values']:
                      commit_ids.append(commit['id'])
      
                  is_last_page = response['isLastPage']
      
              commits = 0
              loc_added = 0
              loc_deleted = 0
      
              for commit_id in commit_ids:
      
                  commits += 1
      
                  url = bitbucket_url + '/rest/awesome-graphs-api/latest/projects/' + project + '/repos/' + repository \
                      + '/commits/' + commit_id
                  response = s.get(url).json()
      
                  # Commits that the API has no statistics for are skipped
                  if 'errors' not in response:
                      loc_added += response['linesOfCode']['added']
                      loc_deleted += response['linesOfCode']['deleted']
      
              pr_list_with_commits.append(PullRequestWithCommits(pull_request.title, pull_request.pr_id, pull_request.author,
                                                                 pull_request.created, pull_request.closed, commits,
                                                                 loc_added, loc_deleted))
      
          return pr_list_with_commits
      
      
      with open('{}_{}_pr_size_stats_{}_{}.csv'.format(project, repository, since, until), mode='a', newline='') as report_file:
      
          report_writer = csv.writer(report_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
          report_writer.writerow(['title', 'id', 'author', 'created', 'closed', 'commits', 'loc_added', 'loc_deleted'])
      
          for pr in get_commit_statistics(get_pull_requests()):
              report_writer.writerow([pr.title, pr.pr_id, pr.author, pr.created, pr.closed, pr.commits, pr.loc_added, pr.loc_deleted])
      
      print('The resulting CSV file is saved to the current folder.')
      
      

      To make this script work, you’ll need to install the requests module in advance; the csv and sys modules are available in Python out of the box. Then you need to pass seven arguments when executing the script: the URL of your Bitbucket instance, login, password, project key, repository name, since date, and until date. Here’s an example:

      py script.py https://bitbucket.your-company-name.com login password PRKEY repo-name 2020-11-30 2021-02-01

      As you’ll see at the end of the execution, the resulting file will be saved to the same folder next to the script.

      Want more?

      The Awesome Graphs for Bitbucket app, and its REST API in particular, allows you to get much more than described here, and we want to help you get the most out of it. If you have an idea in mind or a problem that you’d like us to solve, write here in the comments or create a request in our Help Center, and we’ll cover it in future posts! In fact, the idea for this very article was brought to us by our customers, so there is a high chance that your case will be the next one.

      Here are a few how-tos that you can read right now:

      Related posts

        How to count lines of code in Bitbucket to decide what SonarQube license you need

        October 29, 2020
        #Reporting#How To#Bitbucket
        10 min

        SonarQube is a tool used to identify software metrics and technical debt in the source code through static analysis. While the Community Edition is free and open-source, the Developer, Enterprise, and Data Center editions are priced per instance per year and based on the number of lines of code (LOC). If you want to buy a license for SonarQube, you need to count lines of code for Bitbucket projects and repositories you want to analyze. 

        Awesome Graphs for Bitbucket offers you different ways of getting this information in the Data Center and Server versions. In this post, we’ll show how you can count LOC for your Bitbucket instance, projects, or repositories, using the Awesome Graphs’ REST API resources and Python.

        How to count lines of code for the whole Bitbucket instance

        Getting lines of code statistics for an instance is pretty straightforward and will only require making one call to the REST API. Here is an example of the curl command:

        curl -X GET -u username:password "https://bitbucket.your-company-name.com/rest/awesome-graphs-api/latest/commits/statistics"

        And the response will look like this:

        {
            "linesOfCode":{
                "added":5958278,
                "deleted":2970874
            },
            "commits":57595
        }
        

        It returns the number of lines added and deleted. So, to get the total, you simply need to subtract the number of deleted lines from the number of added ones.
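For example, applying that subtraction to the sample response above in Python:

```python
import json

# The statistics payload shown above; total LOC = lines added minus lines deleted
payload = json.loads('{"linesOfCode": {"added": 5958278, "deleted": 2970874}, "commits": 57595}')

total_loc = payload['linesOfCode']['added'] - payload['linesOfCode']['deleted']
print(total_loc)  # 2987404
```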

        Please note that blank lines are also counted in lines of code statistics in this and the following cases.

        How to count lines of code for each project in the instance

        You can also use the REST API resource to get the LOC for a particular project, but doing this for each project in your instance would definitely take a while. That’s why we are going to automate this process with a simple Python script that runs through all of your projects, counts the total LOC for each one, and then saves the list of project keys with their total LOC to a CSV file.

        The resulting CSV will look like this:

        And here is the script to get it:

        import requests
        import csv
        import sys
        
        bitbucket_url = sys.argv[1]
        bb_api_url = bitbucket_url + '/rest/api/latest'
        ag_api_url = bitbucket_url + '/rest/awesome-graphs-api/latest'
        
        s = requests.Session()
        s.auth = (sys.argv[2], sys.argv[3])
        
        def get_project_keys():
        
            projects = list()
        
            is_last_page = False
        
            while not is_last_page:
                request_url = bb_api_url + '/projects'
                response = s.get(request_url, params={'start': len(projects), 'limit': 25}).json()
        
                for project in response['values']:
                    projects.append(project['key'])
                is_last_page = response['isLastPage']
        
            return projects
        
        def get_total_loc(project_key):
        
            url = ag_api_url + '/projects/' + project_key + '/commits/statistics'
            response = s.get(url).json()
            total_loc = response['linesOfCode']['added'] - response['linesOfCode']['deleted']
        
            return total_loc
        
        
        with open('total_loc_per_project.csv', mode='a', newline='') as report_file:
        
            report_writer = csv.writer(report_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            report_writer.writerow(['project_key', 'total_loc'])
        
            for project_key in get_project_keys():
                print('Processing project', project_key)
                report_writer.writerow([project_key, get_total_loc(project_key)])
        

        To make this script work, you’ll need to install the requests module in advance; the csv and sys modules are available in Python out of the box. You need to pass three arguments when executing the script: the URL of your Bitbucket instance, login, and password. Here’s an example:

        py script.py https://bitbucket.your-company-name.com login password

        How to count lines of code for each repository in the project

        This case is very similar to the previous one, but this script will get the total LOC for each repository in the specified project. Here, the resulting CSV file will include the list of repo slugs in the specified project and their LOC totals:

        Counting lines of code for each repository in Bitbucket

        The script:

        import requests
        import csv
        import sys
        
        bitbucket_url = sys.argv[1]
        bb_api_url = bitbucket_url + '/rest/api/latest'
        ag_api_url = bitbucket_url + '/rest/awesome-graphs-api/latest'
        
        s = requests.Session()
        s.auth = (sys.argv[2], sys.argv[3])
        
        project_key = sys.argv[4]
        
        
        def get_repos(project_key):
            
            repos = list()
        
            is_last_page = False
        
            while not is_last_page:
                request_url = bb_api_url + '/projects/' + project_key + '/repos'
                response = s.get(request_url, params={'start': len(repos), 'limit': 25}).json()
                for repo in response['values']:
                    repos.append(repo['slug'])
                is_last_page = response['isLastPage']
        
            return repos
        
        
        def get_total_loc(repo_slug):
        
            url = ag_api_url + '/projects/' + project_key + \
                  '/repos/' + repo_slug + '/commits/statistics'
            response = s.get(url).json()
            total_loc = response['linesOfCode']['added'] - response['linesOfCode']['deleted']
        
            return total_loc
        
        
        with open('total_loc_per_repo.csv', mode='a', newline='') as report_file:
            report_writer = csv.writer(report_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            report_writer.writerow(['repo_slug', 'total_loc'])
        
            for repo_slug in get_repos(project_key):
                print('Processing repository', repo_slug)
                report_writer.writerow([repo_slug, get_total_loc(repo_slug)])
        

        You need to pass four arguments: the URL of your Bitbucket instance, login, password, and project key. The command will look as follows:

        py script.py https://bitbucket.your-company-name.com login password PROJECTKEY

        Want to learn more?

        We should note that the total LOC we get in each case is the number of lines added minus lines deleted across all branches. Because of this, some repos may even have negative LOC totals, so it might be useful to look at the LOC for the default branch and compare it to the LOC for all branches.

        If you would like to learn how to get this information, write here in the comments or create a request in our Help Center, and we’ll cover it in future posts!

        You can also check how to search for commits in Bitbucket: read our blog post that suggests three different ways to do it.

        Related posts

        Top 5 Reports from Jira and Bitbucket to Get the Most of Your Sprint Retrospective

        August 25, 2020
        #Confluence#Reporting#Jira#Bitbucket#How To#Confluence Tutorial
        10 min

        In the previous post How Project Managers and Scrum Masters Use Confluence for Project Monitoring, we showed how management professionals use Confluence to build a dashboard based on data from Jira and Bitbucket for project monitoring. In the present article, we move on to the second part of the dashboard. It contains reports showing what went well during the sprint and what needs to be worked on. The dashboard provides you with the visualized data for analysis during the sprint retrospective when the team can inspect itself and plan the improvements to be enacted in the following sprints. 

        Analyze what’s been done: Pull Request Activities charts

        The Pull Request Activities chart shows the number of pull requests by state and comes in two variants: pull requests grouped by a repository or by a user. 

        By looking at these charts, you can identify if there were any problems in teams working in the same repos, see how much each person managed to do, and use these insights in future sprint planning. For example, if you see there were a lot of declined pull requests during the sprint, there could be some problems in the teams’ arrangements, so this is a perfect occasion to discuss and resolve them.

        The Pull Request Activities chart shows the number of open, merged, and declined pull requests in a particular repository.
        Grouped like this, the Pull Request Activities chart shows the number of pull requests made by a particular user around the whole project.

        One more point to consider is the number of open pull requests at the end of the sprint — you need to count them in if you want to predict whether you’ll be able to complete work on time in the next sprint.

        Follow the instructions to build these charts.

        Learn, plan and improve: Pull Requests Gantt chart

        The following chart can give your team an understanding of how long pull requests take to resolve. Using historical Git data, it can help you predict whether your team can finish the tasks by the end of the sprint.

        The Pull Requests Gantt chart helps you see the tendencies in pull request resolution time for each user. 

        To make realistic predictions, look at the average age of the PRs created by each author. A developer who is junior or new to a particular repository or project tends to make more mistakes, or is subjected to more thorough reviews and testing, which potentially delays their PRs, so you need to consider this in your planning. Your ideal models will be the users whose sets of “colorful bricks” are almost the same size, as they probably tend to follow the accepted practice.

        One more thing that you can pay attention to is the case when a pull request or a few are closed by the very end of the sprint. It could be a sign that the author was hurrying to meet the deadline, which might be the result of review delays or just carelessness, so keep that in mind.

        Check out the guide to learn how to build this chart.

        Find out who did what: Activity graph

        Activity Graph is made to help you know what everybody was doing during the sprint in terms of commits, pull requests, Jira issues, and meetings.

        The Activity graph helps you visually compare the distribution of the workload.

        The idea behind it is that predictions based on engineering metrics are great, but even a few calls or meetings can slow down the processes. In research by Harvard Business Review, 65% of senior managers said that meetings keep their teams from completing their work. That’s why you need to look at who does what, identify bottlenecks, and manage the processes so that there are no obstacles or reasons for delays. You can determine who is spreading themselves too thin and find those who are not actively involved. It’s evident that if you expect active development from your engineers while they are stuck in a series of meetings, it won’t work.

        Using these metrics, you can understand why the team is moving with such speed and how the changes in the processes affect the team dynamics.

        Learn how to build this Radar (Spider) chart.

        Count it up: Velocity graph

        Last but not least, when we look back on the finished iteration, we calculate velocity.

        The Velocity graph shows the ratio of story points committed vs. story points completed during the sprint.

        When we plan a new sprint, we should consider the information about story points performance in the previous sprints. This way, we can observe the trends, make some conclusions, and change the planning approach if needed. For example, we can calculate the average number of story points completed within one sprint (velocity) and stick to this value in the following sprints. And after that, as more data about finished sprints is accumulated, you can plan much more accurately.
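For example, with made-up story point totals for the last five sprints, velocity is just the mean:

```python
# Made-up story points completed in the last five sprints
completed = [21, 18, 24, 19, 23]

# Velocity is the average number of story points completed per sprint
velocity = sum(completed) / len(completed)
print(velocity)  # 21.0
```

Committing roughly this many story points in the next sprint keeps the plan grounded in the team’s demonstrated pace.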

        See a full guide on how to work with this graph.

        Put it all together

        The graphs and charts illustrated in this and the previous article make up the multifunctional dashboard for project management, aimed to give you reporting insights based on data from Jira and Bitbucket, which we presented in the Project Management Dashboards in Confluence webinar.

        If you would like to build similar charts and graphs on your own, try the Table Filter and Charts for Confluence and Awesome Graphs for Bitbucket apps for free.

        Related posts

        How Project Managers and Scrum Masters Use Confluence for Project Monitoring

        July 28, 2020
        #How To#Confluence Tutorial#Confluence#Jira#Reporting#Analytics#Bitbucket
        10 min

        In the Project Management Dashboards in Confluence webinar, we talked about the tools that project managers and scrum masters use, and that help them make data-driven decisions based on the data from Jira and Bitbucket. We showed how you could enhance Confluence’s default functionality to create easy-to-understand reports for management and stakeholders with all the technical and business metrics visualized on Confluence pages.

        Here we bring this information back together and provide you with the guides on how to build a dashboard where you can ensure that projects remain on track and see the actual progress compared to the project objectives stated in the plans.

        Visualize the backlog: open vs resolved issues graph

        As new tasks, features, and bugs are added continuously during project implementation, visualizing the dynamics of the project backlog helps you spot bottlenecks in the processes in a timely manner. Using it, you’ll be able to find inefficiencies and support the teams whose backlogs contain more work than they can possibly handle.

        The Created vs Resolved Issues report shows the difference between the number of created and resolved issues over a given period and whether the overall backlog is moving towards resolution.

        Open vs Resolved Issues graph

        This chart is built using the Jira Issues macro, which pulls the data from Jira according to a JQL query or a link to a filter.

        Check out a full guide on how to build the graph.

        See the advancement in the project: Gantt chart

        A Gantt chart is a tool that visualizes the development process, helping you schedule work and track progress. In a nutshell, it is a timeline used to illustrate how the project will run. You can see:

        • what tasks are included in a project or a sprint
        • start and end dates of a project or a sprint
        • task durations within the project schedule, i.e., their start and end dates
        • who works on a particular task.
        Gantt chart

        Using this chart, you can visualize all the tasks and phases of the project to optimize task planning and distribution, so you can predict when you will deliver the product. By visualizing the dependencies and parallel processes, you’ll also be able to find critical points, such as when the tasks depending on each other are planned at the same time slot.

        We have prepared detailed instructions on how to build this kind of chart and recommend you look through the 5 Tips to Become a Gantt Chart Expert Using Atlassian Confluence article to get the most out of it.

        Track the sprint progress: burndown chart

        A burndown chart is often used in Agile project management to visualize the amount of work completed during the sprint compared to the total work, so a team can keep track of the time remaining to complete that work.

        Burndown chart

        Based on the data exported from Jira, this chart displays the total amount of work in story points that a team should complete during the sprint. The orange line shows the amount of work left. The purple line displays how the sprint would run in an ideal world where the effort is distributed equally.

        The tasks burn down as they are marked completed, and on the last day of the sprint, no significant tasks should remain. If you see that your teams tend to fail to complete tasks on time, you need to investigate the reasons and reduce the workload.
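The ideal line itself is simple arithmetic: the committed work spread evenly over the sprint. A sketch with made-up numbers:

```python
# Made-up sprint parameters
total_points = 40  # total work committed, in story points
sprint_days = 10   # working days in the sprint

# The ideal line drops by an equal share of the total every day
ideal_line = [total_points * (1 - day / sprint_days) for day in range(sprint_days + 1)]
print(ideal_line[0], ideal_line[5], ideal_line[-1])  # 40.0 20.0 0.0
```

Plotting the actual remaining points against this line shows immediately whether the sprint is ahead of or behind schedule.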

        Learn how to build this chart.

        Make the development process transparent: engineering metrics

        While monitoring the progress of the project, it’s necessary to see the actual change over time. Here we offer you the chart that will show the dynamics of contributions in terms of commits made by users over the chosen period. You can build a similar chart showing the pull requests dynamics and other charts based on the data from Bitbucket by feeding in the corresponding CSV file, which you can get via the Awesome Graphs for Bitbucket’s Export to CSV feature.

        Using these, you’ll be able to see the trends in pull requests and commits and find out if your team is committing more code now than before.

        Commits Dynamics chart

        During the daily meetings, teams try to spot the difficulties that appear in the processes, and these charts can bring more transparency to them. For example, if you keep your tasks to a day or two and see that one of the developers hasn’t committed in a few days, maybe it’s time to talk and find out what difficulties they might be having.

        Follow the guide to build these charts.

        There’s more coming

        The graphs and charts described in this article will help you gain more visibility into the current state of the processes and make project monitoring easier. Using the Awesome Graphs for Bitbucket app as a data provider, and the Table Filter and Charts for Confluence app to aggregate and visualize the data from Bitbucket and Jira, you will get the functionality comparable to BI platforms in Confluence.

        In the next article in the series, we’ll tell you how to build the dashboard, which can be used by any agile team for a sprint retrospective.

        Watch the webinar’s recording on our YouTube channel while waiting for our next post and tell us what you think in the comment section.
