Supercharging Code Reviews: How to Automate Review Workflows on GitHub with Danger

Code reviews are an essential practice for any serious software development team. By having another set of eyes look over changes before they’re merged, reviews serve as an important quality gate and teaching tool. Reviews help catch bugs early, ensure coding style is consistent, and provide opportunities for teammates to learn from one another.

Despite these benefits, the review process often devolves into a rote checklist of tedious tasks. Reminding the submitter to add a description, update the changelog, or add tests becomes tiresome for reviewers. Reviews that should be focused on the substance of the change get derailed by bikeshedding over syntax or formatting issues.

Fortunately, many of these routine checks can be automated. Danger is a tool that can dramatically improve your team’s review workflow by running common checks and reporting the results as in-line comments. Automating the tedious parts of reviews lets engineers stay focused on the high-level design and implementation details that actually matter.

Why Automate Code Reviews?

While it may seem like overkill at first, automating parts of your code review process offers some compelling benefits:

Code Quality

A study from Microsoft Research found that code reviews can catch up to 60% of defects before they make it to production. However, the same study noted that the effectiveness of reviews varies significantly based on the reviewer’s workload and familiarity with the codebase.

By encoding best practices into automated checks, you ensure that reviews are consistently thorough. Danger acts as a tireless, unbiased reviewer that never misses a step no matter how busy the team gets.

Knowledge Sharing

Code reviews are a key way that institutional knowledge gets disseminated across a team. Junior engineers learn best practices by receiving feedback from senior teammates, while veterans gain exposure to new technologies and techniques.

But this knowledge transfer can’t happen if reviewers are bogged down with rote checks. Automating away these low-level concerns gives reviewers more bandwidth to focus on substantive feedback and mentorship.

Team Dynamics

The interpersonal dynamics of code reviews can be tricky to navigate, especially for those new to a team. Junior engineers may hesitate to leave critical feedback out of fear of sounding harsh or nitpicky.

Framing feedback as coming from an objective, automated system rather than an individual reduces this social friction. It’s not a personal attack to receive a Danger comment about a missing changelog entry – it’s just the rules.

Consistency

Every engineer has their own set of pet peeves and preferred coding style. Without clear guidelines, reviews can devolve into subjective arguments over meaningless minutiae.

Automated checks help align the team around an explicit, shared set of conventions. Discussions about the relative merits of tabs vs spaces are a lot less likely to derail a review when the choice has already been codified.

Velocity

Perhaps most importantly, automating code reviews can have a significant impact on a team’s overall velocity. A Google study on code review practices found that the optimal review turnaround time is under 24 hours, with delays leading to slower release cycles.

By catching issues early and reducing back-and-forth, automation helps reviews get turned around faster. Teams spend less time blocked waiting for a review and can stay focused on shipping features.

How Danger Works

At a high level, Danger acts as a virtual code reviewer that participates in every pull request on your GitHub repo. When a new PR is opened or updated, Danger will check the changes against a set of rules you’ve defined and leave comments if any violations are found.

Rules are written in a simple Ruby DSL and live in a Dangerfile in the root of your project. Here’s an example rule that checks for the presence of a PR description:

if github.pr_body.length < 5
  fail "Please provide a summary in the Pull Request description"
end

If a PR is opened without a sufficiently detailed description, Danger will leave a comment like this:

[Screenshot: Danger comment on the PR]

As the submitter addresses feedback, Danger will update its comments to reflect the current state of the PR. Once all violations have been fixed, Danger will show a ✅ to indicate the PR is ready for human review.

Adding Danger to a JavaScript Project

While Danger has plugins for a variety of languages, it’s especially well-suited for JavaScript projects. Let’s walk through the process of setting up Danger in a Node-based repo.

Installation

First, add Danger to your project’s dependencies using npm or yarn:

npm install --save-dev danger
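
If your project uses yarn instead, the equivalent command is:

yarn add --dev danger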

Next, you’ll need to create a GitHub account for Danger to use when commenting on PRs. This can be a dedicated bot account or simply an existing team member’s account. The important thing is that the account has collaborator access to the repo.

Once you have an account set up, you’ll need to generate a personal access token. Navigate to your GitHub settings and create a new token with the repo scope:

[Screenshot: GitHub personal access token settings]

Make sure to copy the token value as you’ll need it in the next step.

Configuration

With the GitHub account and token in hand, you’re ready to configure Danger. Create a .env file in the root of your project and add the token as an environment variable:

DANGER_GITHUB_API_TOKEN=your_token_here

Next, create a dangerfile.js in the root of your project. This is where you’ll define the rules Danger will check for each PR. Here’s a simple example to get started:

import { message, danger, warn } from "danger"

// Check for a CHANGELOG entry
const hasChangelog = danger.git.modified_files.includes("CHANGELOG.md")
if (!hasChangelog) {
  warn("Please add a changelog entry for your changes.")
}

// Check for a PR description
if (!danger.github.pr.body || danger.github.pr.body.length < 10) {
  message("Please provide a more detailed PR description.")
}

// Check for tests
const hasAppChanges = danger.git.modified_files.some(f => f.includes("src"))
const hasTestChanges = danger.git.modified_files.some(f => f.includes("test"))

if (hasAppChanges && !hasTestChanges) {
  warn("This PR includes app changes but no tests!")
}

This dangerfile defines three rules:

  1. Require a changelog entry for each PR
  2. Enforce a minimum length on PR descriptions
  3. Check for the presence of new tests when application code is changed

With the configuration in place, the last step is to set up Danger to run as part of your CI process.
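
Before wiring this into CI, it can be worth sanity-checking your rules locally. The danger-js CLI includes a danger pr command that evaluates your Dangerfile against an existing pull request and prints the results to your terminal rather than commenting on GitHub (it needs DANGER_GITHUB_API_TOKEN set in your environment). The PR URL below is just a placeholder:

npx danger pr https://github.com/your-org/your-repo/pull/123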

CI Integration

The specifics of integrating Danger into your CI pipeline will depend on what tools you’re using, but the basic idea is to add a new job that checks out the repo, installs dependencies, and then runs Danger.

For example, here’s how you might configure Danger to run on Travis CI:

language: node_js
node_js:
  - 12
before_script:
  - npm ci
script:
  - npm test
  - npm run danger

The key piece is the npm run danger line, which executes Danger and reports any violations as comments on the PR.
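
Note that npm run danger assumes a corresponding script in your package.json. A minimal sketch looks like this, with danger ci being the danger-js subcommand that detects the CI environment and posts results back to the PR:

{
  "scripts": {
    "danger": "danger ci"
  }
}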

Here’s a complete example of what a Danger workflow might look like on a real JavaScript project:

  1. Engineer opens a PR that modifies app code but doesn’t include tests or update the changelog
  2. Danger runs as part of the CI build and notices the missing files
  3. Danger leaves inline comments indicating what needs to be fixed:
    [Screenshot: Danger comment flagging missing tests]
    [Screenshot: Danger comment flagging a missing changelog entry]
  4. Engineer sees the feedback, adds the missing files, and updates the PR
  5. Danger re-runs and updates its comments to show that previous violations have been fixed

By automating these kinds of routine checks, Danger ensures that every PR meets a consistent quality bar without placing an undue burden on human reviewers.

Advanced Techniques

Once you have the basics of Danger up and running, there are a few advanced techniques you can use to get even more mileage out of the tool.

Inline Suggestions

In addition to leaving general PR-level comments, Danger can also point to specific lines of code that need to be updated. For example, you might use this to call out instances of a deprecated API or flag potential performance issues.

To add an inline comment, pass a file path and line number as extra arguments to warn (fail and message accept the same arguments). Combined with danger.git.diffForFile, which returns the added and removed lines for a file, you can scan just the changed content:

import { danger, warn, schedule } from "danger"

// Flag newly added source files that still call a deprecated API.
// `legacyFetch` is just a placeholder for whatever API you're phasing out.
schedule(async () => {
  const newSourceFiles = danger.git.created_files.filter(file =>
    file.includes("src")
  )

  for (const file of newSourceFiles) {
    const diff = await danger.git.diffForFile(file)
    if (diff && diff.added.includes("legacyFetch(")) {
      // The extra file/line arguments turn this into an inline comment
      warn(`Found deprecated \`legacyFetch\` usage in ${file}`, file, 1)
    }
  }
})

This snippet scans each newly created source file, checks the added lines for the deprecated call (here the placeholder legacyFetch), and leaves an inline comment like this:

[Screenshot: Danger inline comment on the flagged file]

Plugin System

Danger has a robust plugin system that lets you easily extend the tool with custom rules and behavior. Plugins can be shared across projects for consistent, reusable checks.

For example, the danger-plugin-todos plugin will check for unresolved "TODO" comments and fail the build if any are found:

// dangerfile.js
import { schedule } from "danger"
import todos from "danger-plugin-todos"

schedule(todos())

There are plugins available for all sorts of common checks like ensuring proper code formatting, validating JSON files, checking for sensitive information in commits, and more. You can browse the full list of community plugins in the Danger Plugin Index.

Shared Dangerfiles

For larger organizations, it can be useful to share a set of common rules across multiple projects. This ensures that all repos are following the same conventions and reduces duplication of effort.

One way to achieve this is by publishing your dangerfile.js as an npm package that can be imported into each repo’s Dangerfile:

// dangerfile.js
import sharedDangerfile from "company-dangerfile"

sharedDangerfile()

// add repo-specific rules here

This approach lets you maintain a central set of standards while still giving individual teams the flexibility to add their own repo-specific rules.
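
As a rough sketch of what the shared package itself might export (the package name company-dangerfile and the specific checks here are placeholders), the module can simply be a function that runs your organization-wide rules. Like community Danger plugins, it relies on the danger and warn globals that Danger makes available while evaluating the importing Dangerfile:

// index.js of the hypothetical company-dangerfile package
export default function sharedDangerfile() {
  // Org-wide rule: every PR needs a reasonably detailed description
  if (!danger.github.pr.body || danger.github.pr.body.length < 10) {
    warn("Please provide a more detailed PR description.")
  }

  // Org-wide rule: every PR should include a changelog entry
  if (!danger.git.modified_files.includes("CHANGELOG.md")) {
    warn("Please add a changelog entry for your changes.")
  }
}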

Measuring Impact

Like any tool, Danger is only valuable if it’s actively improving your workflow. It’s important to track key metrics to ensure that Danger is actually moving the needle on your review process.

Some data points you might want to track include:

  • Review turnaround time: Are PRs getting reviewed and merged faster with Danger in place?
  • Defect rate: Has the number of bugs making it to production decreased since enabling Danger?
  • Reviewer workload: Are engineers spending less time on rote review tasks?
  • PR size: Is Danger helping encourage smaller, more focused PRs?

Collecting this data will give you a concrete sense of the impact Danger is having and help justify the continued investment in the tool.
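
For the first of those metrics, a rough proxy for turnaround is the time from when a PR is opened to when it is merged, which you can pull straight from the GitHub API. Here is a minimal sketch using @octokit/rest; the your-org and your-repo values are placeholders, and a GITHUB_TOKEN environment variable is assumed:

// measure-turnaround.js: a rough sketch, not production code
import { Octokit } from "@octokit/rest"

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN })

async function averageTurnaroundHours(owner, repo) {
  // Grab the most recently closed PRs and keep only the merged ones
  const { data: prs } = await octokit.pulls.list({
    owner,
    repo,
    state: "closed",
    per_page: 50,
  })
  const merged = prs.filter(pr => pr.merged_at)

  // Average hours between opening and merging each PR
  const totalHours = merged.reduce((sum, pr) => {
    const openedAt = new Date(pr.created_at).getTime()
    const mergedAt = new Date(pr.merged_at).getTime()
    return sum + (mergedAt - openedAt) / (1000 * 60 * 60)
  }, 0)

  return merged.length ? totalHours / merged.length : 0
}

averageTurnaroundHours("your-org", "your-repo").then(hours =>
  console.log(`Average PR turnaround: ${hours.toFixed(1)} hours`)
)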

Anecdotally, teams that have adopted Danger have seen significant improvements in their review process:

"Danger has completely transformed how we approach code reviews. Reviews are faster, more focused, and more consistent across the board. I don‘t know how we ever managed without it!"

"We‘ve seen a noticeable uptick in the quality of PRs since introducing Danger. Engineers are putting more thought into their changes and reviews are catching far fewer trivial issues."

"Automating the tedious parts of reviews has been a huge morale boost for the team. Reviewers have more bandwidth to focus on high-level feedback and mentoring instead of nitpicking style issues."

If you’re not already using Danger, it’s worth giving it a try to see if you can reap similar benefits.

Getting Started

Ready to give Danger a spin? Here are a few tips to help you get the most out of the tool:

  1. Start small: Begin with just a handful of high-impact rules and expand over time as you get more comfortable with the tool.

  2. Get buy-in: Make sure your team understands the purpose of Danger and has input into the rules being implemented. Automated checks should make everyone’s life easier, not add unnecessary process.

  3. Iterate: Don’t worry about crafting the perfect set of rules right out of the gate. Your Dangerfile will evolve over time as your team’s needs change.

  4. Keep an eye on signal-to-noise: If Danger is flagging too many trivial issues, engineers will start to tune out its comments. Strike a balance between catching real problems and not being overly noisy.

  5. Celebrate wins: Make a point to highlight instances where Danger has caught a bug or helped improve the quality of a PR. This will help reinforce the value of the tool and keep engineers bought in.

Ultimately, Danger is just one tool in your code review toolbox. It’s not a silver bullet, but when used effectively it can go a long way towards streamlining your review process and freeing up engineers to focus on what really matters: shipping high-quality code.

So what are you waiting for? Go forth and automate! Your team (and your sanity) will thank you.
