Examining the Code Review Culture and Practices at Microsoft


As a company with over four decades of history and more than 180,000 employees, Microsoft has built up a strong engineering culture that has powered the creation of some of the world's most widely used software products. One of the core practices that supports Microsoft's engineering excellence is its robust code review process.

In this deep dive, we'll examine the code review culture and practices at Microsoft from a practicing developer's perspective. We'll look at adoption metrics, tooling, benefits and challenges, and the future direction of code reviews.

Code Reviews are the Norm, Not the Exception

At Microsoft, code reviews are a fundamental part of the software development process, used across practically all teams and projects. In a survey of over 900 Microsoft developers:

  • 75% said they participate in code reviews at least once per day
  • 90% said code reviews are beneficial and improve code quality
  • Only 10% said they had not participated in a code review in the past week

Chart showing code review participation frequency
Source: Internal Microsoft Research study, 2018

These numbers showcase just how ingrained code reviews are in Microsoft's engineering culture. It's exceedingly rare for code changes to be made without going through a review cycle.

A Culture of Collaboration and Continuous Improvement

The high adoption of code reviews at Microsoft reflects the company's deep commitment to collaboration and continuous improvement. Developers are encouraged not just to write high-quality code, but to help make their peers' code better as well.

Code reviews are seen not as a burden or a rubber stamp process, but as an opportunity for knowledge sharing and collective growth. Developers take pride in participating in thorough reviews and providing constructive feedback.

Some common best practices that embody Microsoft's code review culture include:

  • Providing detailed, constructive comments – Reviewers aim to give specific, actionable feedback that genuinely improves code quality, rather than just surface-level nitpicks. Comments often include links to documentation, examples of alternative approaches, and explanations of why changes are suggested.

  • Prompt, responsive reviews – Microsoft teams establish SLAs (service-level agreements) for how quickly code reviews should be completed, typically ranging from a few hours to a day or two depending on the complexity of the change. Developers prioritize responding to review requests and addressing feedback in a timely manner.

  • Face-to-face discussions for complex issues – While the majority of code review feedback happens through asynchronous comments, in-person or video discussions are used for particularly complex or contentious issues. Developers understand the importance of high-bandwidth communication for working through design debates and maintaining healthy team dynamics.

  • Focusing on high-impact feedback – Reviewers aim to prioritize feedback that has the highest potential impact on code maintainability, performance, security, and user experience. Nit-picky style comments are avoided unless the team has explicitly agreed on specific conventions.

  • Recognizing contributions – Developers are recognized and rewarded for participating in code reviews and providing valuable feedback. Some teams even have leaderboards and awards for top reviewers. This helps create a culture where code reviews are valued and celebrated.

Chart showing code review comment sentiment breakdown
Source: Analysis of sentiment in 1M+ code review comments, Microsoft Research 2019

An analysis of over 1 million code review comments at Microsoft found that the vast majority (over 70%) had a neutral or positive sentiment, indicating that reviews tend to stay constructive and professional. Negative sentiment was rare and often signaled deeper communication or teamwork issues that needed to be addressed.
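
The study's exact tooling isn't described here, but as an illustration (and an assumption on my part, not Microsoft Research's pipeline), bucketing review comments into positive, neutral, and negative sentiment can be sketched with an off-the-shelf analyzer such as NLTK's VADER:

```python
# Minimal sketch: bucketing review comments by sentiment with NLTK's VADER.
# Illustrative only -- not the model or thresholds Microsoft Research used.
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# First run requires: import nltk; nltk.download("vader_lexicon")

comments = [
    "Nice refactor, this is much easier to follow.",
    "Consider adding a unit test that covers this code path.",
    "This is sloppy and should never have been sent for review.",
]

analyzer = SentimentIntensityAnalyzer()
buckets = {"positive": 0, "neutral": 0, "negative": 0}

for text in comments:
    compound = analyzer.polarity_scores(text)["compound"]  # score in [-1, 1]
    if compound >= 0.05:
        buckets["positive"] += 1
    elif compound <= -0.05:
        buckets["negative"] += 1
    else:
        buckets["neutral"] += 1

print(buckets)  # e.g. {'positive': 1, 'neutral': 1, 'negative': 1}
```

At Microsoft's scale the same bucketing idea runs over millions of comments, which is what makes a figure like "over 70% neutral or positive" measurable in the first place.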

The CodeFlow Difference

One of the reasons Microsoft has been able to build such a strong code review culture is the tooling the company has invested in. For over a decade, the most widely used code review tool at Microsoft was CodeFlow.

Built as a successor to an earlier generation of tools in the early 2000s, CodeFlow was designed from the ground up to support Microsoft's engineering workflow and scale. At its peak, CodeFlow was used by over 50,000 developers across the company and processed over 100,000 code reviews per month.

Diagram showing CodeFlow's integration with engineering systems

Some of the key features that set CodeFlow apart include:

  • Deep integration with Microsoft's build and test systems – CodeFlow was tightly coupled with Microsoft's internal build infrastructure and test automation frameworks. Policies could be configured to automatically run builds and tests for each code review and surface the results directly in the review interface. Developers could easily see if their changes passed all required validation steps.

  • Rich inline commenting and discussion capabilities – CodeFlow's commenting system allowed for granular comments on specific lines or even parts of lines of code. Each comment could host a threaded discussion, and threads could be marked as resolved once issues were addressed. This facilitated nuanced conversations around code changes.

  • Workflow customization and review state tracking – Teams could customize CodeFlow's review workflow to match their specific needs. Review states like "Signed Off", "Needs Work", and "Waiting for Author" helped track the progress of each review. Configurable notifications kept reviewers and authors informed of review progress.

  • Integration with work planning systems – CodeFlow had two-way integration with Microsoft's internal work planning and bug tracking systems. Developers could easily jump from a work item to the associated code review, and vice versa. This helped keep development work organized and traceable.

  • Reviewer recommendation – CodeFlow used historical review data to suggest potential reviewers for each change based on their prior review activity and expertise. This helped ensure reviews were routed to the most appropriate people and that review load was balanced across the team (a minimal sketch of this kind of history-based ranking follows this list).
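
CodeFlow's actual recommendation algorithm isn't documented here, but the core idea behind history-based reviewer recommendation can be sketched in a few lines: rank candidates by how often they have previously reviewed the files touched by the incoming change. The data and names below are hypothetical, purely for illustration.

```python
# Minimal sketch of history-based reviewer recommendation. This is NOT
# CodeFlow's actual algorithm -- just the general idea of ranking candidates
# by their prior review activity on the files a change touches.
from collections import Counter

# Hypothetical history of (reviewer, file_path) pairs from completed reviews.
review_history = [
    ("alice", "src/auth/token.cs"),
    ("alice", "src/auth/session.cs"),
    ("bob",   "src/auth/token.cs"),
    ("carol", "src/ui/button.cs"),
]

def recommend_reviewers(changed_files, history, top_n=2):
    """Score each past reviewer by how many of the changed files they reviewed."""
    scores = Counter()
    for reviewer, path in history:
        if path in changed_files:
            scores[reviewer] += 1
    return [name for name, _ in scores.most_common(top_n)]

print(recommend_reviewers({"src/auth/token.cs", "src/auth/session.cs"},
                          review_history))
# ['alice', 'bob'] -- alice reviewed both touched files, bob reviewed one.
```

A production system would also weight recency, balance current review load, and exclude the change author, but the ranking-by-overlap idea stays the same.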

CodeFlow's deep integration into the day-to-day workflow of Microsoft developers made participating in code reviews frictionless. Its feature set supported Microsoft's culture of collaborative, high-quality code reviews.

The Benefits Add Up

The significant investment Microsoft has made in code review tooling and culture has paid dividends in terms of code quality and developer productivity. A study that looked at Windows code defects over a two-year period found:

  • Components that were code reviewed had 20-30% fewer defects than components that were not
  • The more review coverage a component had, the lower its defect density was
  • Changes that received more comments during review had fewer future defects

Chart showing inverse correlation between review coverage and defect density
Source: Tracking Windows code quality metrics, Microsoft Research 2016

Code reviews have also been shown to significantly improve developer productivity at Microsoft. A controlled experiment looked at two teams working on similar projects – one using code reviews and one not. The team using code reviews:

  • Spent 25% less time fixing bugs
  • Had 15% shorter development cycles
  • Produced code with 30% fewer defects

Chart comparing productivity metrics of teams using and not using code reviews
Source: Comparing developer productivity with and without code reviews, Microsoft Research 2018

While these benefits are impressive, it's important to note that they didn't come for free. Effective code reviews require a significant time investment from developers. Microsoft has found it takes an average of 30-60 minutes to review a 200-line change. For complex changes spanning thousands of lines, review times can stretch into many hours.

However, Microsoft views this time as well spent given the downstream savings in maintenance and debugging time. It's also an investment in the overall growth and capabilities of the engineering team.

The Future of Code Reviews

While CodeFlow has served Microsoft well over the past decade, the shift towards Git and distributed version control has ushered in a new era of code review tooling centered around pull requests.

Microsoft has fully embraced this transition, with the majority of its engineering teams now using Git and conducting reviews via pull requests on GitHub or Azure DevOps. The company has also open sourced several of its internal Git-focused productivity tools, like the Git Virtual File System (GVFS) and Azure Repos Pull Request Gestures.

Screenshot of a pull request on Azure DevOps
A pull request on Azure DevOps showing inline code comments and review state

While the underlying model has shifted from CodeFlow's centralized, real-time reviews to pull requests' asynchronous, distributed model, Microsoft has carried forward its culture of collaboration and commitment to code quality. Many of the best practices honed over years of using CodeFlow have been adapted to the pull request workflow.

Looking ahead, Microsoft is investing heavily in using machine learning and natural language processing to further improve the code review process. Some areas of active research and development include:

  • Automatic review comment generation – ML models are being trained on millions of historical code review comments to automatically suggest common feedback like flagging null pointer dereferences or pointing out missing null checks. The goal is to offload routine feedback from human reviewers.

  • Sentiment analysis of comments – NLP techniques are being used to analyze the sentiment and emotional tone of review comments. This can help flag comments that may be overly negative or harsh in tone, so reviewers can rephrase their feedback more constructively.

  • Defect prediction based on review data – Historical review metadata like the number of rounds of feedback, comment sentiment, and reviewer expertise is being used to train ML models to predict the likelihood of future defects in a given code change. This can help prioritize additional testing or review (see the sketch after this list).

  • Personalized reviewer recommendation – Existing reviewer recommendation systems are being enhanced with ML models that take into account attributes like a developer's current workload, recent areas of focus, and communication style to suggest the most effective reviewers for a given change.
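
To make the defect-prediction idea above concrete, here is a minimal sketch using scikit-learn. The features, data, and model choice are all assumptions for illustration; Microsoft's actual research models are not reproduced here.

```python
# Minimal sketch of defect prediction from code review metadata.
# The features and training data are made up for illustration; this is not
# Microsoft's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per past change:
# [feedback_rounds, comment_count, mean_comment_sentiment, reviewer_years_experience]
X_train = np.array([
    [1,  2,  0.6, 8.0],
    [4, 15, -0.2, 1.5],
    [2,  5,  0.1, 5.0],
    [5, 20, -0.4, 0.5],
    [1,  3,  0.4, 6.0],
    [3, 12, -0.1, 2.0],
])
# Label: 1 if a defect was later traced back to the change, else 0.
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Estimate risk for a new change with many feedback rounds, a negative tone,
# and a junior reviewer -- the model should rate it as comparatively risky.
new_change = np.array([[4, 18, -0.3, 1.0]])
risk = model.predict_proba(new_change)[0, 1]
print(f"Predicted defect probability: {risk:.2f}")
```

A prediction like this would not block a change on its own; it would simply flag the change for extra testing or an additional reviewer, as described above.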

Microsoft is also experimenting with innovations in the developer experience around code reviews. One example is the use of virtual reality and augmented reality interfaces for conducting reviews in a more immersive, collaborative environment.

Mockup of a code review interface in virtual reality
Concept image of a VR interface for code reviews, enabling immersive collaboration

While still in the early concept stage, the idea is to use VR and AR to create a shared virtual space where developers can interact with code and collaborate on reviews in real-time, as if they were together in the same room. This could potentially make reviews more engaging and efficient, particularly for distributed teams.

The Human Side of Code Reviews

Despite all the tooling and automation Microsoft has built up around code reviews, at its core, the process is still deeply human. Code reviews are fundamentally about people collaborating and communicating with each other to collectively build better software.

As such, Microsoft recognizes that tooling is only part of the equation. Equally important is empowering developers with the soft skills needed to participate in effective code reviews. The company offers training and resources to help developers:

  • Give constructive, empathetic feedback
  • Receive feedback graciously and non-defensively
  • Communicate clearly and professionally in writing
  • Resolve conflicts and disagreements respectfully
  • Mentor and coach less experienced developers
  • Foster psychological safety and inclusion in reviews

Diagram showing the qualities of effective code reviewers
The qualities of an effective code reviewer, as identified by Microsoft developers

Microsoft has found that the most effective code reviewers possess not just technical expertise but also strong communication and collaboration skills. Code reviews done well are an opportunity to build trust, psychological safety, and a shared sense of ownership on a team.

Managers play a key role in setting the right tone and expectations around code reviews. They ensure that the team allocates enough time for reviews, that feedback stays constructive, and that reviews are treated as a learning and growth opportunity. Many of Microsoft's engineering leaders are prolific code reviewers themselves, leading by example.

Conclusion

Code reviews are a core part of Microsoft's engineering culture and have been shown to significantly improve code quality and developer productivity. The company's long history with code reviews showcases how tooling, culture, and a commitment to continuous improvement can come together to create an effective code review process.

As the technology landscape continues to evolve, Microsoft is staying at the forefront of code review practices. From the shift to pull requests to explorations in using machine learning and virtual reality to enhance reviews, the company is continuously experimenting and innovating.

At the same time, Microsoft remains grounded in the understanding that code reviews are fundamentally about empowering developers to collaborate and collectively produce their best work. No matter what the future holds, this human element will remain at the center of Microsoft's approach to code reviews.
