RDEL #78: Does removing author identity in code reviews reduce bias?
A look at findings from researchers on whether removing author identity from code reviews reduces bias or impacts overall efficiency.
Welcome back to Research-Driven Engineering Leadership. Each week, we pose an interesting topic in engineering leadership and apply the latest research in the field to drive to an answer.
Code reviews are an extremely important part of the software development process, but they are not immune to the biases that affect human judgment. Bias can impact every aspect of our work, and in the context of code reviews, can mean that evaluations are swayed by who wrote the code rather than the quality of the code itself. This week we ask: Does removing author identity in code reviews reduce bias?
The context
Bias in code reviews can subtly influence decisions, often without reviewers realizing it. Factors such as the author’s seniority, gender, team affiliation, or prior reputation can sway a reviewer’s perspective, undermining the objectivity of their feedback.
Anonymization strategies have shown success in reducing biases in environments where decisions are based solely on merit, such as peer-reviewed research and competitive selection processes. However, the collaborative and context-rich nature of software development raises unique challenges: how does hiding author identity affect team dynamics, code quality, and review efficiency?
The research
Researchers at Google conducted a large-scale field experiment involving 5,217 code reviews by 300 software engineers. They implemented a browser extension that anonymized author information during code reviews and compared outcomes with traditional, non-anonymous reviews.
After analyzing the data, here were the key findings:
In most cases (77% of the time), reviewers knew the author’s identity even when author information was anonymized.
Note: this figure is for “non-readability” reviews. In Google land, a readability review evaluates language best practices, which requires little context about the author or the change. The data showed that in low-context settings like these, anonymization was more successful.
Anonymization had limited impact on speed and quality.
Review velocity remained largely unchanged, with anonymous and non-anonymous reviews showing similar active review times.
Code quality, measured through subjective assessments and objective metrics like rollback rates, showed no significant difference between anonymous and non-anonymous reviews.
The benefit of anonymous reviews was a small increase in perceived fairness.
Reviewers perceived anonymous reviews as slightly fairer, though authors did not report noticeable differences in fairness.
The cost of anonymous reviews was slower high-bandwidth communication.
Anonymity sometimes hindered quick, high-bandwidth communication, creating minor inefficiencies.
Finally, when asked which author information provides important context for a reviewer, the overwhelming top responses were time zone and team, not tenure or role.

The application
Studies like this are valuable not just for identifying what works, but also for highlighting what may not be as effective as anticipated. The findings show that while anonymous code reviews can offer some benefits, particularly in specific contexts, they are not a one-size-fits-all solution. There remain areas where anonymous reviews can be useful, such as in cross-team evaluations or situations where reviewer-author dynamics may influence feedback.
However, the broader lesson here is that building dependability and psychological safety within engineering teams may be a more impactful lever than anonymizing code reviews alone. Bias is impossible to strip away entirely, but in teams with a strong culture of collaboration, its impact can be reduced. That culture comes from an environment where feedback is given constructively and team members feel safe to share their ideas without fear of bias or judgment. This foundation supports not just fair code reviews, but a healthier, more productive engineering organization overall.
—
Happy Research Tuesday,
Lizzie