Abstract
We study judicial in-group bias in Indian criminal courts using newly collected data on over 5 million criminal case records from 2010–2018. After classifying gender and religious identity with a neural network, we exploit quasi-random assignment of cases to judges to determine whether judges favor defendants with similar identities to themselves. In the aggregate, we estimate tight zero effects of in-group bias based on shared gender or religion, including in settings where identity may be especially salient, such as when the victim and defendant have discordant identities. Proxying caste similarity with shared last names, we find a degree of in-group bias, but only among people with rare names; its aggregate impact remains small.
© 2025 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology