Abstract: Human decision-makers frequently override the recommendations generated by predictive algorithms, but it is unclear whether these discretionary overrides add valuable private information or reintroduce the human biases and mistakes that motivated the adoption of the algorithms in the first place. We develop new quasi-experimental tools to measure the impact of human discretion over an algorithm, even when the outcome of interest is only selectively observed, in the context of bail decisions. We find that 90% of the judges in our setting generally underperform the algorithm when making a discretionary override, with most judges making override decisions that are no better than random. Yet the remaining 10% of judges outperform the algorithm in terms of both accuracy and fairness when making a discretionary override. We provide suggestive evidence on the behavior underlying these differences in judge performance, showing that the high-performing judges are more likely to use relevant private information and less likely to overreact to highly salient events than the low-performing judges.