Abstract: This paper provides context for state and local officials considering the development, procurement, implementation, and use of risk assessment (RA) tools. It begins with brief case studies of four states that adopted (or attempted to adopt) such tools early on and describes their experiences. It then draws lessons from these case studies and suggests questions that procurement officials should ask of themselves, of the colleagues who call for the acquisition and implementation of tools, and of the developers who create them. The paper concludes by examining existing frameworks for technological and algorithmic fairness and offers a framework of four questions that government procurers should ask when adopting RA tools. Drawing on the experiences of the states studied, that framework offers a way to think about accuracy (the RA tool’s ability to accurately predict recidivism), fairness (the extent to which the tool treats all defendants fairly, without exhibiting racial bias or discrimination), interpretability (the extent to which the tool can be interpreted by criminal justice officials and stakeholders, including judges, lawyers, and defendants), and operability (the extent to which the tool can be administered by officers within police, pretrial services, and corrections).