Methodology for Court and Prosecutor Data Overview

Measures for Justice has developed a rigorous methodology to standardize court and prosecutor data so that users can make apples-to-apples comparisons across jurisdictions in the United States. The development of the Performance Measures for the National Data Portal and the Case Flow Measures for the Commons Data Platform is an iterative process that involves six general stages: (1) conceptual development; (2) data collection and storage; (3) data management; (4) data quality control; (5) measure calculation; and (6) measure visualization. We seek input from stakeholders at every stage of the process, which sometimes leads to changes in how the measures are calculated. Our full methodology is available for download at the link below. It is updated periodically as our Measures expand and additional data become available.

Source Data

Measures for Justice (MFJ) works with adult criminal case data extracted from administrative case management systems (CMS); we do not measure how cases in juvenile and civil courts fare. The data are originally collected by the courts and prosecutor offices for the purpose of tracking the processing of individual cases and usually involve manual data entry into the CMS. As such, they may be subject to errors at any stage of the collection and recording process. MFJ excludes unreliable values and data elements (for instance, a case filing date that is in the future, or a date field missing information in 80% of cases) from all analyses.
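
To illustrate the kind of validity checks this involves, the sketch below drops impossible values and overly incomplete fields from a hypothetical charge-level table. The column names and the pandas workflow are illustrative assumptions, not MFJ's actual pipeline.

    import pandas as pd

    def drop_unreliable(df: pd.DataFrame, extraction_date: pd.Timestamp) -> pd.DataFrame:
        """Minimal sketch: remove unreliable values and data elements.
        Column names (e.g., filing_date) are illustrative, not MFJ's schema."""
        df = df.copy()
        # Blank out impossible values, e.g., a case filing date in the future.
        future = df["filing_date"] > extraction_date
        df.loc[future, "filing_date"] = pd.NaT
        # Drop data elements (columns) too incomplete to be reliable,
        # e.g., a date field missing information in 80% of cases.
        too_sparse = [col for col in df.columns if df[col].isna().mean() > 0.80]
        return df.drop(columns=too_sparse)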

For the National Data Portal, MFJ may occasionally receive data with overlapping data elements from multiple agencies in the same jurisdiction. In these instances, MFJ triangulates the data to identify the most reliable source, usually the source most proximate to the data, i.e., the agency charged with collecting that piece of information earliest in the process. Since criminal justice agencies often feed their own databases from these most proximate sources, this method is considered the most reliable. When more than one source is used to populate different measures, the National Data Portal allows users to explore the complete list of available measures by source.

Data Processing

Statutory laws, agency practices, terminology, and case management systems vary across and within states. MFJ has developed a Standard Operating Procedure (SOP) to map all data to a uniform coding schema that allows for apples-to-apples comparisons within and across jurisdictions. This data processing SOP is used to code data for both the National Data Portal and Commons. This includes, but is not limited to:


Case is defined as all charges associated with the same individual defendant that were filed in court (or referred for prosecution, in the case of declinations) on the same date. MFJ assumes that when a prosecutor files multiple charges together, even when they stem from separate incidents, they intend to resolve these charges simultaneously. This may differ from how each agency defines a case.
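
As a rough illustration of this grouping rule (not MFJ's production code), the snippet below bundles charge-level records into cases keyed by defendant and filing date; the field names and values are hypothetical.

    import pandas as pd

    # Hypothetical charge-level records.
    charges = pd.DataFrame({
        "defendant_id": [101, 101, 101, 202],
        "filed_date":   ["2021-03-01", "2021-03-01", "2021-06-15", "2021-03-01"],
        "charge":       ["burglary", "theft", "assault", "trespass"],
    })

    # One case = all charges for the same defendant filed on the same date.
    cases = (charges
             .groupby(["defendant_id", "filed_date"])["charge"]
             .apply(list)
             .reset_index(name="charges"))

    # Defendant 101's burglary and theft filed on 2021-03-01 form a single case,
    # even if they stem from separate incidents; the assault filed later is its own case.
    print(cases)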


Case Seriousness is defined by the most serious charge, according to the state's offense severity classification, that was present at each stage of charging: referral, filing, and conviction.
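
The sketch below shows how the "most serious charge" rule could be applied at a single stage; the severity ranking is a made-up example, since the real ranking comes from each state's own offense classification.

    # Hypothetical severity ranking (lower number = more serious).
    SEVERITY_RANK = {"felony A": 0, "felony B": 1, "misdemeanor A": 2, "misdemeanor B": 3}

    def case_seriousness(charge_classes: list[str]) -> str:
        """Return the most serious severity class among the charges present
        at a given stage (referral, filing, or conviction)."""
        return min(charge_classes, key=SEVERITY_RANK.__getitem__)

    # A case filed with a misdemeanor A and a felony B is a felony B case at filing.
    print(case_seriousness(["misdemeanor A", "felony B"]))  # -> felony B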


Charge Descriptions are standardized using a crosswalk that ensures statutory definitions across states match a uniform code. The crosswalk was originally developed by the Bureau of Justice Statistics and has since been updated by MFJ.
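
Conceptually, the crosswalk is a lookup from each state statute to a uniform offense code, as in the hypothetical join below (the state labels, statute numbers, and codes are invented for illustration).

    import pandas as pd

    # Hypothetical crosswalk: state statute -> uniform offense code.
    crosswalk = pd.DataFrame({
        "state":        ["ST1", "ST2"],
        "statute":      ["999.01", "123.45"],
        "uniform_code": ["BURGLARY", "BURGLARY"],
    })

    charges = pd.DataFrame({"state": ["ST1", "ST2"], "statute": ["999.01", "123.45"]})

    # Map each charged statute to its standardized description.
    standardized = charges.merge(crosswalk, on=["state", "statute"], how="left")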


Pretrial Release Decision represents the court's initial ruling regarding whether or not to release the defendant pending case disposition, and whether the release should be subject to monetary or nonmonetary conditions.


Case Disposition indicates the type of action that removed the case from the prosecutor's or the court's docket, excluding any actions stemming from appeals or violations of probation. Case disposition categories are defined as follows:


Prosecution Rejected

Case rejections are defined differently depending on whether the criminal procedure in the jurisdiction requires law enforcement to refer the case to the prosecutor’s office for screening or whether law enforcement can file the charges directly in court without prosecutorial review:

 

Prosecutorial Screening Site

A case is defined as rejected when the prosecutor declines to file all of the charges in court.

Police Direct Filing Site

A case is defined as rejected when the prosecutor withdraws all charges filed by the police against the defendant AND the raw dataset(s) clearly indicates that the charges were withdrawn, dropped, dismissed by the prosecutor, or nolle prosequi/nolle prossed by the prosecutor. If the data do not contain such information, these charges will be classified as dismissals and tracking prosecutor withdrawals will not be possible.

No or Unknown Disposition
The case was still pending at the time of data extraction or, if it had already been closed, no disposition was recorded in the raw data.

Dismissed
Case dismissals are also defined differently depending on whether the jurisdiction allows for prosecutorial screening or requires that the police file the case directly in court:

Prosecutorial Screening Site

A case is defined as dismissed when all of the charges that were filed in court are dismissed by the court or withdrawn by the prosecutor at any point in the case.

Police Direct Filing Site

A case is defined as dismissed when all of the charges that were filed in court are dismissed by the court or withdrawn by the prosecutor after XX period of time.

Deferred or Diverted
The defendant entered a pretrial diversion or deferred prosecution program for at least one of the charges, irrespective of whether the diversion agreement took place before or after the case was filed in court, or before or after the defendant’s guilty plea.

Not Guilty at Trial
The defendant was found not guilty of all charges in a jury or bench trial.

Guilty at Trial
The defendant was found guilty of at least one charge in a jury or bench trial.

Guilty Plea
The defendant pleaded guilty to at least one charge. Guilty pleas that are required before entering a diversion program are classified as diversions.

Guilty–Unknown Method
The defendant was found guilty of at least one charge but the raw data did not indicate by which method (i.e., trial vs. plea).

Transferred
The case was transferred to another jurisdiction. This includes extraditions and changes of venue.

Other
Includes other dispositions such as bond estreatment and bond forfeiture.

 

Time to Case Closure is calculated in different ways, depending on the scenario (a sketch of the underlying date arithmetic follows this list):

Prosecutorial Screening Site:

Cases Rejected for Prosecution

Number of days between case reception by the prosecutor’s office and their decision not to file the case in court.

Cases Filed in Court

Number of days between case filing and disposition/sentencing, and number of days between arraignment and disposition/sentencing.

Cases Resulting in Pre-Filing Diversion

Number of days between case referral and the defendant's acceptance into a pretrial diversion program.

Cases Resulting in Post-Filing Diversion

Number of days between case filing and the defendant's acceptance into a pretrial diversion program, and/or number of days between arraignment and the defendant's acceptance into a pretrial diversion program.

Police Direct Filing Site:

Cases Rejected for Prosecution

Number of days between case filing and when the prosecutor withdraws the charges.

Cases Filed in Court

Number of days between case filing and disposition/sentencing, and number of days between arraignment and disposition/sentencing.

Cases Resulting in Diversion

Number of days between case filing and the defendant's acceptance into a pretrial diversion program, and number of days between arraignment and the defendant's acceptance into a pretrial diversion program.
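
All of these intervals are simple day counts between two recorded case events, as in the sketch below; the dates are invented and the field names are not MFJ's schema.

    from datetime import date

    def days_between(start: date, end: date) -> int:
        """Number of days between two recorded case events."""
        return (end - start).days

    # e.g., a screening-site case filed 2021-03-01 and disposed 2021-07-19:
    print(days_between(date(2021, 3, 1), date(2021, 7, 19)))  # 140 days, filing to disposition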

Attorney Type reports the last defense attorney of record and includes the following categories: self-represented, private attorney, public defender, court-appointed private attorney, and other.


Top Sentence identifies the type of punishment imposed by the court that was the most restrictive of personal liberties according to the following hierarchy:

  1. Death penalty
  2. Life in prison
  3. State prison
  4. Jail or county detention facility
  5. Lifetime supervision
  6. Extended supervision/split sentence with confinement portion in prison
  7. Extended supervision/split sentence with confinement portion in jail
  8. Other confinement (e.g., mental health institution, home confinement)
  9. Probation
  10. Fine
  11. Restitution
  12. Other (e.g., community service)
  13. Time served sentence with no additional confinement time, supervision, or fines
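
The sketch below applies that hierarchy to a case's sentences; the category labels are abbreviated for readability and the function is illustrative only.

    # The hierarchy above, from most to least restrictive (labels abbreviated).
    SENTENCE_HIERARCHY = [
        "death penalty", "life in prison", "state prison", "jail",
        "lifetime supervision", "split sentence (prison)", "split sentence (jail)",
        "other confinement", "probation", "fine", "restitution", "other", "time served",
    ]
    RANK = {sentence: i for i, sentence in enumerate(SENTENCE_HIERARCHY)}

    def top_sentence(sentences: list[str]) -> str:
        """Return the punishment most restrictive of personal liberties."""
        return min(sentences, key=RANK.__getitem__)

    print(top_sentence(["probation", "jail", "fine"]))  # -> jail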

Data Quality Control

MFJ goes to great lengths to ensure that the published data are as accurate as possible and that the data management process does not become a source of error. MFJ's data quality control process is similar for Commons and the National Data Portal, with some differences noted below:

  1. The quality and completeness of the raw data delivered by the sources is assessed.
  2. The data are cleaned and mapped to a standardized codebook that allows for “apples-to-apples” comparisons, and invalid values and unreliable data elements are removed.
  3. The clean case-level dataset goes through several rounds of internal audits.

National Data Portal

The county-level data in the user interface are sent out to an independent external auditor, usually an expert in the state’s criminal justice data, to assess the patterns and trends in the data for face validity. The source agency and local stakeholders are also asked to review the data.

Commons

The county-level data in the user interface are validated by the source agency and their Community Advisory Board.

Measure Calculation

Measures for the National Data Portal and Commons are calculated differently because the platforms have different goals and audiences. The National Data Portal is intended to inform a primary audience of policymakers, using a closed cohort of cases (e.g., cases filed 2017-2021) to populate the expert-curated Performance Measures. By contrast, local communities are the primary audience for Commons, which presents a set of community-driven measures that track key decision points as cases flow through the process.

National Data Portal

All Performance Measures are calculated at the county level because that is where charging, disposition, and sentencing decisions are most commonly made for cases going through the state court system. They are estimated at the annual level as well as using multiple years of data (five years for most measures, and two years for those that require controlling for prior convictions). The operational definitions, case exclusions, calculations, and sources are provided in all publications of the data.

Commons

All Case Flow Measures in Commons represent a key decision made by the prosecutor's office or the courts on a monthly, quarterly, or yearly rolling basis. Since we are measuring how all cases are processed through the system, we do not suppress any data at the measure level unless they are unreliable. Whenever a measure is affected by high missingness, has too few cases to calculate disparities, or shows missingness bias, a warning is included in the visualization of the data.

It is important to note that all of MFJ's Measures are descriptive in nature: users can observe patterns of what is happening, but the tools do not test hypotheses about the reasons behind those patterns. When our measures show differences between states, counties, or groups (e.g., racial groups), we make no claim about the cause of these differences. The measures are intended to be the starting point of a conversation by helping users ask the right questions when assessing their local systems and by inviting further examination.

Finally, each measure sheds light on a corner of the criminal justice process, but to evaluate the health of the system in a more comprehensive way, all available measures should be assessed together and interpreted with county context in mind. To that effect, the National Data Portal offers information both about the legal context for the measure within each state, and about the demographic context of each county. Commons also offers information about the county’s demographic context, in addition to information on local criminal justice resources.

Missingness and Missingness Bias

Missingness

Missing data or missing values are those that are not recorded for a given data element in the observation of interest. For instance, suppose a dataset contains 100 records or observations (rows) and two data elements (columns): record ID and disposition. If the disposition field is empty for 15 of those records, the missingness rate is 15 percent. Excessive missingness, defined as more than 10 percent of values missing for a given field, is problematic because it may affect the conclusions that can be drawn from the data, especially if the values are not missing at random.
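
In code, the missingness rate for a field is simply the share of records with no value, as in this reproduction of the example above (pandas is used purely for illustration).

    import pandas as pd

    # 100 records; 15 have no disposition recorded.
    df = pd.DataFrame({
        "record_id":   range(100),
        "disposition": ["guilty plea"] * 85 + [None] * 15,
    })

    missing_rate = df["disposition"].isna().mean()
    print(f"{missing_rate:.0%}")  # 15% -> above the 10% "excessive missingness" threshold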

To match the type of measure and the intended goal for the tool, MFJ handles missingness slightly differently for the National Data Portal and Commons.

National Data Portal

Performance Measures with more than 10 percent of cases missing or unknown values in the numerator or in the pool of cases used to calculate the median are suppressed from publication. In addition, performance measures with more than five percent and up to 10 percent of cases with missing values display a “Missing > 5%” warning.

Commons

Users will see warnings whenever there are more than five percent (“Missing > 5%”) or more than 10 percent (“Missing > 10%”) of cases with missing or unknown values in the numerator or in the pool of cases used to calculate the median and the average. It is important to note that Commons relies on the accuracy of case event dates to be able to place a given case within the correct month for a given decision point. For this reason, missingness of any size in date fields is especially insidious in the context of Commons because the platform will be unable to report activity in the case, even if all other information is available.

Missingness Bias

Missingness bias occurs when the percentage of missing values is significantly greater than the percentage of cases that are eligible for inclusion in a given measure, making the results unreliable. MFJ uses statistical simulations to estimate the amount of bias that may result from missing or unknown data. For example, in a county where the pretrial diversion rate is low (e.g., 3%) and a considerable proportion of cases have missing or unknown values in the pretrial diversion field (e.g., 7%), the estimate of the pretrial diversion rate could be inaccurate. Bias is estimated as a function of the sample mean and the percentage of missing or unknown data. The National Data Portal suppresses data from publication whenever the sample mean and the percentage of missing data suggest a level of bias greater than five percent.
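
MFJ's actual bias estimation relies on statistical simulations; the back-of-the-envelope bounds below are only meant to show why a 7 percent missingness rate can overwhelm a 3 percent base rate.

    # Observed diversion rate among cases with known values, and share of all
    # cases whose diversion status is missing or unknown (both hypothetical).
    observed_rate = 0.03
    missing_share = 0.07

    # Bounding scenarios: none vs. all of the missing cases were actually diversions.
    lowest  = observed_rate * (1 - missing_share)                  # ~2.8%
    highest = observed_rate * (1 - missing_share) + missing_share  # ~9.8%
    print(f"true rate could plausibly fall anywhere from {lowest:.1%} to {highest:.1%}")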

Disparities

Relative Rate Index

Both the National Data Portal and Commons use a Relative Rate Index (RRI; also known as Rate Ratio), a concept borrowed from epidemiology, to assess disparities in case processing outcomes between white defendants and defendants of color, males and females (or other gender identifications), and indigent and non-indigent defendants. The RRI compares how two groups fare on the same outcome by dividing the results of one group by those of the other. An RRI equal to 1 indicates that there is no disparity in outcomes between the two groups. Disparities are not calculated when there are fewer than four cases in the numerator or denominator of the rate for either group. We also test the statistical significance of disparities.
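
As a worked example (with invented counts), the sketch below compares a case outcome rate for two groups and applies the small-count rule mentioned above.

    # Hypothetical counts: cases with the outcome and total cases, per group.
    group_a = {"outcome": 40, "total": 200}   # 20% rate
    group_b = {"outcome": 25, "total": 250}   # 10% rate

    def rate(group: dict) -> float:
        return group["outcome"] / group["total"]

    counts = [group_a["outcome"], group_a["total"], group_b["outcome"], group_b["total"]]
    if min(counts) >= 4:                       # skip if any numerator or denominator < 4
        rri = rate(group_a) / rate(group_b)
        print(rri)  # 2.0 -> group A experiences the outcome at twice group B's rate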

 

Statistical Significance

MFJ estimates confidence intervals to test whether the disparity in outcomes for any two groups is beyond what could be expected by random chance. In this sense, statistical significance provides information about the precision and certainty of the measurement. Statistically significant disparities are noted with an asterisk (*).
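
The document does not spell out MFJ's exact formula, so the sketch below uses one common approach for a rate ratio: a normal-approximation confidence interval on the log scale (counts reused from the RRI example; all numbers hypothetical).

    import math

    a, n1 = 40, 200   # outcome count and group size, group A
    b, n2 = 25, 250   # outcome count and group size, group B

    rri = (a / n1) / (b / n2)
    se_log_rri = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    low  = math.exp(math.log(rri) - 1.96 * se_log_rri)
    high = math.exp(math.log(rri) + 1.96 * se_log_rri)

    # If the 95% interval excludes 1, the disparity is flagged as statistically significant (*).
    print(f"RRI = {rri:.2f}, 95% CI [{low:.2f}, {high:.2f}]")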

Outliers*

MFJ uses a standard approach to identify outliers. A county is flagged as an outlier when its value for a measure is a discernibly large distance from the values of all other counties in the state. Outliers are classified as minor or major based on the magnitude of this distance. The magnitude is calculated using the Interquartile Range (IQR), which is the difference between the 75th percentile (3rd quartile) and the 25th percentile (1st quartile).

 

Minor Outliers

Minor outliers are values that fall below the 1st quartile (Q1) or above the 3rd quartile (Q3) by more than 1.5 times the IQR.
< Q1 – (1.5 x IQR)  OR  > Q3 + (1.5 x IQR)

 

Major Outliers

Major outliers are values that fall below Q1 or above Q3 by more than 3 times the IQR.
< Q1 – (3 x IQR)  OR  > Q3 + (3 x IQR)
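
Putting the two rules together, the function below flags a county's value as a major outlier, a minor outlier, or neither; it is a straightforward rendering of the IQR thresholds above, not MFJ's code.

    import numpy as np

    def classify_outlier(value: float, state_values: list[float]) -> str:
        """Classify one county's value relative to all counties in the state."""
        q1, q3 = np.percentile(state_values, [25, 75])
        iqr = q3 - q1
        if value < q1 - 3 * iqr or value > q3 + 3 * iqr:
            return "major outlier"
        if value < q1 - 1.5 * iqr or value > q3 + 1.5 * iqr:
            return "minor outlier"
        return "not an outlier"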

*Applies only to our Performance Measures in the National Data Portal.

State Averages*

The counties with available data must represent 50 percent or more of the state’s population for the state averages to be published. When different data sources for the same measure are used across counties within a state (e.g., County A sources “Cases Dismissed” from court data, while County B does so from prosecutor data), we calculate the statewide average using all available values for the measure. Our methodology and standard operating procedure ensure that the data are processed uniformly, irrespective of the source.

*Applies only to our Performance Measures in the National Data Portal.

Number of Cases*

At least 30 cases are needed to generate any performance measure. Performance measures for counties with fewer than 30 cases in the denominator or in the pool to calculate the median and average are suppressed from publication. Once measures have been filtered by groups (e.g., across race categories), the results are suppressed when the cell contains fewer than five (5) cases.
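
These suppression thresholds amount to two simple checks, sketched below with illustrative names.

    def suppress_measure(case_count: int) -> bool:
        """Withhold a county-level performance measure with fewer than 30 cases."""
        return case_count < 30

    def suppress_group_cell(cell_count: int) -> bool:
        """Withhold a filtered result (e.g., one race category) with fewer than 5 cases."""
        return cell_count < 5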

*Applies only to our Performance Measures in the National Data Portal.

 

Our measures and methodology have been vetted by two councils of experts: the Methods and Measurement Council and the Benchmarking Council. If you have any questions about our methodology, please contact MFJ Research.