FairScore

Brought to you by the Vanderbilt Moot Court Board and the Vanderbilt Law Journals

About FairScore

FairScore is a statistical program originally developed to normalize scores in law school moot court competitions. The motivating problem is that different students face different sets of judges in such competitions, raising the equity concern that a student may be assigned a particularly "lax" or "harsh" panel of judges. The same problem can arise in a variety of other competitive contexts, and the hope is that others will find FairScore helpful for their own scoring problems as well.

The statistical theory behind the program is described in Cheng & Farmer (2013). Conceptually, the program is built on a Rasch model, a type of model widely used in educational testing. The model assumes that an observed score is a function of the student's ability, the judge's generosity, and other contextual factors, such as whether the student argued for the petitioner or the respondent, or the type of room in which the argument was held. The goal is then to use the observed scores and other data to isolate each student's ability, which is reported as the normalized score.
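To make the idea concrete, the sketch below shows how a model of this general form might be written in R with JAGS. It is only an illustration under assumed choices, not the published FairScore specification: the variable names (score, student, judge, side), the linear structure, the toy data, and the priors are all assumptions made for the example.

    # A minimal sketch of a judge-adjustment model in R with JAGS (rjags package).
    # Names, priors, and data are illustrative assumptions, not the FairScore model.
    library(rjags)

    model_string <- "
    model {
      for (n in 1:N) {
        # Observed score = student ability + judge leniency + side effect + noise
        score[n] ~ dnorm(ability[student[n]] + leniency[judge[n]] + beta_side * side[n], tau)
      }
      for (i in 1:S) { ability[i]  ~ dnorm(0, 1) }  # weak priors centering abilities at zero
      for (j in 1:J) { leniency[j] ~ dnorm(0, 1) }  # weak priors centering judge effects at zero
      beta_side ~ dnorm(0, 0.001)                   # petitioner/respondent adjustment
      tau ~ dgamma(0.01, 0.01)                      # precision of the score noise
    }"

    # Hypothetical toy data: 4 students, 2 judges, petitioner/respondent indicator
    dat <- list(
      score   = c(88, 92, 81, 95, 84, 90),
      student = c(1, 2, 3, 4, 1, 3),
      judge   = c(1, 1, 2, 2, 2, 1),
      side    = c(0, 1, 0, 1, 1, 0),
      N = 6, S = 4, J = 2
    )

    fit <- jags.model(textConnection(model_string), data = dat, n.chains = 2)
    update(fit, 1000)                                              # burn-in
    samples <- coda.samples(fit, c("ability", "leniency"), n.iter = 5000)
    summary(samples)                                               # posterior summaries

In a sketch like this, the posterior mean of each ability parameter plays the role of the normalized score, while the leniency terms absorb differences in judge generosity.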

The FairScore model was developed by Ed Cheng. It was first applied during the 2012 Vanderbilt Law School Intramural Moot Court Competition by Janna Maples and Scott Farmer, and it has been used by their successors in all Vanderbilt moot court competitions since, both intramural and interscholastic. In 2018, the model was also applied to the Vanderbilt Law School Joint Journal Write-On Competition, and the law journals intend to continue using it in future competitions. The FairScore web-based interface was developed by Yaohai (Peter) Xu. FairScore is implemented in the R statistical programming language and uses JAGS.