Why don't you display the *total* of judging scores on an entry?

In the leaderboard, Award Force calculates and displays the average score across all judges for each entry, not the total of all judges' scores. This is an intentional, carefully considered choice that protects the integrity of your judging.

If a panel of judges is asked to judge a group of entries, there is always the possibility that, for example:

  • A judge abstains from an entry due to conflict of interest
  • A judge is recused from an entry
  • A judge runs short of time and is unable to finish judging all entries
  • A judge overlooks a score and doesn't finish scoring an entry

In any of these cases, relying on a total of all judges' scores unfairly favours entries that have had more judges record their scores.

An average of all judges' scores, by contrast, treats all entries fairly even when they receive results from different numbers of judges. If entry A is scored by 9 judges and entry B by 10 judges, the two entries can still be fairly compared on their average scores. Using the total of judges' scores, however, entry A would score 10% less than entry B, all other things being equal.
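To illustrate the arithmetic, here is a minimal sketch in Python with made-up scores (this is not Award Force's own calculation, just an example of the principle):

```python
# Hypothetical scores: every judge awards the same score of 8,
# but entry A was scored by 9 judges and entry B by 10
# (one of entry A's judges abstained or ran out of time).
entry_a = [8] * 9
entry_b = [8] * 10

# Comparing totals penalises entry A by 10%...
print(sum(entry_a), sum(entry_b))  # 72 vs 80

# ...while comparing averages treats both entries equally.
print(sum(entry_a) / len(entry_a), sum(entry_b) / len(entry_b))  # 8.0 vs 8.0
```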

When dealing with this kind of uncertainty and change, the average of all judges' scores is the fair and reliable approach.

To learn more about the score calculation options supported by Award Force, see our guide: What is the difference between 'Sum' and 'Mean' result calculation?
