Cohen's kappa is a statistic that corrects for chance agreement when measuring inter-rater reliability between two raters.
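As a minimal sketch of the calculation (the symbols p_o and p_e and the function name below are illustrative, not taken from the original text), the statistic is computed as κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement between the two raters and p_e is the agreement expected by chance, estimated from each rater's marginal label frequencies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Illustrative Cohen's kappa for two raters' nominal labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of each
    # rater's marginal proportion for that category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in counts_a.keys() | counts_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Example: two raters labeling 10 items as "yes"/"no"
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "no"]
print(round(cohens_kappa(a, b), 3))
```

For these example labels, the raters agree on 7 of 10 items (p_o = 0.7) while chance agreement from their marginals is p_e = 0.5, giving κ = 0.4, noticeably lower than the raw 70% agreement because part of that agreement is attributable to chance.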