Interrater reliability (or interjudge reliability) refers to the level of agreement between two or more raters who have evaluated the same individual independently.

Agreement can refer to consensus on observed behaviors, attributes, and so on. In other words, it is the degree to which raters agree in their observations of the same individual or individuals.

Other definition:
The level of agreement between at least two raters who have evaluated the same patient independently. Here, agreement can refer to consensus on symptoms recorded, diagnoses assigned, and so on.
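In practice, interrater reliability is often quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is a minimal illustration, assuming two hypothetical raters who each assign one categorical label to the same set of cases; the ratings and label names are invented for demonstration.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same cases."""
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("Both raters must rate the same non-empty set of cases")
    n = len(ratings_a)

    # Observed agreement: proportion of cases where the two raters match.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)

    # Kappa = (observed - expected) / (1 - expected); 1.0 means perfect agreement.
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical data: two clinicians independently assign a diagnosis to 8 patients.
rater_1 = ["anxiety", "depression", "anxiety", "anxiety",
           "depression", "anxiety", "other", "depression"]
rater_2 = ["anxiety", "depression", "anxiety", "depression",
           "depression", "anxiety", "other", "anxiety"]

agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(f"Observed agreement: {agreement:.2f}")        # 0.75
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; established libraries (for example, scikit-learn's cohen_kappa_score) provide equivalent implementations.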