When several researchers code the same text using the same scheme with defined codes, we would expect them to produce similar results. Calculating intercoder reliability has become a well-established procedure for ensuring that a coding scheme is applied consistently and intersubjectively, and the resulting scores serve as a quality criterion for the coding process.
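
To give a rough sense of what such a score captures, here is a minimal Python sketch that computes percent agreement and Cohen's kappa for two coders applying a nominal coding scheme. The coder data is made up for illustration, and this is only a simplified two-coder case; dedicated tools (like the one discussed below) cover further coefficients, more coders, and other measurement levels.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of units on which the two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement for two coders and nominal codes."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    # Expected chance agreement from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codings of ten text units by two coders
coder_1 = ["pos", "neg", "neu", "pos", "pos", "neg", "neu", "pos", "neg", "pos"]
coder_2 = ["pos", "neg", "pos", "pos", "pos", "neg", "neu", "neg", "neg", "pos"]

print(f"Percent agreement: {percent_agreement(coder_1, coder_2):.2f}")  # 0.80
print(f"Cohen's kappa:     {cohens_kappa(coder_1, coder_2):.2f}")       # 0.67
```

The gap between the two numbers illustrates why chance-corrected coefficients are preferred: 80 percent raw agreement shrinks to a kappa of about 0.67 once agreement expected by chance is taken into account.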

Deen Freelon has created ReCal, a great online tool that researchers can use to calculate intercoder reliability in multi-coder content analysis. It works with different measurement levels (nominal, ordinal, interval, or ratio), with varying numbers of coders, and – most recently – even with missing data. Thank you, Deen, for providing this highly valuable tool!