#StackBounty: #agreement-statistics #cohens-kappa Inter-rater agreement after rater 1-based sampling
Rater 1 rated a set of cases as positive (+) or negative (−). Rater 2 then rates all of Rater 1's positive cases, but only half of Rater 1's negative cases. Is there a way to account for this sampling scheme when calculating inter-rater agreement (e.g., Cohen's kappa)?
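
For concreteness, here is a minimal sketch of the setup and of one idea I can imagine (whether it is actually valid is exactly what I am asking): reweight the Rater 1–negative stratum by the inverse of its sampling fraction (here 2.0) before computing kappa from the 2×2 table. The counts and the `weighted_kappa_2x2` helper below are hypothetical, purely for illustration.

```python
import numpy as np

def weighted_kappa_2x2(table, row_weights):
    """Cohen's kappa from a 2x2 table after reweighting rows.

    table[i, j]    = count of cases Rater 1 rated i and Rater 2 rated j
                     (0 = negative, 1 = positive), among cases both rated.
    row_weights[i] = inverse sampling fraction for Rater 1 category i,
                     e.g. 2.0 for a negative stratum sampled at 50%.
    """
    w = table * np.asarray(row_weights, dtype=float)[:, None]  # scale each Rater 1 stratum
    n = w.sum()
    po = np.trace(w) / n                                       # observed agreement
    pe = (w.sum(axis=1) * w.sum(axis=0)).sum() / n**2          # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts on the rated subset:
# rows = Rater 1 (neg, pos), cols = Rater 2 (neg, pos)
observed = np.array([[40, 10],   # half of Rater 1's negative cases
                     [ 5, 45]])  # all of Rater 1's positive cases

# Negative stratum sampled at 50%, positive stratum fully observed.
print(weighted_kappa_2x2(observed, row_weights=[2.0, 1.0]))
```

I am not sure this inverse-probability weighting gives an unbiased estimate of the kappa one would get if Rater 2 had rated every case, or how it affects the standard error, which is why I am asking whether there is an established way to handle this design.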