TY - JOUR
T1 - Direct Observation vs. Video-Based Assessment in Flexible Cystoscopy
AU - Dagnaes-Hansen, Julia
AU - Mahmood, Oria
AU - Bube, Sarah
AU - Bjerrum, Flemming
AU - Subhi, Yousif
AU - Rohrsted, Malene
AU - Konge, Lars
PY - 2018
N2 - Objective: Direct observation in the assessment of clinical skills is prone to bias, requires the observer to be present at a specific location and time, and is time-consuming. Video-based assessment could remove the risk of bias, increase flexibility, and reduce the time spent on assessment. This study investigated whether video-based assessment was a reliable tool for cystoscopy and whether direct observers were prone to bias compared with video-raters. Design: This study was a blinded observational trial. Twenty medical students and 9 urologists were recorded during 2 cystoscopies and rated by a direct observer and subsequently by 2 blinded video-raters on a global rating scale (GRS) for cystoscopy. Both intrarater and interrater reliability were explored. Furthermore, direct observer bias was explored with a paired samples t-test. Results: Intrarater reliability calculated by Pearson's r was 0.86. Interrater reliability was 0.74 for single measures and 0.85 for average measures. A hawk-dove effect was seen between the 2 raters. Direct observer bias was detected when comparing direct observer scores with the assessment by an independent video-rater (p < 0.001). Conclusion: This study found that video-based assessment with 2 video-raters was a reliable tool for cystoscopy. There was a significant bias when comparing direct observation with blinded video-based assessment.
KW - cystoscopy
KW - interrater variability
KW - rater-based assessment
KW - surgical education
KW - video recording
DO - 10.1016/j.jsurg.2017.10.005
M3 - Journal article
C2 - 29102559
AN - SCOPUS:85033471752
SN - 1931-7204
VL - 75
SP - 671
EP - 677
JO - Journal of Surgical Education
JF - Journal of Surgical Education
IS - 3
ER -