Abstract
Purpose: Quantitative airway assessment is often performed in specific branches to enable comparison of measurements between patients and over time. Little is known about the accuracy of locating these branches. We determined inter- and intra-observer agreement of manual labeling of segmental bronchi on low-dose chest CT scans.
Methods and Materials: We selected 40 participants from the Danish Lung Cancer Screening Trial, 10 in each of four categories: asymptomatic, mild COPD, moderate COPD, and severe COPD. Each subject contributed two CT scans taken an average of 4 years apart. The airways were segmented automatically using in-house software. Three trained observers placed labels L1-L10 and R1-R10 in each image, using 3D visualization and reformatted cross-sectional views. Inter-expert agreement for each segmental bronchus and pair of experts was defined as the percentage of images in which both experts assigned that label to the same branch. Automatic deformable image registration was used to determine corresponding branches in the two scans of the same subject. Intra-expert agreement for a bronchus was then defined as the percentage of image pairs in which the expert assigned the label to the same branch in both scans.
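The agreement metric described above can be illustrated with a minimal sketch (not the authors' code; the function name and the image/branch identifiers are hypothetical). For one segmental label, each expert's assignments are modeled as a mapping from image to the branch they labeled, and agreement is the percentage of shared images where both experts chose the same branch:

```python
# Minimal sketch of pairwise inter-expert agreement for one segmental
# label. Data structures are hypothetical, not from the study software.

def inter_expert_agreement(assignments_a, assignments_b):
    """Percentage of images in which both experts assigned the label
    to the same airway branch.

    assignments_a, assignments_b: dict mapping image id -> branch id
    chosen by that expert for this label.
    """
    # Only images labeled by both experts can be compared.
    images = assignments_a.keys() & assignments_b.keys()
    if not images:
        return 0.0
    matches = sum(1 for img in images
                  if assignments_a[img] == assignments_b[img])
    return 100.0 * matches / len(images)

# Example: three images, the experts agree on two of them (66.7%).
expert1 = {"scan01": 17, "scan02": 23, "scan03": 5}
expert2 = {"scan01": 17, "scan02": 24, "scan03": 5}
print(inter_expert_agreement(expert1, expert2))
```

Intra-expert agreement follows the same pattern, except the two dictionaries hold one expert's assignments on the baseline and follow-up scans, with branches matched across time points via the deformable registration.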
Results: Average inter-expert agreement was 73.9% (range 38.8%-100.0%). Agreement was lowest in the left lower lobe (55.0% for L7-L10) and highest for R6 and L6 (95.0% and 99.2%, respectively). Average intra-expert agreement was 75.4% (range 37.5%-100.0%).
Conclusion: We found considerable disagreement in expert labeling, possibly reflecting large anatomical heterogeneity and changes with inspiration level. Consistent airway measurement cannot be guaranteed when branches are located manually.
Original language | English
---|---
Publication date | 2013
Number of pages | 10
Publication status | Published - 2013
Event | European Congress of Radiology 2013, Vienna, Austria, 7 Mar 2013 → 11 Mar 2013
Conference
Conference | European Congress of Radiology 2013 |
---|---|
Country/Territory | Austria |
City | Vienna |
Period | 07/03/2013 → 11/03/2013 |