
Reliability, validity, and reader acceptance of LI-RADS: an in-depth analysis


Barth, Borna K; Donati, Olivio F; Fischer, Michael A; Ulbrich, Erika J; Karlo, Christoph A; Becker, Anton; Seifert, Burkhard; Reiner, Caecilia S (2016). Reliability, validity, and reader acceptance of LI-RADS: an in-depth analysis. Academic Radiology, 23(9):1145-1153.

Abstract

RATIONALE AND OBJECTIVES: This study aimed to analyze interreader agreement and diagnostic accuracy of the Liver Imaging Reporting and Data System (LI-RADS) in comparison to a nonstandardized 5-point scale and to assess reader acceptance of LI-RADS for clinical routine.
MATERIALS AND METHODS: Eighty-four consecutive patients at risk for hepatocellular carcinoma who underwent liver magnetic resonance imaging were included in this Health Insurance Portability and Accountability Act-compliant retrospective study. Four readers rated the likelihood of hepatocellular carcinoma for 104 liver observations using LI-RADS criteria and a 5-point Likert scale (LIKERT) based on subjective impression in two separate reading sessions. Interreader agreement was assessed using kappa statistics (κ). Diagnostic accuracy was assessed with receiver operating characteristic analysis. Reader acceptance was evaluated with a questionnaire. A sub-analysis of the LI-RADS major features (arterial phase hyper-enhancement, washout, capsule appearance, and threshold growth) and of scores for lesions smaller and larger than 1.5 cm was performed.
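The abstract reports kappa values but not the computation itself. Kappa relates observed agreement p_o to chance agreement p_e as κ = (p_o - p_e) / (1 - p_e). As a rough sketch only (not the authors' analysis code), multi-reader agreement could be computed with Fleiss' kappa in Python; whether the study used Fleiss' kappa or averaged pairwise Cohen's kappa is not stated in the abstract, and the ratings below are invented.

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings[i, j] = category assigned by reader j to observation i
# (hypothetical data: 6 observations, 4 readers, categories 1-5)
ratings = np.array([
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
    [1, 1, 2, 1],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
])

# Collapse the subject-by-rater matrix into subject-by-category counts,
# the input format that fleiss_kappa expects
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(table, method='fleiss'):.2f}")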
RESULTS: LI-RADS showed similar overall interreader agreement compared to LIKERT (κ, 0.44 [95%CI: 0.37, 0.52] and 0.35 [95%CI: 0.27, 0.43]) with a tendency toward higher interreader agreement for LI-RADS. Interreader agreement (κ) was 0.51 (95%CI: 0.38, 0.65) for arterial phase hyper-enhancement, 0.52 (95%CI: 0.39, 0.65) for washout, 0.37 (95%CI: 0.23, 0.52) for capsule appearance, and 0.50 (95%CI: 0.38, 0.61) for threshold growth. Overall interreader agreement for LI-RADS categories was similar between observations <1.5 cm and observations >1.5 cm. Overall diagnostic accuracy for LIKERT and LI-RADS was comparable (area under the receiver operating characteristic curve, 0.86 and 0.87). Readers fully agreed with the statement "A short version of LI-RADS would facilitate the use in clinical routine" (median, 5.0; interquartile range, 2.25).
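The reported areas under the curve could in principle be obtained by treating the ordinal category assigned to each observation as a test score against a binary hepatocellular carcinoma reference standard. A minimal illustrative sketch with invented data (again, not the authors' code):

from sklearn.metrics import roc_auc_score

# Reference standard (1 = HCC confirmed) and assigned category per
# observation; both arrays are hypothetical
hcc_truth = [1, 0, 1, 0, 1, 0, 1, 1]
category = [5, 2, 4, 1, 5, 3, 4, 5]
print(f"AUC: {roc_auc_score(hcc_truth, category):.2f}")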
CONCLUSIONS: LI-RADS showed similar interreader agreement and diagnostic accuracy compared to nonstandardized reporting. However, further reduction of complexity and refinement of imaging features may be needed.


Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 04 Faculty of Medicine > University Hospital Zurich > Clinic for Diagnostic and Interventional Radiology
04 Faculty of Medicine > Balgrist University Hospital, Swiss Spinal Cord Injury Center
04 Faculty of Medicine > Epidemiology, Biostatistics and Prevention Institute (EBPI)
Dewey Decimal Classification: 610 Medicine & health
Date: 9 May 2016
Deposited On: 27 May 2016 15:40
Last Modified: 26 Feb 2017 08:40
Publisher: Elsevier
ISSN: 1076-6332
Publisher DOI: https://doi.org/10.1016/j.acra.2016.03.014
PubMed ID: 27174029

Download

Content: Published Version
Language: English
Filetype: PDF - Registered users only
Size: 1MB
