2b or not 2b

While on the subject of quality assurance and peer review, one observation I have is that interobserver variability is a real phenomenon that critically affects the practice of radiology. Nowhere is its impact greater than in peer review and QA. While teleradiology practices have raised the bar across the radiology community, given the higher degree of scrutiny we undergo, we are also subject to the vagaries and subjectivity that peer review entails.

For example: a liver lesion that to radiologist A is clearly a benign hemangioma has, to radiologist B, features suggesting it is potentially or even definitely malignant. Who is to determine which reading is correct?

It would seem that the appropriate response would be to recommend a follow-up examination to confirm the diagnosis. However, in the vast majority of cases the peer-reviewing radiologist simply submits his or her comments, which are then filed in radiologist A's log as an error.

The American College of Radiology has developed a scoring system for peer review (RADPEER). It is currently the best system available to us for this purpose, but it is still not without its share of challenges.

The system assigns a score that increases with how obvious the miss is. Crudely put, a Grade 2 miss is a subtle miss, a Grade 3 an obvious miss, and a Grade 4 the kind of miss that should never happen. There is a further categorization as "a" or "b" depending on whether or not the miss is clinically significant.

The biggest challenge to the system lies in determining what is a subtle miss versus an obvious miss. It is only natural that the radiologist alleged to have made the miss will see it as subtle, while the radiologist who makes the finding will see it as obvious. Other parameters, such as lesion size, the number of slices on which the lesion is visible, and lesion image contrast characteristics, should therefore ideally be used to make this categorization more objective.
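A rubric along those lines could be sketched as follows. This is purely illustrative: the thresholds, weights, and grade mapping below are invented for the sake of the example and are not derived from ACR guidance or clinical evidence.

```python
# Illustrative sketch only: a hypothetical rubric that maps objective
# lesion parameters to a suggested "subtle vs. obvious" grade.
# All thresholds and weights here are invented, NOT ACR policy.

def suggest_subtlety_grade(size_mm: float, slice_count: int,
                           contrast_ratio: float) -> int:
    """Return a suggested RADPEER-style grade:
       2 = subtle miss, 3 = obvious miss, 4 = should never happen.

    size_mm        -- greatest lesion diameter in millimetres
    slice_count    -- number of slices on which the lesion is visible
    contrast_ratio -- lesion-to-background contrast (1.0 = isointense)
    """
    score = 0
    if size_mm >= 10:
        score += 1
    if size_mm >= 30:
        score += 1          # a large lesion is harder to defend as "subtle"
    if slice_count >= 5:
        score += 1          # visible on many slices
    if contrast_ratio >= 1.5:
        score += 1          # conspicuous against background
    # Map the 0-4 evidence score onto grades 2-4.
    if score <= 1:
        return 2            # subtle miss
    if score <= 3:
        return 3            # obvious miss
    return 4                # the kind of miss that should never happen
```

The point of such a function is not the particular numbers, but that the grade would be computed from measurable inputs rather than from the competing intuitions of the two radiologists involved.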

Additionally, whether a finding is of clinical significance really depends on the environment in which the scan is reported. For example: a scan performed for a suspected leaking abdominal aortic aneurysm is found to be positive, and is appropriately reported and communicated with a 30-minute turnaround time. If a 5 mm lung nodule is not described in the initial stat preliminary report, does that in any way impact the clinical care of the patient?

So for any peer review scoring system to be meaningful and provide true benefit, which as I see it translates into continuing education of radiologists and, in turn, better patient care, it should be objective to the point where interobserver variability is largely eliminated, and the clinical environment of care should be appropriately addressed.

In the words of the American author and academic Michael Pollan: "I think perfect objectivity is an unrealistic goal; fairness, however, is not."

And finally, on quality assurance: some other issues germane to the grading of discrepancies.

- Does the history direct us to the finding? For example, where the history specifically says left lower quadrant pain, missing sigmoid diverticulitis, even if subtle, would in my opinion be a more significant discrepancy than if the history provided just said abdominal pain.

- Was the study performed specifically to look for the given entity? For example, a CT angiogram performed specifically to evaluate for a cerebral aneurysm.

- Was there a technologist communication that highlights the finding? (Ultrasound: venous thrombosis.)

- Was the finding obvious on reformats that are available? (CT: vertebral fracture.)

- How big was the lesion, and on how many slices was the finding visible? (CT head: aneurysm.)
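The checklist above can be represented as a simple data structure. The idea that more "yes" answers make a discrepancy harder to excuse reflects the argument in this post; the counting scheme itself is invented for illustration only.

```python
# Illustrative sketch only: the contextual factors from the checklist
# above, represented as yes/no fields. The simple count below is an
# invented scheme, not a validated or ACR-endorsed metric.

from dataclasses import dataclass

@dataclass
class DiscrepancyContext:
    history_directed: bool      # did the history point to the finding?
    targeted_study: bool        # was the study done to look for this entity?
    technologist_flagged: bool  # did a technologist communication highlight it?
    obvious_on_reformats: bool  # was it obvious on available reformats?
    multi_slice: bool           # large lesion / visible on many slices?

    def aggravating_factors(self) -> int:
        """Count the contextual factors that make the miss harder to excuse."""
        return sum([self.history_directed, self.targeted_study,
                    self.technologist_flagged, self.obvious_on_reformats,
                    self.multi_slice])
```

Recording these answers alongside each peer review score would at least make explicit the context in which the discrepancy was graded.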

As I see it, there are four components of radiologic reporting:

1. Image viewing (perception): here the reader relies on perceptual cues such as symmetry, continuity, and contrast. See:
http://www.ajronline.org/doi/full/10.2214/AJR.07.3268
http://www.ncbi.nlm.nih.gov/pubmed/11818589/
http://www.ncbi.nlm.nih.gov/pubmed/18430824/

2. Image analysis (cognition)

3. Reporting (expression)

4. Communication of results (elocution)

See also the e-learning session conducted by Prof. Leslie Scoutt.
