Image Quality in Digital Pathology


Presentation Transcript

Slide1

Image Quality in Digital Pathology

(from a pathologist’s perspective)

Jonhan Ho, MD, MS

Slide2

Disclosure

Slide3

Slide4

Slide5

Slide6

Image Quality: define/measure

Slide7

Image quality is good enough if:

It has a resolution of 0.12345 µm/pixel

It is captured in XYZ color space/pixel depth

It has an MTF curve that looks perfect

It has a focus quality score of 123

It has a high/wide dynamic range

Slide8

What is “resolution”?

Spatial resolution

Sampling period

Optical resolution

Sensor resolution

Monitor resolution

New Year’s resolution???????

Slide9

Optical resolution

Theoretical maximum resolution of a 0.75 NA lens is 0.41 µm; of a 1.30 NA lens, 0.23 µm.

It has NOTHING to do with magnification! (We will get to that later.)
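The two numbers above are consistent with the Rayleigh criterion, d = 0.61λ/NA, at a mid-spectrum wavelength of about 0.5 µm. A minimal sketch under that assumed wavelength (function names are my own), which also computes the sensor sampling needed to keep up with the optics:

```python
# Rayleigh criterion: smallest resolvable separation d = 0.61 * wavelength / NA.
# Wavelength of 0.5 um (green light) is an assumption; it reproduces the
# slide's 0.41 um and 0.23 um figures.

def rayleigh_limit_um(na: float, wavelength_um: float = 0.5) -> float:
    """Smallest resolvable feature separation at the sample, in micrometers."""
    return 0.61 * wavelength_um / na

def nyquist_pixel_um(na: float, wavelength_um: float = 0.5) -> float:
    """Largest sample-plane pixel that still samples the optics at Nyquist
    (two pixels per resolvable element)."""
    return rayleigh_limit_um(na, wavelength_um) / 2

for na in (0.75, 1.30):
    print(f"NA {na}: resolution ~{rayleigh_limit_um(na):.2f} um, "
          f"Nyquist pixel <= {nyquist_pixel_um(na):.2f} um")
# NA 0.75: resolution ~0.41 um, Nyquist pixel <= 0.20 um
# NA 1.3:  resolution ~0.23 um, Nyquist pixel <= 0.12 um
```

Note that sampling period and optical resolution are separate knobs: a sensor can oversample or undersample what the lens actually delivers, which is one reason a bare "resolution" number needs qualification.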

Slide10

Depth of Field

As aperture widens

Resolution improves

Depth of field narrows

Less tissue will be in focus
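The trade-off can be made concrete with the common diffraction-limited approximation DOF ≈ λ·n/NA² (an assumed textbook formula and my own parameter choices, not from the slide):

```python
# Diffraction-limited depth of field (approximation): DOF ~= wavelength * n / NA**2,
# where n is the refractive index of the immersion medium.
# Higher NA -> finer resolution (improves as 1/NA) but a much thinner
# in-focus slab of tissue (shrinks as 1/NA**2).

def depth_of_field_um(na: float, n: float = 1.0, wavelength_um: float = 0.5) -> float:
    return wavelength_um * n / na ** 2

print(f"NA 0.75 (dry):          DOF ~{depth_of_field_um(0.75):.2f} um")
print(f"NA 1.30 (oil, n=1.515): DOF ~{depth_of_field_um(1.30, n=1.515):.2f} um")
```

At scanning apertures the in-focus slab is roughly a micron, thinner than a typical 4-5 µm tissue section, which is why scanners must pick a focal plane (or z-stack).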

Slide11

Image quality is good enough if:

It has a resolution of 0.12345 µm/pixel

It is captured in XYZ color space/pixel depth

It has an MTF curve that looks perfect

It has a focus quality score of 123

It has a high/wide dynamic range

Slide12

Image quality is good enough if it is:

“Sharp”

“Clear”

“Crisp”

“True”

“Easy on the eyes”

Slide13

Image quality is good enough if it is:

“Sharp”

“Clear”

“True”

Slide14

Image quality is good enough if:

You can see everything you can see on a glass slide

Slide15

Slide16

Slide17

Image quality is good enough if:

I can make a diagnosis from it

Slide18

Slide19

Slide20

Image quality is good enough if:

I can make as good a diagnosis from it as I can from glass slides.

This is a concordance study

OK, but how do you measure this?!?!?!?!?!

Slide21

Gold standard = Another Diagnosis

Slide22

Concordance validation

Some intra-observer variability

Even more inter-observer variability

Order effect

“great case” effect

Slide23

Concordance validation

Case selection

Random, from all benches?

Enriched, with difficult cases?

Presented with only initial H&E?

Allow ordering of levels, IHC, special stains?

If so, how can you compare with the original diagnosis?

Presented with all previously ordered stains?

If so, what about diagnosis bias?

How old of a case to allow?

Slide24

Concordance validation

Subject selection

Subspecialists? Generalists?

Do all observers read all cases, even if they are not accustomed to reading those types of cases?

Multi-institutional study

Do observers read cases from other institutions?

Staining/cutting protocol bias

Slide25

Concordance validation

Measuring concordance

Force pathologists to report in discrete data elements?

This is not natural! (especially in inflammatory processes!)

What happens if one data element is minimally discordant?

Allow pathologists to report as they normally do?

Free text – who decides if they are concordant? How much discordance to allow? What are the criteria? (For the discrete case, a standard agreement statistic is sketched below.)
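The deck never names a statistic, but when reports are forced into discrete data elements, a chance-corrected agreement measure such as Cohen's kappa is the usual choice for two readers. A minimal pure-Python sketch (the diagnoses below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two readers over discrete categories."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Agreement expected by chance if both readers labeled independently.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

glass   = ["benign", "malignant", "benign",   "atypical", "malignant"]
digital = ["benign", "malignant", "atypical", "atypical", "malignant"]
print(f"kappa = {cohens_kappa(glass, digital):.2f}")  # kappa = 0.71
```

Free-text reports admit no such tidy statistic, which is exactly the problem raised above.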

Slide26

Concordance study bottom line

Very difficult to do with lots of noise

Will probably conclude that pathologists can make equivalent diagnoses

At the end, we will have identified cases that are discordant, but what does that mean?

What caused the discordances?

Bad images? If so what made them bad?

Familiarity with digital?

Lack of coffee?!?!?!

Still doesn’t feel like we’ve done our due diligence – what exactly are the differences between glass and digital?

Slide27

PERCEPTION = REALITY

Slide28

PERCEPTION = QUALITY

“Sharp, clear, true”

Slide29

Psychophysics

The study of the relationship between the physical attributes of the stimulus and the psychological response of the observer

Slide30

What we need is -

Slide31

Kundel HL. Images, image quality and observer performance: new horizons in radiology lecture. Radiology. 1979 Aug;132(2):265-71.

Slide32

Kundel on image quality

“The highest quality image is one that enables the observer to most accurately report diagnostically relevant structures and features.”

Slide33

Slide34

Receiver Operating Characteristic (ROC) Curve
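Presumably the slide showed the curve itself; the mechanics, borrowed from radiology observer-performance work, are simple enough to sketch. Sweeping a decision threshold across observers' confidence scores traces true-positive rate against false-positive rate (labels and scores below are invented):

```python
def roc_points(labels, scores):
    """Sweep a threshold over confidence scores; return (FPR, TPR) pairs."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= t)
        fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= t)
        points.append((fp / neg, tp / pos))
    return points

# 1 = disease present; scores = an observer's confidence it is present.
labels = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1]
for fpr, tpr in roc_points(labels, scores):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```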

Slide35

Conspicuity index formula

K = f(size, contrast, edge gradient / surround complexity)

Probability of detection = f(K)
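The slide gives only the shape of the relationship, not Kundel's published functional form, so the following toy model is entirely my own assumption: detectability rises with size, contrast, and edge gradient, falls with surround complexity, and maps to detection probability through a logistic link.

```python
import math

def conspicuity(size, contrast, edge_gradient, surround_complexity):
    # Toy stand-in for K = f(size, contrast, edge gradient / surround complexity).
    # The multiplicative form is an assumption, not Kundel's published model.
    return size * contrast * edge_gradient / surround_complexity

def p_detect(k, k50=1.0, slope=2.0):
    """Probability of detection as a monotone (logistic) function of K."""
    return 1 / (1 + math.exp(-slope * (k - k50)))

# A large, high-contrast lesion on quiet background vs. a subtle one on busy tissue.
print(p_detect(conspicuity(2.0, 0.8, 1.5, 1.0)))  # ~0.94, easily detected
print(p_detect(conspicuity(0.5, 0.2, 0.8, 2.0)))  # ~0.13, likely missed
```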

Slide36

Kundel, 1979

“Just as a limited alphabet generates an astonishing variety of words, an equally limited number of features may generate an equally astonishing number of pictures.”

Slide37

Can this apply to pathology?

What is our alphabet? MORPHOLOGY!

Red blood cells

Identify inflammation by features

Eosinophils

Plasma cells

Hyperchromasia, pleomorphism, N:C ratio

Build features into microstructures and macrostructures

Put features and structures into clinical context and compare to normal context

Formulate an opinion

Slide38

Slide39

Slide40

Slide41

Slide42

Advantages of feature-based evaluation

Better alleviates experience bias and context bias

Can better measure inter-observer concordance

Connects pathologist-based tasks with measurable output understandable by engineers

Precedent in image interpretability (NIIRS, the National Imagery Interpretability Rating Scale)

Slide43

NIIRS 1

“Distinguish between major land use classes (agricultural, commercial, residential)”

Slide44

NIIRS 5

“Identify Christmas tree plantations”

Slide45

Disadvantages of feature-based evaluation

Doesn’t eliminate the “representative ROI” problem

Still a difficult study to do

How to select features? How many?

How to determine gold standard?

What about features that are difficult to discretely characterize? (“hyperchromasia”, “pleomorphism”)

Slide46

Bottom line for validation

All of these methods must be explored as they each have their advantages and disadvantages

Technical

Diagnostic concordance

Feature vocabulary comparison

Slide47

Image perception – magnification ratio

Microscope:

Lens

Oculars

Scanner:

Lens

Sensor resolution

Monitor resolution

Monitor distance

Slide48

Magnification at the monitor

40X magnification from object to sensor:

1 pixel = 10 µm at the sensor

1 pixel = 0.25 µm at the sample

10 / 0.25 = 40X

~27X magnification from sensor to monitor (270 µm pixel pitch of the monitor):

1 pixel = 270 µm at the monitor

1 pixel = 10 µm at the sensor

270 / 10 = ~27X

40X × 27X = 1080X TOTAL magnification from object to monitor

This is the equivalent of a 108X objective on a microscope!!??
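The chain is just two ratios multiplied together, so it is easy to recompute for any scanner/monitor pairing; a minimal sketch using the slide's own numbers:

```python
def total_magnification(sensor_pixel_um, sample_pixel_um, monitor_pitch_um):
    """Object-to-monitor magnification = (object -> sensor) * (sensor -> monitor)."""
    object_to_sensor = sensor_pixel_um / sample_pixel_um    # 10 / 0.25 = 40
    sensor_to_monitor = monitor_pitch_um / sensor_pixel_um  # 270 / 10 = 27
    return object_to_sensor * sensor_to_monitor

print(total_magnification(10, 0.25, 270))  # 1080.0
# Dividing by ~10X oculars gives the "equivalent objective": 1080 / 10 = 108X.
```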

Slide49

Scan Type | Magnification (Object to Sensor / Sensor to Monitor / TOTAL) | Effective Viewing Magnification at 1:1 (10" / 24" / 48") | Manual Scope Equivalent Objective Magnification (10" / 24" / 48")
20X | 20 / 27 / 540 | 540 / 225 / 112.5 | ~54x / ~22.5x / ~11.3x
40X | 40 / 27 / 1080 | 1080 / 450 / 225 | ~108x / ~45x / ~22.5x

Near point = 10”
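The distance columns scale total magnification by near point / viewing distance (the 10-inch near point is the slide's own assumption). A sketch reproducing the 40X row:

```python
def effective_viewing_mag(total_mag, viewing_distance_in, near_point_in=10.0):
    """Perceived magnification drops as the viewer sits farther from the monitor."""
    return total_mag * near_point_in / viewing_distance_in

for d in (10, 24, 48):
    mag = effective_viewing_mag(1080, d)
    print(f"{d} inches: {mag:g}X effective, ~{mag / 10:g}X equivalent objective")
# 10 in -> 1080X (~108X), 24 in -> 450X (~45X), 48 in -> 225X (~22.5X)
```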

What if the sensor was obscenely high resolution?

Slide50

Other things that cause bad images

Tissue detection

Focus

Slide51

Tissue detection

Slide52

What about Phantoms?

Slide53

One final exercise in image perception

Slide54

Slide55

Questions?

hoj@upmc.edu