Mid-Level Operations for Segmentation

PowerPoint presentation, uploaded by linda on 2023-05-21.




Presentation Transcript

1. Mid-Level Operations for Segmentation

2. Recall: Thresholding Example (original image; pixels above threshold)

3. Original image: kidney.jpg

4. Image Segmentation Methods from Dhawan (ch. 10):
- Edge Detection
- Boundary Tracking
- Hough Transform
- Thresholding (we just covered)
- Clustering
- Region Growing (and Splitting)
- Estimation-Model Based
- Using Neural Networks (we do semantic segmentation this way)

5. We'll look at:
- Thresholding (we just covered)
- Edge Detection
- Hough Transform
- Clustering
- Using Neural Networks (we do semantic segmentation this way)

6. What's an edge? An image is a function; edges are rapid changes in this function.

7. Finding edges: we could take a derivative; edges show up as a high response.

8. To find edges, we use filters:
- Define a small mask and apply it at every pixel position in the image to produce a new output image.
- In general, this is called filtering.
- We call linear filters CONVOLUTIONS (even though they are really correlations).
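The mask-based filtering just described can be sketched in a few lines. This is a minimal illustration (not code from the slides), assuming a grayscale image stored as a list of lists; function and variable names here are illustrative.

```python
# Minimal sketch of mask-based filtering (really a correlation), assuming a
# grayscale image stored as a list of lists. Borders are left as 0 for brevity.

def filter2d(image, mask):
    """Apply a (2k+1)x(2k+1) mask at every interior pixel position."""
    h, w = len(image), len(image[0])
    k = len(mask) // 2
    out = [[0.0] * w for _ in range(h)]
    for r in range(k, h - k):
        for c in range(k, w - k):
            acc = 0.0
            for dr in range(-k, k + 1):
                for dc in range(-k, k + 1):
                    acc += mask[dr + k][dc + k] * image[r + dr][c + dc]
            out[r][c] = acc
    return out

# 3x3 averaging (box) mask: each output pixel is the mean of its neighborhood.
box = [[1 / 9] * 3 for _ in range(3)]

image = [[10, 10, 10, 10, 10],
         [10, 10, 10, 10, 10],
         [10, 10, 100, 10, 10],
         [10, 10, 10, 10, 10],
         [10, 10, 10, 10, 10]]

smoothed = filter2d(image, box)
print(round(smoothed[2][2], 1))  # bright spike averaged down: 20.0
```

The averaging mask is exactly the kind of smoothing filter slide 9 refers to; swapping in a different mask gives a derivative filter instead.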

9. Averaging Filters

10–21. Smooth first, then derivative

22. Sobel filter! Smooth & derivative
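The Sobel mask combines the two steps above: it factors into a smoothing vector and a derivative vector. A small sketch (illustrative names, not from the slides):

```python
# Sketch: the Sobel x-mask is the outer product of a smoothing part and a
# derivative part, i.e. "smooth first, then derivative" in one 3x3 mask.

smooth = [1, 2, 1]   # vertical smoothing
deriv = [-1, 0, 1]   # horizontal derivative

sobel_x = [[s * d for d in deriv] for s in smooth]
print(sobel_x)  # [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

# Response at a vertical step edge (each row is 0, 0, 10): the mask fires strongly.
patch = [[0, 0, 10],
         [0, 0, 10],
         [0, 0, 10]]
resp = sum(sobel_x[r][c] * patch[r][c] for r in range(3) for c in range(3))
print(resp)  # 40
```

On a constant patch the same sum would be 0, which is why the filter responds only at rapid changes.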

23. 2nd derivative! Crosses zero at extrema

24. Canny Edge Detection
Your first image processing pipeline! Old-school CV is all about pipelines.
Algorithm:
1. Smooth image (we only want "real" edges, not noise)
2. Calculate gradient direction and magnitude
3. Non-maximum suppression perpendicular to edge
4. Threshold into strong, weak, no edge
5. Connect together components
Demo: http://bigwww.epfl.ch/demo/ip/demos/edgeDetector/

25. Canny Characteristics
- The Canny operator gives single-pixel-wide edges with good continuation between adjacent pixels.
- It is the most widely used edge operator today; no one has done better since it came out in the late 80s. Many implementations are available.
- It is very sensitive to its parameters, which need to be adjusted for different application domains.

26. Canny on Kidney

27. An edge is not a line... How can we detect lines?

28. Finding lines in an image
Option 1: Search for the line at every possible position/orientation. What is the cost of this operation?
Option 2: Use a voting scheme: the Hough transform.

29. Finding lines in an image: connection between image (x,y) and Hough (m,b) spaces
A line in the image corresponds to a point in Hough space.
To go from image space to Hough space: given a set of points (x,y), find all (m,b) such that y = mx + b.
[Figure: a line in image space (x,y) maps to a single point (m0, b0) in Hough space (m,b).]

30. Hough transform algorithm
Typically use a different parameterization: d is the perpendicular distance from the line to the origin, and θ is the angle of this perpendicular with the horizontal, so d = x cos θ + y sin θ.

31. Hough transform algorithm
Basic Hough transform algorithm:
1. Initialize H[d, θ] = 0 (the accumulator array H).
2. For each edge point I[x, y] in the image: compute gradient magnitude m and angle θ, let d = x cos θ + y sin θ, and increment H[d, θ] += 1.
3. Find the value(s) of (d, θ) where H[d, θ] is maximum.
4. The detected line in the image is given by d = x cos θ + y sin θ.
Complexity? How do you get the lines out of the matrix?
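The accumulator loop above can be sketched as follows. One simplifying assumption: instead of using the per-pixel gradient angle, each edge point votes across a full sweep of θ values (the classical "brute-force" variant); names are illustrative.

```python
# Sketch of the (d, theta) Hough accumulator. Each point votes for every
# (theta, d) cell it could lie on; the maximum cell gives the dominant line.
import math

def hough_lines(points, n_theta=180, d_max=100):
    """Accumulate votes in H[theta][d], with d = x*cos(theta) + y*sin(theta)."""
    H = [[0] * (2 * d_max + 1) for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            d = round(x * math.cos(theta) + y * math.sin(theta))
            if -d_max <= d <= d_max:
                H[t][d + d_max] += 1
    # Step 3: scan for the maximum cell (first maximum wins on ties).
    best_votes, best_t, best_d = -1, 0, 0
    for t in range(n_theta):
        for i in range(2 * d_max + 1):
            if H[t][i] > best_votes:
                best_votes, best_t, best_d = H[t][i], t, i - d_max
    return best_votes, math.pi * best_t / n_theta, best_d

# Ten points on the vertical line x = 5: every point votes for (theta=0, d=5).
pts = [(5, y) for y in range(10)]
votes, theta, d = hough_lines(pts)
print(votes, theta, d)  # 10 0.0 5
```

Using the gradient angle (Extension 1 on the next slide) replaces the inner θ loop with a single vote per edge point, which is what makes the transform practical.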

32. Line segments from Hough Transform

33. Extensions
- Extension 1: Use the image gradient (we just did that).
- Extension 2: Give more votes for stronger edges.
- Extension 3: Change the sampling of (d, θ) to give more/less resolution.
- Extension 4: The same procedure can be used with circles, squares, or any other shape. How?
- Extension 5: The Burns procedure. Uses only angle, two different quantizations, and connected components, with votes for the larger one.
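Extension 4 can be illustrated for circles of a known radius: each edge point votes for every center that would place it on the circle, so the votes intersect at the true center. This is a small sketch under that known-radius assumption, with illustrative names.

```python
# Sketch of Hough voting for circle centers (a, b) with known radius r.
# Each edge point votes once for every center cell at distance r from it.
import math

def hough_circle(points, r, size=32, n_angles=360):
    """Accumulate center votes; each point votes at most once per cell."""
    H = [[0] * size for _ in range(size)]
    for x, y in points:
        cells = set()
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            a = round(x - r * math.cos(t))
            b = round(y - r * math.sin(t))
            if 0 <= a < size and 0 <= b < size:
                cells.add((a, b))
        for a, b in cells:
            H[a][b] += 1
    return H

# 12 points on a circle of radius 5 centered at (10, 12).
pts = [(10 + 5 * math.cos(math.radians(30 * k)),
        12 + 5 * math.sin(math.radians(30 * k))) for k in range(12)]
H = hough_circle(pts, r=5)
print(H[10][12])  # all 12 points vote for the true center
```

With an unknown radius the accumulator gains a third dimension (a, b, r), which is the general pattern for any parameterized shape.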

34. Finding lung nodules (Kimme & Ballard)

35. K-Means Clustering
Form K-means clusters from a set of n-dimensional vectors:
1. Set ic (iteration count) to 1.
2. Choose randomly a set of K means m1(1), …, mK(1).
3. For each vector xi, compute D(xi, mk(ic)), k = 1, …, K, and assign xi to the cluster Cj with the nearest mean.
4. Increment ic by 1 and update the means to get m1(ic), …, mK(ic).
5. Repeat steps 3 and 4 until Ck(ic) = Ck(ic+1) for all k.
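The loop above can be sketched for the 1-D gray-level case, anticipating the gray-scale clustering discussed two slides later. One assumption for reproducibility: the initial means are fixed here rather than chosen randomly (step 2); names are illustrative.

```python
# Sketch of the K-means loop, clustering 1-D gray-level values (K = 2).
# Initial means are fixed instead of random so the run is reproducible.

def kmeans_gray(values, means, max_iter=100):
    clusters = [[] for _ in means]
    for _ in range(max_iter):
        # Step 3: assign each value to the cluster with the nearest mean.
        clusters = [[] for _ in means]
        for v in values:
            j = min(range(len(means)), key=lambda k: abs(v - means[k]))
            clusters[j].append(v)
        # Step 4: update the means.
        new_means = [sum(c) / len(c) if c else means[k]
                     for k, c in enumerate(clusters)]
        # Step 5: stop once the means (hence assignments) no longer change.
        if new_means == means:
            break
        means = new_means
    return means, clusters

pixels = [12, 15, 14, 200, 198, 210, 11, 205]
means, clusters = kmeans_gray(pixels, means=[0.0, 255.0])
print(sorted(round(m) for m in means))  # [13, 203]
```

The dark pixels collapse to one mean and the bright pixels to the other, which is exactly the thresholding-like segmentation the gray-scale examples on the following slides show.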

36. Simple Example
[Figure: K=2 on a 2D scatter. Arbitrarily choose K objects as initial cluster centers; assign each object to the most similar center; update the cluster means; reassign; repeat until stable.]

37. Space for K-Means
The example was in some arbitrary 2D space, but we don't want to cluster in that space. We will be clustering in gray-scale space or color space. K-means can be used to cluster in any n-dimensional space.

38. K-Means Example 1

39. K-Means Example 2

40. K-Means Example 3

41. K-Means Example 4
From: H. P. Ng, S. Ong, K. Foong, P. Goh, W. Nowinski, "Medical Image Segmentation Using K-Means Clustering and Improved Watershed Algorithm," 2006 IEEE Southwest Symposium on Image Analysis and Interpretation.

42. K-Means Example 5
From: H. P. Ng, S. Ong, K. Foong, P. Goh, W. Nowinski, "Medical Image Segmentation Using K-Means Clustering and Improved Watershed Algorithm," 2006 IEEE Southwest Symposium on Image Analysis and Interpretation.

43. K-Means Example 5: Superpixel clustering in breast biopsy images (benign, atypia, DCIS)

44. K-means Variants
- Different ways to initialize the means
- Different stopping criteria
- Dynamic methods for determining the right number of clusters (K) for a given image
- The EM Algorithm: a probabilistic formulation of K-means

45. Blobworld: Sample Results, using color, texture, and EM

46. Semantic Segmentation
Instead of grouping pixels based on color, texture, or other properties, teach a classifier what important regions look like, so it can find them. This is usually done via deep learning, which we will discuss later in the course. But here's a preview.

47. Training Labels
Labels: background, benign epithelium, malignant epithelium, normal stroma, desmoplastic stroma, secretion, blood, necrosis

48. Meaning of Labels
- Benign Epithelium: epithelial cells from the benign and atypia categories
- Malignant Epithelium: epithelial cells from DCIS and invasive cancer
- Normal Stroma: normal connective tissue
- Desmoplastic Stroma: stroma associated with a tumor
- Secretion: benign substance filling the ducts
- Necrosis: dead cells at the center of the ducts in DCIS and invasive cases
- Blood: blood cells
- Background: empty areas inside ducts

49. Superpixel + SVM-based Segmentation
Features: color and texture histograms. Results shown with no neighborhood, 1 neighborhood, and 2 neighborhoods of context, compared to ground truth. (Legend: background, benign epithelium, malignant epithelium, normal stroma, desmoplastic stroma, secretion, blood, necrosis)

50. CNN-based Segmentation
Input image → encoder-decoder → segmentation. Variants shown: 256 Plain and Multi-Resolution, compared to ground truth. (Legend: background, benign epithelium, malignant epithelium, normal stroma, desmoplastic stroma, secretion, blood, necrosis)

51. Supervised Tissue Label Segmentation
Superpixel + SVM:
- Each superpixel is assigned a class label.
- Context: two circular neighborhoods
- Relatively simple model
- Faster to train (~3 hours)
CNN:
- Each pixel is assigned a class label.
- Context: 256x256 and 384x384 pixel patches
- More complex model
- ~1 week to train on special hardware

52. Results
Mean F1-score: SP+SVM = 0.40, CNN = 0.50

53. Confusion Matrices: Superpixels + SVM vs. CNN

54. Segmentation Results
Columns: RGB, SVM predictions, ground truth labels, CNN predictions. (Legend: background, benign epi, malignant epi, normal stroma, desmoplastic stroma, secretion, blood, necrosis)

55. Segmentation Summary
- Tissue-label segmentation is a useful abstraction.
- We developed a set of 8 tissue labels and collected pixel-label data from a pathologist on 58 ROIs.
- We trained two models: SVM and CNN.
- CNNs performed significantly better than SVMs, both quantitatively and qualitatively.