Segmentation using eigenvectors: a unifying view

Yair Weiss
CS Division, UC Berkeley, Berkeley, CA 94720-1776
yweiss@cs.berkeley.edu

Abstract

Automatic grouping and segmentation of images remains a challenging problem in computer vision. Recently, a number of authors have demonstrated good performance on this task using methods that are based on eigenvectors of the affinity matrix. These approaches are extremely attractive in that they are based on simple eigendecomposition algorithms whose stability is well understood. Nevertheless, the use of eigendecompositions in the context of segmentation is far from well understood. In this paper we give a unified treatment of these algorithms, and show the close connections between them while highlighting their distinguishing features. We then prove results on eigenvectors of block matrices that allow us to analyze the performance of these algorithms in simple grouping settings. Finally, we use our analysis to motivate a variation on the existing methods that combines aspects from different eigenvector segmentation algorithms. We illustrate our analysis with results on real and synthetic images.

A human perceiving a scene can often easily segment it into coherent segments or groups. There has been a tremendous amount of effort devoted to achieving the same level of performance in computer vision. In many cases, this is done by associating with each pixel a feature vector (e.g. color, motion, texture, position) and using a clustering or grouping algorithm on these feature vectors.

Perhaps the cleanest approach to segmenting points in feature space is based on mixture models, in which one assumes the data were generated by multiple processes and estimates the parameters of the processes and the number of components in the mixture. The assignment of points to clusters can then be easily performed by calculating the posterior probability of a point belonging to a cluster. Despite the elegance of this approach, the estimation process leads to a notoriously difficult optimization. The frequently used EM algorithm [3] often converges to a local maximum that depends on the initial conditions.

Recently, a number of authors [11, 10, 8, 9, 2] have suggested alternative segmentation methods that are based on eigenvectors of the (possibly normalized) "affinity matrix". Figure 1a shows two clusters of points and figure 1b shows the affinity matrix defined by:

    W(i,j) = exp(-d(x_i, x_j) / 2σ²)    (1)

with σ a free parameter. In this case we have used d(x_i, x_j) = ||x_i - x_j||², but different definitions of affinities are possible. The affinities do not even have to obey the metric axioms (e.g. [7]); we will only assume that d(x_i, x_j) = d(x_j, x_i). Note that we have ordered the points so that all points belonging to the first cluster appear before the points in the second cluster. This helps the visualization of the matrices but does not change the algorithms: eigenvectors of permuted matrices are the permutations of the eigenvectors of the original matrix.

From visual inspection, the affinity matrix contains information about the correct segmentation. In the next section we review four algorithms that look at eigenvectors of affinity matrices. We show that while seemingly quite different, these algorithms are closely related and all use dominant eigenvectors of matrices to perform segmentation. However, these approaches use different matrices, focus on different eigenvectors and use different methods of going from the continuous eigenvectors to the discrete segmentation. In section 2 we prove results on eigendecompositions of block matrices and use these results to analyze the behavior of these algorithms and motivate a new hybrid algorithm. Finally, in section 3 we discuss the application of these algorithms to affinity matrices derived from images.
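For concreteness, the affinity matrix of equation 1 can be computed in a few lines. The sketch below is our own numpy illustration, assuming d(x_i, x_j) = ||x_i - x_j||² and a 2σ² scaling in the exponent; the constants used in the paper's experiments may differ:

```python
import numpy as np

def affinity_matrix(X, sigma=1.0):
    """Affinity matrix of equation 1: W(i,j) = exp(-d(x_i, x_j) / 2 sigma^2),
    with d(x_i, x_j) = ||x_i - x_j||^2 (other symmetric choices of d are possible)."""
    # Pairwise squared Euclidean distances via broadcasting.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Two well-separated clusters, ordered so the first cluster comes first
# (as in figure 1, this makes the block structure of W visible).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2))])
W = affinity_matrix(X, sigma=1.0)
# W is symmetric, and within-cluster entries dominate between-cluster ones.
```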
Figure 1: a. A simple clustering problem. b. The affinity matrix. c. The first eigenvector. d. The second generalized eigenvector. e. The Q matrix.

Figure 2: a. Another simple clustering problem. b. The affinity matrix. c. The first eigenvector. d. The second generalized eigenvector. e. The Q matrix.

Figure 3: a. Another simple clustering problem. b. The affinity matrix. c. The first eigenvector. d. The second generalized eigenvector. e. The Q matrix.

Figure 4: a. A single frame from a scene with two rigidly moving objects. b. The affinity matrix. c. The Q matrix.
1 The algorithms

1.1 The Perona and Freeman (1998) algorithm

Perona and Freeman [8] suggested a clustering algorithm based on thresholding the first eigenvector of the affinity matrix (throughout this paper we refer to the "first" eigenvector as the one whose corresponding eigenvalue is largest in magnitude). This is closely related to an approach suggested by Sarkar and Boyer [9] in the context of change detection. Figure 1c shows the first eigenvector of the affinity matrix in figure 1b. Indeed, the eigenvector can be used to easily separate the two clusters.

Why does this method work? Perona and Freeman have shown that for block diagonal affinity matrices, the first eigenvector will have nonzero components corresponding to points in the dominant cluster and zeros in components corresponding to points outside the dominant cluster. Figure 2 shows that when the non-diagonal blocks are nonzero, the picture is a bit more complicated. Figure 2a shows two very tight clusters where we have constrained both clusters to have exactly the same number of points. Figure 2b shows the affinity matrix with the evident block structure. Figure 2c shows the first eigenvector. Note that there is no correlation between the components of the eigenvector and the correct segmentation. Figure 3 shows another example where the Perona and Freeman (PF) algorithm works successfully.

1.2 The Shi and Malik (1997) algorithm

Shi and Malik have argued for using a quite different eigenvector for solving these types of segmentation problems. Rather than examining the first eigenvector of W, they look at generalized eigenvectors. Let D be the degree matrix of W:

    D(i,i) = Σ_j W(i,j)    (2)

Define a generalized eigenvector y as a solution to:

    (D - W) y = λ D y    (3)

and define the second generalized eigenvector as the y corresponding to the second smallest λ. Shi and Malik suggested thresholding this second generalized eigenvector of W in order to cut the image into two parts. Figures 1d and 2d show the second generalized eigenvector of W for the two cases. Indeed these vectors can be easily thresholded to give the correct segmentation.

Why does this method work? Shi and Malik have shown that the second generalized eigenvector is a solution to a continuous version of a discrete problem in which the goal is to minimize:

    min_y  y^T (D - W) y / (y^T D y)    (4)

subject to the constraint that y_i take on one of two discrete values and y^T D 1 = 0 (where 1 is the vector of all ones). The significance of the discrete problem is that its solution can be shown to give you the segmentation that minimizes the normalized cut:

    Ncut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)    (5)

where cut(A,B) = Σ_{i∈A, j∈B} W(i,j) and assoc(A,V) = Σ_{i∈A, j∈V} W(i,j). Thus the solution to the discrete problem finds a segmentation that minimizes the affinity between groups normalized by the affinity within each group.

As Shi and Malik noted, there is no guarantee that the solution obtained by ignoring the constraints and optimizing equation 4 will bear any relationship to the correct discrete solution. Indeed, they show that the discrete optimization of equation 4 is NP-complete. Thus the connection to the discrete optimization problem does not rigorously answer the question of why the second generalized eigenvector should give us a good segmentation. Nevertheless, in cases when the solution to the unconstrained problem happens to satisfy the constraints (as in the first two examples), we can infer that it is close to the constrained problem's solution. But what of cases when the second generalized eigenvector doesn't satisfy the constraints? Figure 3a shows an example. The second generalized eigenvector does not have two discrete values, but it obviously gives very good information on the correct segmentation (as does the first eigenvector). Why is that?

Note that while Perona and Freeman use the largest eigenvector, Shi and Malik use the second smallest generalized eigenvector. Thus the two approaches appear quite different. There is, however, a closer connection. Define the normalized affinity matrix:

    N = D^{-1/2} W D^{-1/2}    (6)

We call this a normalized affinity matrix following [1]. Note that N(i,j) = W(i,j) / sqrt(D(i,i) D(j,j)). Given N, the following normalization lemma is easily shown:

Normalization Lemma: 1. Let v be an eigenvector of N with eigenvalue λ; then D^{-1/2} v is a generalized eigenvector of W with eigenvalue 1 - λ. 2. The vector D^{1/2} 1 is an eigenvector of N with eigenvalue 1.

Thus the second smallest generalized eigenvector of W can be obtained by a componentwise ratio of the second and first largest eigenvectors of N.
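The normalization lemma is easy to check numerically. The sketch below is our own illustration (not code from the paper): it builds a small two-cluster affinity matrix, solves the Shi-Malik generalized problem of equation 3 directly with scipy, and confirms that the componentwise ratio of the second and first largest eigenvectors of N reproduces the second generalized eigenvector up to a constant scale:

```python
import numpy as np
from scipy.linalg import eigh  # solves the symmetric generalized problem A y = lam B y

# Two tight clusters, as in the setting of figure 1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (8, 2)), rng.normal(4.0, 0.2, (8, 2))])
d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=2)
W = np.exp(-d2 / 2.0)
deg = W.sum(axis=1)
D = np.diag(deg)

# Shi-Malik generalized eigenproblem (D - W) y = lam D y (equation 3);
# eigenvalues are returned in ascending order.
lam, Y = eigh(D - W, D)
y_sm = Y[:, 1]                        # second generalized eigenvector

# Normalized affinity N = D^{-1/2} W D^{-1/2} (equation 6).
N = W / np.sqrt(np.outer(deg, deg))
mu, V = np.linalg.eigh(N)             # ascending: mu[-1] is the largest

# Lemma part 1: the generalized eigenvalues are 1 - mu.
assert np.allclose(np.sort(1 - mu), np.sort(lam))
# Lemma part 2: the largest eigenvalue of N is 1 (eigenvector D^{1/2} 1).
assert np.isclose(mu[-1], 1.0)
# Componentwise ratio of the second and first largest eigenvectors of N
# is proportional to the second generalized eigenvector.
c = y_sm / (V[:, -2] / V[:, -1])
assert np.allclose(c, c[0])
```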
The Shi and Malik (SM) algorithm thus differs from PF in that (1) it uses a normalized W matrix and (2) it uses the first two eigenvectors rather than just the first one.

1.3 The Scott and Longuet-Higgins (1990) algorithm

The Scott and Longuet-Higgins [10] relocalisation algorithm gets as input an affinity matrix W and a number k, and outputs a new matrix Q calculated by:

1. Constructing the matrix V whose columns are the first k eigenvectors of W.
2. Normalizing the rows of V so that they have unit Euclidean norm: V(i,:) = V(i,:) / ||V(i,:)||.
3. Constructing the matrix Q = V V^T.
4. Segmenting the points by looking at the elements of Q. Ideally, Q(i,j) = 1 if points i and j belong to the same group and Q(i,j) = 0 if they belong to different groups.

Figures 1e-3e show the Q matrix computed by the Scott and Longuet-Higgins (SLH) algorithm for the cases surveyed above. Note that in all cases, the Q(i,j) entries for points belonging to the same group are close to 1 and those for points belonging to different groups are close to 0.

1.4 The Costeira and Kanade (1995) algorithm

Independently of the recent work on using eigenvectors of affinity matrices to segment points in feature space, there has been interest in using singular values of the measurement matrix to segment the points into rigidly moving bodies in 3D [2, 4]. Although these algorithms seem quite different from the ones discussed so far, they are in fact very closely related. To see the connection, we review the Costeira and Kanade algorithm.

Suppose we track n points over f frames. The measurement matrix M is an n × (2f) matrix:

    M = (X Y)    (7)

where X(i,j) and Y(i,j) give the x, y coordinates of point i in frame j. The method of Costeira and Kanade segments these points by taking the first k singular vectors of M (where k is the rank of the matrix), putting them into a matrix V whose columns are the singular vectors, and then constructing the matrix Q:

    Q = V V^T    (8)

Q is an n × n matrix, and Q(i,j) = 0 for any two points that belong to different objects.

What does this have to do with eigenvectors of affinity matrices? Recall that the singular vectors of M are by definition the eigenvectors of M M^T. M M^T is an n × n matrix that can be thought of as an affinity matrix: the affinity of points i and j is simply the inner product between their trajectories (X(i,:), Y(i,:)) and (X(j,:), Y(j,:)). Given this definition of affinity, the Costeira and Kanade algorithm is nearly identical to the SLH algorithm. Figure 4 illustrates the Costeira and Kanade algorithm.

2 Analysis of the algorithms in simple grouping settings

In this section we use properties of block matrices to analyze the algorithms. To simplify notation, we assume the data has two clusters. We partition the matrix W into the following form:

    W = ( A    C )
        ( C^T  B )    (9)

where A and B represent the affinities within the two clusters and C represents the between-cluster affinity. Our strategy in this section is to prove results on idealized block matrices and then appeal to perturbation theorems on eigenvectors [6] to generalize the results to cases where the matrices are only approximately of this form.

2.1 Approximately constant blocks

We begin by assuming the matrices A, B, C are constant. As can be seen from equation 1, this will be the case when the variation of the within- and between-cluster dissimilarities is significantly smaller than σ². Thus W(i,j) depends only on the memberships of points i and j. Note that we do not assume that the between-cluster affinity is zero, or even that it is smaller than the within-cluster affinity.

Under these assumptions we can analyze the behavior of the three algorithms exactly:

Claim 1: Assume W(i,j) depends only on the memberships of points i, j. Let y be the indicator vector of the PF algorithm (i.e. the first eigenvector of W). If point i and point j belong to the same cluster then y(i) = y(j).

Claim 2: Assume W(i,j) depends only on the memberships of points i, j. Let y be the indicator vector of the SM algorithm (the second generalized eigenvector of W). If point i and point j belong to the same cluster then y(i) = y(j).

Claim 3: Assume W(i,j) depends only on the memberships of points i, j. Then Q(i,j) in the SLH algorithm with k = 2 eigenvectors is equal to 1 if points i and j belong to the same group and 0 otherwise.
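Claim 3 is easy to verify numerically. The sketch below is our numpy illustration (the block sizes and the constants a, b, c are arbitrary choices, not from the paper): it builds a constant-block W = O S O^T and checks that the SLH Q matrix with k = 2 is exactly the same-cluster indicator:

```python
import numpy as np

def slh_Q(W, k):
    """Scott & Longuet-Higgins: Q = V V^T where V holds the first k
    eigenvectors of W (largest magnitude), with rows normalized to unit length."""
    lam, U = np.linalg.eigh(W)
    V = U[:, np.argsort(-np.abs(lam))[:k]]   # "first" = largest |eigenvalue|
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return V @ V.T

# Constant-block W = O S O^T (equations 10-12): affinities depend only on
# cluster membership; the between-cluster affinity c need not be small.
n1, n2, a, b, c = 6, 4, 1.0, 0.8, 0.5
O = np.zeros((n1 + n2, 2))
O[:n1, 0] = 1.0
O[n1:, 1] = 1.0
S = np.array([[a, c], [c, b]])
W = O @ S @ O.T

Q = slh_Q(W, k=2)
# Claim 3: Q(i,j) = 1 for same-cluster pairs, 0 for different-cluster pairs.
same = O @ O.T
assert np.allclose(Q, same, atol=1e-8)
```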
The proof of these claims follows from the following decomposition of W:

    W = O S O^T    (10)

with O a binary n × 2 matrix whose columns are membership vectors for the clusters:

    O = ( 1 0 )
        ( ⋮ ⋮ )
        ( 1 0 )
        ( 0 1 )
        ( ⋮ ⋮ )
        ( 0 1 )    (11)

and S a small 2 × 2 matrix that contains the constant values of W:

    S = ( a  c )
        ( c  b )    (12)

Obviously, if we had an algorithm that, given W, gave us O, then segmentation would be trivial. Unfortunately, the decomposition in equation 10 is not an eigendecomposition, so standard linear algebra algorithms will not recover it. However, eigendecomposition algorithms will recover a rotation of a suitably normalized O. It can be shown that if V is a matrix whose two columns are the first two eigenvectors of W, then V = O D R, where D here is a 2 × 2 diagonal matrix and R is a 2 × 2 rotation matrix. Hence the claims.

Note that for the PF and SM algorithms we cannot prove that points belonging to different clusters will have different indicator values. We can only prove that points belonging to the same cluster will have the same value. Thus in figure 2c the first eigenvector of W has roughly equal values for all points, both those belonging to the same cluster and those belonging to different clusters; any visible variation is due to noise. It is only for the SLH algorithm that we can guarantee that points belonging to different clusters will be separated.

2.2 Non-constant block diagonal matrices

Here we assume that the within-cluster affinities, i.e. the matrices A and B, are arbitrary matrices with positive elements. The between-cluster affinity, i.e. the matrix C, is assumed to be zero. We denote by λ_i(A), λ_i(B) the eigenvalues of matrices A and B respectively, ordered by decreasing magnitude.

Claim 4: Assume between-cluster affinities are zero and within-cluster affinities are positive. Let y be the PF indicator vector. If λ_1(A) > λ_1(B) then y(i) > 0 for all points belonging to the first cluster and y(i) = 0 for all points belonging to the second cluster.

Claim 5: Assume between-cluster affinities are zero and within-cluster affinities are positive. Let y be the SM indicator vector; then y(i) = y(j) if points i, j belong to the same cluster.

Claim 6: Assume between-cluster affinities are zero and within-cluster affinities are positive. Let Q be the SLH matrix constructed from W with k = 2. If λ_1(A) > λ_2(B) and λ_1(B) > λ_2(A), then Q(i,j) = 1 if points i, j belong to the same cluster and zero otherwise.

Claim 4 was proven in [8], and the proof of claim 6 is analogous: if u is an eigenvector of A then (u; 0) is an eigenvector of W with the same eigenvalue. Thus the conditions of claim 6 guarantee that the first two eigenvectors of W will be (u_1(A); 0) and (0; u_1(B)). Claim 5 follows from the normalization lemma proven in the previous section. The vectors (D_A^{1/2} 1; 0) and (0; D_B^{1/2} 1) are both eigenvectors of N with eigenvalue 1, where D_A and D_B are the degree matrices of A and B. Thus the second generalized eigenvector of W will be some linear combination of these two vectors multiplied by D^{-1/2}, so it will be constant for points belonging to the same cluster.

Note that, as in the case of constant block matrices, for the PF and SM algorithms we cannot guarantee that points belonging to different clusters can be easily segmented. In the PF algorithm y(i) is guaranteed to be positive for all points in the first cluster, but there is no guarantee of how positive. Figure 5c illustrates this: many points in the "foreground" cluster have components that are positive yet close to zero. In the SM algorithm, since N has two identical first eigenvalues, the indicator vector may be any linear combination of the corresponding eigenvectors, so the difference between values for the first and second cluster is arbitrary and depends on the implementation details of the eigendecomposition algorithm. In the SLH algorithm, we can again guarantee that different clusters will be segmented, but we require an additional constraint on the eigenvalues of the blocks. Figure 5e shows what happens when this additional constraint does not hold: in this case the first two eigenvectors of W are (0; u_1(B)) and (0; u_2(B)) and the Q matrix does not find the correct segmentation.

To summarize, when the W matrix has constant blocks then all three algorithms will work, although extracting the discrete segmentation is probably easiest in the SLH algorithm. In this case, normalizing the matrix does not make any difference. When the blocks are not constant, however, and the between-cluster affinities are zero, the normalization makes a big difference in that it reorders the eigenvectors.

This analysis suggests a combined (SM+SLH) algorithm in which the SLH algorithm is applied to the normalized W matrix, N, rather than to the raw affinity matrix.
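A minimal sketch of this combined algorithm, in our own numpy illustration (the block-diagonal example with non-constant positive blocks is ours, chosen to match the setting of claims 5 and 6):

```python
import numpy as np

def combined_sm_slh(W, k):
    """Combined algorithm suggested in the text: apply the SLH construction
    (top-k eigenvectors, row-normalize, Q = V V^T) to N = D^{-1/2} W D^{-1/2}."""
    deg = W.sum(axis=1)
    N = W / np.sqrt(np.outer(deg, deg))      # N(i,j) = W(i,j)/sqrt(D(i,i) D(j,j))
    lam, U = np.linalg.eigh(N)
    V = U[:, np.argsort(-np.abs(lam))[:k]]   # first k eigenvectors of N
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return V @ V.T

# Block-diagonal W with non-constant positive blocks (between-cluster C = 0).
rng = np.random.default_rng(2)
A = rng.uniform(0.5, 1.0, (5, 5)); A = (A + A.T) / 2
B = rng.uniform(0.5, 1.0, (7, 7)); B = (B + B.T) / 2
W = np.block([[A, np.zeros((5, 7))], [np.zeros((7, 5)), B]])

Q = combined_sm_slh(W, k=2)
labels = np.array([0] * 5 + [1] * 7)
same = (labels[:, None] == labels[None, :]).astype(float)
# In this idealized setting Q is exactly the same-cluster indicator.
assert np.allclose(Q, same, atol=1e-8)
```

Because each normalized block has top eigenvalue exactly 1, the two leading eigenvectors of N span the membership subspace regardless of the eigenvalues of A and B, which is why no extra eigenvalue constraint (as in claim 6) is needed here.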
Figure 5: a. Another simple clustering problem. b. The affinity matrix. c. The first eigenvector. d. The second generalized eigenvector. e. The Q matrix of the SLH algorithm.

Indeed, when we run this combined algorithm on the data in figure 5a, the correct segmentation is found. We summarize the properties of the combined (SM+SLH) algorithm:

Claim 7: Assume affinities are only a function of point membership, or assume that the between-cluster affinities are zero and the within-cluster affinities are positive. Under both assumptions, Q(i,j) in the combined (SM+SLH) algorithm is one if points i and j belong to the same cluster and zero otherwise.

Note that in the idealized cases we have been analyzing, where between-cluster affinities are zero and within-cluster affinities are positive, a simple connected-components algorithm will find the correct segmentation. However, the perturbation theorems of eigenvectors guarantee that our claims still hold with small perturbations around these idealized matrices, even when the between-cluster affinities are nonzero.
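This robustness can be illustrated numerically. The sketch below is ours (the perturbation size and the 0.5 threshold are our choices, not from the paper): it adds small nonzero between-cluster affinities to a block-diagonal W and checks that thresholding the combined algorithm's Q still recovers the segmentation:

```python
import numpy as np

# Block-diagonal W perturbed by small between-cluster affinities C.
rng = np.random.default_rng(3)
A = rng.uniform(0.5, 1.0, (5, 5)); A = (A + A.T) / 2
B = rng.uniform(0.5, 1.0, (7, 7)); B = (B + B.T) / 2
C = rng.uniform(0.0, 0.05, (5, 7))          # small compared with A, B entries
W = np.block([[A, C], [C.T, B]])

# Combined (SM+SLH): SLH construction applied to N = D^{-1/2} W D^{-1/2}.
deg = W.sum(axis=1)
N = W / np.sqrt(np.outer(deg, deg))
lam, U = np.linalg.eigh(N)
V = U[:, np.argsort(-np.abs(lam))[:2]]
V = V / np.linalg.norm(V, axis=1, keepdims=True)
Q = V @ V.T

labels = np.array([0] * 5 + [1] * 7)
same = labels[:, None] == labels[None, :]
# Q is no longer exactly 0/1, but a simple threshold recovers the clusters.
assert np.all((Q > 0.5) == same)
```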

In the following section, we show that our analysis for idealized matrices also predicts the behavior on affinity matrices derived from images.

3 Affinity matrices of images

Perona and Freeman conducted a comparison between the first eigenvector of W and the second generalized eigenvector of W when W is constructed by representing each pixel with a (position, intensity) feature vector. In their comparison, the eigenvector of W had a much less crisp representation of the correct segmentation. We have found this to be the case generally for matrices constructed in this way from images. Figures 6-9 show examples.

Figure 6a shows the baseball player figure from [11]. We constructed a W matrix using the same constants. Figures 6b-e show the first four eigenvectors of W. Note that there is very little information in these eigenvectors regarding the correct segmentation (the pictures do not change when we show log intensities). Figures 6f-i show the first four eigenvectors of the normalized affinity matrix N. Note that, at least visually, all eigenvectors appear to be correlated with the correct segmentation.

How should this information be recovered? Figure 7a shows the SM indicator vector displayed as an image. Although it contains the information, it is not at all clear how to extract the correct segments from this image: the pixels belonging to the same object do not have constant value but rather vary smoothly. Furthermore, there is obviously additional information in the other eigenvectors. Figure 7b shows a single column from the Q matrix constructed by the combined (SM+SLH) method with k = 6 eigenvectors, displayed as an image. Ideally, if we had the correct k, this column should be all ones for a single object and zeros for points not belonging to the object. Even for a k that is too small, this column should have all ones for a single object (but not necessarily zeros for the other pixels). Indeed, we find that the value is nearly one for points belonging to the same object. Figure 7c shows a cross-section: note that all points corresponding to the baseball player are essentially at 1. It is trivial to extract the baseball player from this representation. Figure 7d shows a second column. Again, all pixels corresponding to the second baseball player are very close to 1.
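One plausible construction of such a pixel affinity matrix is sketched below. This is our own illustration: the feature weighting and the constants sigma_x and sigma_i are assumptions, not the values used in the paper's experiments:

```python
import numpy as np

def image_affinity(img, sigma_x=2.0, sigma_i=0.1):
    """Pixel affinities from (position, intensity) features: a Gaussian in
    spatial distance times a Gaussian in intensity difference."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    val = img.ravel().astype(float)
    dx2 = np.sum((pos[:, None] - pos[None, :]) ** 2, axis=2)  # spatial distance^2
    di2 = (val[:, None] - val[None, :]) ** 2                  # intensity difference^2
    return np.exp(-dx2 / (2 * sigma_x ** 2) - di2 / (2 * sigma_i ** 2))

# A tiny synthetic "image": a bright object on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
W = image_affinity(img)
# W is a (h*w) x (h*w) symmetric affinity matrix over all pixel pairs.
```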

Exactly the same behavior is observed in the dancer image. The information in figure 9a is sufficient to give a segmentation, but extracting it is not trivial: in the cross-section (figure 9b) the variation between groups is similar to the variation within groups. Figures 9c-d show a row of the Q(i,j) matrix in the combined (SM+SLH) algorithm and the same cross-section. Extracting the discrete segmentation is trivial.

Figure 6: a. The baseball image from [11]. b-e. The eigenvectors of the affinity matrix W; note there is very little correlation with the desired segmentation. f-i. The eigenvectors of the normalized affinity matrix N; note that all eigenvectors are correlated with the desired segmentation.

Figure 7: a. The second generalized eigenvector of W for the baseball image. Although there is information here regarding the correct segmentation, its extraction is non-trivial. b. A row of the Q matrix in the combined (SM+SLH) algorithm for the baseball image; ideally all pixels corresponding to the same object should have value 1. c. A cross-section through the pixels in b; note that pixels corresponding to the first baseball player are nearly all 1. d. A different row of the Q matrix; all pixels corresponding to the second baseball player are 1.

Figure 8: a. A gray level image of a ballet dancer. b-e. The eigenvectors of the affinity matrix W; note there is very little correlation with the desired segmentation. f-i. The eigenvectors of the normalized affinity matrix N; note that all eigenvectors are correlated with the desired segmentation.

Figure 9: a. The second generalized eigenvector of W for the dancer image. Although there is information here regarding the correct segmentation, its extraction is non-trivial. b. A horizontal cross-section through a; note that the variation between groups is of a similar order of magnitude as the variation within groups. c. A row of the Q matrix in the combined (SM+SLH) algorithm for the dancer image. d. A cross-section through the pixels in c; note that pixels corresponding to the dancer are nearly all 1.

4 Discussion

Why do eigendecomposition methods for segmentation work? In this paper we have presented a unified view of three of these methods: Perona and Freeman [8], Shi and Malik [11], and Scott and Longuet-Higgins [10]. We showed the similarities and the differences. The similarity is that they all use the top eigenvectors of a matrix. They differ in two ways: which eigenvectors to look at, and whether to normalize the matrix in advance. Using properties of block matrices, we showed that when W has constant block structure, all three of these methods will yield eigenvectors that carry some information.

We also showed analytically the importance of normalization when the matrix is block diagonal with non-constant blocks. As suggested by the analysis, we found that for real images, unless the matrix is normalized in the form suggested by Shi and Malik [11], it is nearly impossible to extract segmentation information from the eigenvectors. In all our analysis and experiments, we never found an example where using normalized N rather than raw W degraded performance.

This suggested a scheme that combines the SM algorithm with the SLH algorithm: work with eigenvectors of the normalized matrix N, but use the first k eigenvectors rather than just the first two. This is similar in spirit to the approach of [12], where the first k eigenvectors of N were used to define a new affinity matrix between the points. Our experimental results on real images are encouraging: by using the first k eigenvectors and combining them into the SLH Q matrix, we extract a representation that leads trivially to a discrete segmentation.

We have also discussed a seemingly unrelated rigid body segmentation algorithm, Costeira and Kanade [2], and shown that it is nearly identical to SLH with a particular definition of affinity. It was this connection that motivated the analysis in section 2: we wanted to generalize that type of analysis to arbitrary affinity matrices. In the case of multibody rigid grouping, there has been additional progress made by using algorithms that do not use eigendecompositions but rather other, more stable matrix decompositions such as the reduced echelon form [4, 5]. Given the close connection between the two problems, we are currently experimenting with using these alternative decompositions in the general grouping context.

The main goal of presenting these algorithms in a unified framework is to enable future work to build on the collective progress made by many researchers in different subfields. We hope that research into the difficult problem of segmentation will benefit from the connections we have pointed out between the different algorithms.

Acknowledgements

I thank W. Freeman, J. Shi, J. Malik and T. Leung for helpful comments and discussions. Supported by MURI-ARO-DAAH04-96-1-0341.

References

[1] F. R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[2] J. Costeira and T. Kanade. A multibody factorization method for motion analysis. In Proc. International Conf. Computer Vision, pages 1071-1076, 1995.
[3] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. R. Statist. Soc. B, 39:1-38, 1977.
[4] C. W. Gear. Feature grouping in moving images. In Proc. IEEE Workshop on Motion of Non-Rigid and Articulated Objects, pages 214-219, 1994.
[5] C. W. Gear. Multibody grouping from motion images. IJCV, 29(2):133-150, 1998.
[6] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins Press, 1989.
[7] D. W. Jacobs, D. Weinshall, and Y. Gdalyahu. Class representation and image retrieval with non-metric distances. In Proc. International Conference on Computer Vision, 1998.
[8] P. Perona and W. T. Freeman. A factorization approach to grouping. In H. Burkardt and B. Neumann, editors, Proc. ECCV, pages 655-670, 1998.
[9] S. Sarkar and K. L. Boyer. Quantitative measures of change based on feature organization: eigenvalues and eigenvectors. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1996.
[10] G. L. Scott and H. C. Longuet-Higgins. Feature grouping by relocalisation of eigenvectors of the proximity matrix. In Proc. British Machine Vision Conference, pages 103-108, 1990.
[11] J. Shi and J. Malik. Normalized cuts and image segmentation. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 731-737, 1997.
[12] J. Shi and J. Malik. Self inducing relational distance and its application to image segmentation. In Proc. European Conf. Computer Vision, pages 538-543, 1998.