On Objective Conflicts and Objective Reduction in Multiple Criteria Optimization

Dimo Brockhoff and Eckart Zitzler
{brockhoff,zitzler}@tik.ee.ethz.ch

TIK-Report No. 243
Institut für Technische Informatik und Kommunikationsnetze, ETH Zürich
Gloriastrasse 35, ETH-Zentrum, CH-8092 Zürich, Switzerland
February 2006

Abstract. A common approach in multiobjective optimization is to perform the decision making process after the search process: first, a search heuristic approximates the set of Pareto-optimal solutions, and then the decision maker chooses an appropriate trade-off solution from the resulting approximation set. Both processes are strongly affected by the number of optimization criteria: the more objectives are involved, the more complex are both the optimization problem and the choice for the decision maker. In this context, the question arises whether all objectives are actually necessary and whether some of the objectives may be omitted; this question in turn is closely linked to the fundamental issue of conflicting and non-conflicting optimization criteria. Besides a general definition of conflicts between objective sets, we here introduce the problem of computing a minimum subset of objectives without losing information (MOSS) and show that it is an NP-hard problem. Furthermore, we present for MOSS both an approximation algorithm with optimum approximation ratio and an exact algorithm which works well for small input instances. The paper concludes with experimental results for random sets and the multiobjective 0/1-knapsack problem.

1 Motivation

With the availability of sufficient computing resources, generating methods for identifying or approximating the set of Pareto-optimal solutions have become increasingly popular for tackling multiobjective optimization problems. The advantage of these methods is that the decision making process is postponed until after the optimization process: the decision maker can choose an appropriate trade-off solution from a set of alternative solutions generated by the corresponding search algorithm. However, the complexity of both processes is strongly affected by the number of objectives involved. On the one hand, the running time of generating methods may be exponential in the number of objectives as, e.g., for algorithms based on the hypervolume indicator [14, 5, 10]; on the other hand, comparing even only a few alternative solutions may become difficult or infeasible for a human decision maker if too many objectives are considered simultaneously. In the light of this discussion, the question arises whether it is possible to omit some of the objectives without changing the characteristics of the underlying problem. Furthermore, one may ask under which conditions such an objective reduction is feasible and how a minimum set of objectives can be computed. These questions have gained only little attention in the literature so far. There are closely related research topics such as principal component analysis [4] and dimension theory [9], which have a different focus, though. Transferred to the multiobjective optimization setting, the corresponding methods aim at determining a (minimum) set of arbitrary objective functions that preserves (most of) the problem characteristics; however, here we are interested in determining a minimum subset of the original objectives that maintains the order on the search space. Furthermore, there are a few studies that investigate the relationships between objectives in terms of conflicting and non-conflicting optimization criteria. Deb [2] defines a set of objectives as non-conflicting if there exists one solution that simultaneously achieves for each criterion the optimal value; otherwise the set is conflicting. Tan, Khor, and Lee [8] presented a refinement of this definition where a conflict denotes the existence of incomparable solutions in the search space. A similar notion of conflict has been suggested by Purshouse and Fleming [6], who consider conflict as a binary relation between single objectives. However, these definitions are not sufficient to indicate whether objectives can be omitted or not, as the following example demonstrates: although all objectives are conflicting according to [2, 6, 8], one of the three objectives can be removed while preserving the search space order.

Example 1. Fig. 1 shows the parallel coordinates plot, cf. [6], of three solutions x1 (solid line), x2 (dotted), and x3 (dashed) that are pairwise incomparable. Assuming that x1, x2, and x3 represent the entire search space, the original objective set {f1, f2, f3} is conflicting according to [2, 8], and all objective pairs "exhibit evidence of conflict" as defined in [6]. Nevertheless, the objective set {f1, f2, f3} contains redundant information: the objective f3 can be omitted, and all solutions remain incomparable to each other with regard to the objective set {f1, f2}.

[Fig. 1. Parallel coordinates plot for three solutions and three objectives.]

This paper addresses two open issues: (i) deriving general conditions under which certain objectives may be omitted and (ii) computing a minimum subset of objectives needed to preserve the problem structure. In particular, we

- propose a generalized notion of objective conflicts which comprises the definitions of Deb [2], Tan et al. [8], and Purshouse and Fleming [6],
- specify on this basis a necessary and sufficient condition under which objectives can be omitted,
- introduce the problem of minimum objective subsets (MOSS),
- show that MOSS is NP-hard,
- provide an approximation algorithm with optimum approximation ratio as well as an exact algorithm which has polynomial runtime in the decision space size, and

(Footnote: Two solutions are incomparable iff each is better than the other one in some objectives.)
- validate our approach on both random problems and the 0/1-knapsack problem by comparing the algorithms and investigating the influence of the number of objectives and the search space size.

In addition, extensions of the proposed approach will be discussed in the last section.

2 A Notion of Objective Conflicts

2.1 The Relation Between Objectives and Orders

A general optimization problem can be considered as a quadruple (X, Z, f, rel), where X denotes the search space or decision space, Z represents the objective space, f : X → Z is a function that assigns to each solution or decision vector x ∈ X a corresponding objective vector f(x) ∈ Z, and rel represents a partial order over Z. The goal is to find a solution x ∈ X that is mapped to a minimal element of f(X) := {f(x) | x ∈ X} regarding the partially ordered set (Z, rel).

In the scenario considered in this paper, f consists of one or several objective functions f1, f2, ..., fk that are all to be minimized, where f = (f1, ..., fk), fi : X → R for 1 ≤ i ≤ k, and Z = R^k. Furthermore, we assume that rel is the ≤ relation on real vectors, which induces a corresponding preorder ⪯ on X with x ⪯ y :⇔ f(x) ≤ f(y). The relation ⪯ is also known as weak Pareto dominance, and we say x weakly dominates y whenever x ⪯ y; other dominance relations such as epsilon dominance, cf. [14], could be taken as well, and the following discussion applies to any preorder on X that is defined by a corresponding partial order on Z. The minimal elements of f(X) with respect to ≤ form the so-called Pareto front, and solutions that are mapped to elements of the Pareto front are denoted as Pareto-optimal and constitute the Pareto set. If there exist two incomparable Pareto-optimal solutions, i.e., neither weakly dominates the other one (x ‖ y), then the cardinality of the Pareto front is greater than 1. If two solutions are indifferent, i.e., they are mapped to the same objective vector (f(x) = f(y)), then the relation ⪯ is only a preorder, but not a partial order on X. However, we can define a partial order on the set X/~ of equivalence classes regarding (X, ~) as follows: ∀[p], [q] ∈ X/~ : [p] ⪯ [q] :⇔ p ⪯ q.

(Footnote: A relation rel is called a preorder iff it is reflexive and transitive; a preorder that is antisymmetric is denoted as partial order. We call a partial order a linear order if it is total; a preorder that is total is called a total preorder. Given a partially ordered set (Z, rel), an element z ∈ Z is called a minimal element of Z iff for all z' ∈ Z with z' ≠ z holds: ¬(z' rel z).)

The remainder of this paper addresses the issue of finding a minimum subset of the objectives that induces the same preorder on the decision space as the complete set of objectives. To this end, we here introduce a generalization of the weak Pareto dominance relation defined above: a decision vector x weakly dominates a decision vector y w.r.t. the set F ⊆ {f1, f2, ..., fk} of objective functions (written as x ⪯_F y) iff ∀fi ∈ F : fi(x) ≤ fi(y). We will write ⪯_i if we mean the weak dominance relation w.r.t. {fi}; in addition, we define ⪯_∅ := X × X for the case that F is empty. The following theorem shows that for any objective function set F, the generalized weak Pareto dominance relation ⪯_F can be derived from the objective-wise less-than-or-equal relations on X.

Theorem 1. Let F ⊆ {f1, ..., fk} be a set of different objective functions. Then it holds: ⪯_F = ∩_{fi ∈ F} ⪯_i.

Proof: For all x, y ∈ X:
x ⪯_F y ⇔ ∀fi ∈ F : fi(x) ≤ fi(y) ⇔ ∀fi ∈ F : x ⪯_i y ⇔ (x, y) ∈ ∩_{fi ∈ F} ⪯_i. □

Note that the above equivalence also holds for the strict dominance relation and the multiplicative epsilon-dominance relation, cf. [14], but does not apply to the regular Pareto dominance relation defined as x ⪯ y ∧ ¬(y ⪯ x).

Finally, we will use a graphical notation for relations, called relation graphs. Given a certain ordered set (Z, rel), the relation graph for (Z, rel) has a vertex per element in Z and a directed edge between the vertices z1 and z2 if and only if z1 rel z2. For a partially ordered set, the relation graph can be reduced to a Hasse diagram, with an edge between vertices z1 and z2 iff z1 is a lower cover of z2. The relation graph is only another description of a relation but helps us to visualize our ideas.

Example 2. Let X := {A, B, C, D, E} and f = (f1, f2) be a multiobjective optimization problem where f is specified by a table of objective values. [The table did not survive this transcript.] Fig. 2 shows the relation graph of (X, ⪯_{{f1,f2}}) as well as the relation graph and Hasse diagram for (f(X), ≤). The minimal elements of (X, ⪯_{{f1,f2}}) constitute the Pareto set, whereas their images are the minimal elements of (f(X), ≤) and form the Pareto front; the instance contains exactly one incomparable pair and exactly one indifferent pair of decision vectors according to the relation ⪯_{{f1,f2}}.

2.2 Partial Orders on Sets of Objectives

In this section, we introduce a general concept of conflicts between sets of objectives. On the basis of the following definitions, two algorithms to exactly resp. approximately compute a minimum set of objectives, which induces the same preorder on X as the whole set of objectives, will be proposed in Sec. 3.

(Footnote: We say z1 is a lower cover of z2 iff z1 rel z2 and there is no third element z3 with z1 rel z3 rel z2.)
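Theorem 1 and the omission claim of Example 1 can be checked mechanically. The following Python sketch is our own illustration (the function names and the concrete objective values are assumptions, not taken from the paper): it represents ⪯_F as a set of ordered pairs and verifies both the intersection property and that one of three pairwise conflicting objectives can be dropped without changing the preorder.

```python
from itertools import product

def weakly_dominates(x, y, F):
    """x weakly dominates y w.r.t. the objective index set F
    (objective vectors given as tuples of values to minimize)."""
    return all(x[i] <= y[i] for i in F)

def relation(solutions, F):
    """The relation corresponding to weak dominance w.r.t. F,
    represented as a set of ordered pairs of solution ids."""
    return {(a, b) for a, b in product(solutions, repeat=2)
            if weakly_dominates(solutions[a], solutions[b], F)}

# Three pairwise incomparable solutions in the spirit of Example 1
# (illustrative values of our own choosing):
X = {'x1': (1, 3, 2), 'x2': (2, 2, 1), 'x3': (3, 1, 3)}

# Theorem 1: the relation w.r.t. F equals the intersection of the
# single-objective relations.
F = {0, 1, 2}
assert relation(X, F) == set.intersection(*(relation(X, {i}) for i in F))

# As in Example 1: omitting f3 (index 2) preserves the preorder,
# since all pairs stay incomparable under {f1, f2} alone.
assert relation(X, {0, 1}) == relation(X, F)
```

The same pair-set representation is reused below when comparing objective subsets against each other.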
[Fig. 2. (a) Relation graph of (X, ⪯_{{f1,f2}}), (b) relation graph of (f(X), ≤), and (c) Hasse diagram of (f(X), ≤) from Example 2.]

Definition 1. Let F1, F2 ⊆ F be two sets of objectives. Then F1 ⊑ F2 :⇔ ⪯_{F1} ⊆ ⪯_{F2}.

Definition 2. Let F1, F2 ⊆ F be two sets of objectives. We call F1
- nonconflicting with F2 iff F1 ⊑ F2 ∧ F2 ⊑ F1;
- weakly conflicting with F2 iff (F1 ⊑ F2 ∧ ¬(F2 ⊑ F1)) ∨ (F2 ⊑ F1 ∧ ¬(F1 ⊑ F2));
- strongly conflicting with F2 iff ¬(F1 ⊑ F2) ∧ ¬(F2 ⊑ F1).

By definition, ⊑ is a preorder since ⊆ is a preorder. Two sets of objectives are called nonconflicting if and only if the corresponding relations ⪯_{F1} and ⪯_{F2} are identical, but not necessarily F1 = F2; in other words, F1 and F2 are indifferent w.r.t. ⊑. If F' ⊂ F and F' is nonconflicting with F, we can simply omit all objectives in F \ F' without influencing the preorder on X. Furthermore, the term "strongly conflicting" corresponds to incomparability w.r.t. ⊑, while "weakly conflicting" means neither indifferent nor incomparable w.r.t. ⊑. These two types of conflicts are mutually exclusive, which is useful in the context of the following result.

Theorem 2. Let F be a set of objectives. Then ⊑ is a total preorder on P(F) if and only if there are no strongly conflicting pairs F1, F2 ∈ P(F).

Proof: By definition, it is clear that ⊑ is always reflexive and transitive. Assume that there are no strongly conflicting pairs F1, F2 ∈ P(F), i.e.,

  ¬∃ F1, F2 ∈ P(F) : ¬(F1 ⊑ F2) ∧ ¬(F2 ⊑ F1)
  ⇔ ∀ F1, F2 ∈ P(F) : F1 ⊑ F2 ∨ F2 ⊑ F1
  ⇔ ⊑ is total.

Thus, ⊑ is total iff there are no strongly conflicting pairs of objective sets. □

Note that the above formulation of conflicting objectives can be regarded as a generalization of Purshouse and Fleming's definition [6], which only considers pairs of objectives; moreover, it also comprises the notions by Deb [2] and Tan et al. [8]. For a more detailed discussion of the connection to previous definitions of objective conflicts, we refer to the appendix.
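Definition 2 translates directly into a small classifier over the pair-set representation of ⪯_{F1} and ⪯_{F2}. The following Python sketch is our own illustration (names and example values are assumptions, not from the paper):

```python
from itertools import product

def rel(solutions, F):
    # The relation w.r.t. the objective index set F as a set of
    # ordered pairs (cf. Theorem 1).
    return {(a, b) for a, b in product(solutions, repeat=2)
            if all(solutions[a][i] <= solutions[b][i] for i in F)}

def conflict_type(solutions, F1, F2):
    """Classify the pair (F1, F2) according to Definition 2."""
    r1, r2 = rel(solutions, F1), rel(solutions, F2)
    if r1 == r2:
        return 'nonconflicting'       # indifferent w.r.t. the subset order
    if r1 < r2 or r2 < r1:
        return 'weakly conflicting'   # one relation strictly contains the other
    return 'strongly conflicting'     # incomparable w.r.t. the subset order

# Illustrative values in the spirit of Example 1 (our own choice):
X = {'x1': (1, 3, 2), 'x2': (2, 2, 1), 'x3': (3, 1, 3)}
```

Here `conflict_type(X, {0, 1}, {0, 1, 2})` yields `'nonconflicting'` (f3 is omissible), while `conflict_type(X, {0}, {1})` yields `'strongly conflicting'`, matching the observation that pairwise conflict alone says nothing about omissibility.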
2.3 Minimal, Minimum, and Redundant Objective Sets

Based on the above conflict relations, we will now formalize the notion of redundant objective sets.

Definition 3. Let F be a set of objectives. An objective set F' ⊆ F is denoted as
- minimal w.r.t. F iff (i) F' is nonconflicting with F, and (ii) there exists no F'' ⊂ F' that is nonconflicting with F;
- minimum w.r.t. F iff (i) F' is minimal w.r.t. F, and (ii) there exists no F'' ⊂ F with |F''| < |F'| that is minimal w.r.t. F.

A minimal objective set is a subset of the original objectives that cannot be further reduced without changing the associated preorder. A minimum objective set is the smallest possible set of original objectives that preserves the original order on the search space. By definition, every minimum objective set is minimal, but not all minimal sets are at the same time minimum.

Definition 4. A set F of objectives is called redundant if and only if there exists F' ⊂ F that is minimal w.r.t. F.

This definition of redundancy represents a necessary and sufficient condition for the omission of objectives.

3 The Minimum Objective Subset Problem

Given a multiobjective optimization problem with the set F of objectives, the question arises whether objectives can be omitted without changing the order on the search space. If an objective subset F' ⊆ F can be computed such that x ⪯_{F'} y holds for all solutions x, y ∈ X if and only if x ⪯_F y, we can omit all objectives in F \ F' while preserving the preorder on X. Concerning the last section, we are interested in identifying a minimum objective subset with respect to F, yielding a more compact representation of the same multiobjective optimization problem. Formally, this problem can be stated as follows.

Definition 5. The search problem MINIMUM OBJECTIVE SUBSET (MOSS) is defined as follows.
Given: A multiobjective optimization problem (X, Z, (f1, ..., fk), ≤).
Instance: The set X of solutions, the generalized weak Pareto dominance relation ⪯, and for all objective functions fi ∈ F the single relations ⪯_i, where ⪯ = ∩_{1≤i≤k} ⪯_i.
Task: Compute an index set I ⊆ {1, ..., k} of minimum size with ∩_{i∈I} ⪯_i = ⪯.

Note that the limitation of the instances to the whole search space description is not essential here. One can think of situations where the underlying set is the Pareto set or an approximation of it. The restriction to the partial order ≤ and its corresponding preorder ⪯ is not essential either: instead of any partially ordered set (Z, rel), we consider
only the order ≤ here. Note that we are not interested in a minimal objective subset but in a minimum objective subset w.r.t. the set of all objectives. The approach of finding a minimum objective subset is related to dimension theory [9]. Given a partial order rel, the dimension of rel is defined as the minimum number of linear extensions of rel the intersection of which is rel. A set of linear extensions the intersection of which is rel is called a realizer for rel. The main difference between the computation of the dimension of a partial order and our approach of finding the size of a minimum objective subset w.r.t. the set of all objectives is the fact that the corresponding realizer may contain linear extensions which bear no relation to the given relations ⪯_i. Instead of an arbitrary realizer for the partial order ⪯, we are interested in a set of given relations ⪯_i the intersection of which is ⪯. For simplification, let us assume that there are no indifferent solutions, i.e., ⪯ is a partial order. The dimension of ⪯ then gives us only a lower bound for the size of a minimum subset of objectives w.r.t. F: for example, the dimension of ⪯ is always 2 if all decision vectors are incomparable, but the size of the minimum objective set can be greater than 2. Instead of the computation of a minimum realizer as in dimension theory, which is NP-hard [11], we are interested in a shorter description of our problem by a selection of the given objectives, the complexity of which will emerge as NP-hard, too, in the next section.

3.1 Proof of NP-hardness

That MOSS is a set problem does not directly arise from the definition of the MOSS problem; but, obviously, the relations ⪯_i in Def. 5 as well as ⪯ are subsets of X × X. Considering the complementary sets A_{F'} := (X × X) \ ⪯_{F'} for any F' ⊆ F and De Morgan's laws, the task of the MOSS problem can be restated as finding a minimum index set I such that ∪_{i∈I} A_i = A, where A := (X × X) \ ⪯ and A_i := (X × X) \ ⪯_i. Hence, the NP-hard problem SET COVER, introduced in [5], is closely related to the MOSS problem.

Definition 6. We define the search problem SET COVER, or SCP for short, as follows.
Instance: A collection C1, ..., Ck of subsets of a finite set S = {s1, ..., sm}.
Task: Compute an index set I ⊆ {1, ..., k} of minimum size with ∪_{i∈I} Ci = S.

The set S in an SCP instance corresponds to the complement A in a MOSS instance just as each subset Ci corresponds to the complement A_i. Just as the Ci's are subsets of S, the ⪯_i's are supersets of ⪯, i.e., the complementary relations A_i are subsets of A. Nevertheless, SCP and MOSS are not identical problems due to the fact that the allowed instances for MOSS have to ensure that the relations ⪯_i correspond to preorders on X, whereas for SCP, instances with arbitrary sets Ci are allowed. More precisely, the relations ⪯_i in an allowed MOSS instance are always linear orders, augmented with additional relations between indifferent solution pairs; thus, the relations ⪯_i are total preorders, cf. Fig. 3 for an example. Because of the similarity between SCP and MOSS, it is not surprising that MOSS is NP-hard as well. In the following, we use a Turing reduction SCP ≤_T MOSS to prove the NP-hardness of MOSS.

(Footnote: A linear extension of a relation rel is a linear order on Z containing rel.)
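The covering restatement above (a minimum index set I whose complement relations A_i cover A) can be sketched directly in Python, together with the standard greedy covering heuristic that the paper develops into Algorithm 1 in Sec. 3.2. The code and the concrete objective values are our own illustration, not from the paper:

```python
from itertools import product

def complement(solutions, objectives):
    """A_F := (X x X) minus the weak-dominance relation w.r.t. the
    objective index set F: the pairs that F rules out."""
    return {(a, b) for a, b in product(solutions, repeat=2)
            if not all(solutions[a][i] <= solutions[b][i]
                       for i in objectives)}

def greedy_cover(solutions, k):
    """Greedily cover A with the single-objective complements A_i,
    i.e., a greedy approximation for MOSS (cf. Algorithm 1, Sec. 3.2)."""
    target = complement(solutions, range(k))                   # A
    parts = {i: complement(solutions, [i]) for i in range(k)}  # A_i
    chosen, uncovered = set(), set(target)
    while uncovered:
        # pick the objective that rules out the most uncovered pairs
        i = max(parts, key=lambda j: len(parts[j] & uncovered))
        chosen.add(i)
        uncovered -= parts[i]
    return chosen

# Three pairwise incomparable solutions (illustrative values):
X = {'x1': (1, 3, 2), 'x2': (2, 2, 1), 'x3': (3, 1, 3)}
I = greedy_cover(X, 3)
# Covering A is equivalent to preserving the preorder, by De Morgan.
assert complement(X, I) == complement(X, range(3))
```

Note that, unlike the NP-hardness reduction, this direction (MOSS as covering) is the one exploited algorithmically later in the paper.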
Theorem 3. The problem MOSS is NP-hard.

Sketch of Proof: To simplify the notations below, we denote the number of objectives by k and the number of solutions by m := |X|; the input size of MOSS is then n = Θ(km²). For the NP-hardness proof, a Turing reduction SCP ≤_T MOSS is required. Due to space limitations, we only provide a sketch of the transformation and refer for the correctness proof of this transformation to the appendix. For a small instance, Fig. 3 visualizes the basic idea of the transformation. Starting from an SCP instance, consisting of the set S = {s1, ..., sm} and the subsets Ci ⊆ S with 1 ≤ i ≤ k, all relations ⪯_i as well as ⪯ in the MOSS instance are defined as subsets of X × X with X := {x1, ..., xm}. According to the similarity of the two problems, each set in the SCP instance has its counterpart in the generated MOSS instance. The relation ⪯ corresponds to the set S and is the reflexive closure of the antichain on X, i.e., ⪯ only contains the elements (xj, xj) for 1 ≤ j ≤ m. For each subset Ci of S with 1 ≤ i ≤ k, we create the relation ⪯_i in the MOSS instance. The relation ⪯_i includes the linear order x1 ⪯_i x2 ⪯_i ... ⪯_i xm, and additionally, the relation ⪯_i contains the element (x_{j+1}, xj) iff sj ∉ Ci. In addition to the relations ⪯_i, we compute the relation ⪯_{k+1}, which is the reverse linear order xm ⪯_{k+1} ... ⪯_{k+1} x1. After this transformation, we question our MOSS oracle once. The resulting index set for the SCP problem will then be I_SCP := I_oracle \ {k + 1} if the oracle produces I_oracle as its output. The whole transformation takes time O(km²) and produces a MOSS instance of size O(km²). □

3.2 An Approximation Algorithm

As the computation of a minimum objective subset is NP-hard, we cannot expect to find an exact deterministic algorithm for the problem with polynomial running time, unless P = NP. Instead, we present an approximation algorithm with polynomial running time in the following; an exact algorithm will be proposed in Sec. 3.3. With Algorithm 1, we propose a greedy strategy for the MOSS problem. For SCP, an approximation algorithm with a similar greedy strategy is already known, the approximation ratio of which is

  ln m − ln ln m + Θ(1),   (1)

where m is the number of elements in the set S [7]. This knowledge is useful for proving the following result on Algorithm 1.

Theorem 4. Algorithm 1 is an approximation algorithm for the MOSS problem with approximation ratio Θ(log m) and needs time O(n) = O(km²).

Proof: First, we show that Algorithm 1 always computes a correct solution for the MOSS problem, i.e., an index set I with ∩_{i∈I} ⪯_i = ⪯. By construction, Algorithm 1 always provides an index set I with ∪_{i∈I} A_i = A. As ⪯ ⊆ ⪯_i for all i, and thus ⪯ ⊆ ∩_{i∈I} ⪯_i holds, the equivalence ∩_{i∈I} ⪯_i = ⪯ is always true. To show the upper bound on the approximation ratio, we sketch the proof of a Turing reduction MOSS ≤_T SCP and refer to the appendix for the correctness proof. Given

(Footnote: The reflexive closure of an antichain is simply a relation with only reflexive edges in its graph representation.)
an instance for MOSS, consisting of the relations ⪯_i and ⪯ with ⪯ = ∩_{1≤i≤k} ⪯_i, we can compute an SCP instance as follows. The set S in the SCP instance contains an element e_{x,y} for each pair (x, y) ∈ (X × X) \ ⪯. A subset Ci of S in the SCP instance contains the element e_{x,y} iff (x, y) ∉ ⪯_i. The output for the MOSS problem is the index set I computed by the SCP oracle. The Turing reduction needs time O(km²) and produces an SCP instance of size O(km²). Since Algorithm 1 uses this transformation and then acts like the greedy algorithm for SCP, the upper bound O(log m) for the approximation ratio of the greedy algorithm for SCP is directly translated to Algorithm 1.

[Fig. 3. An example for the Turing reduction from SCP to MOSS with S = {a, b, c, d} and the subsets {a, c, d}, {b, c}, and {a, b}. The reflexive and transitive edges are omitted for clarity.]

For proving that Algorithm 1 has an approximation ratio of Θ(log m), we use conclusions made for SCP. Feige showed in [3] that there is no ε > 0 such that an approximation algorithm can solve SCP with approximation ratio (1 − ε) ln m, unless NP has slightly superpolynomial deterministic algorithms. With our transformation from SCP to MOSS, Feige's lower bound for SCP yields a lower bound of Ω(log(m²)) = Ω(log m) for MOSS. This is due to the fact that in the transformation from SCP to MOSS, the set S of size m is transformed into a MOSS instance whose relations are of size Θ(m²). Assuming that there is a polynomial approximation algorithm for MOSS with an approximation ratio of o(log m), we get a contradiction to Feige's result, because we can transform each SCP instance in polynomial time into a MOSS instance with X of size m and solve SCP via the o(log m) algorithm for MOSS.

The worst-case running time of Algorithm 1 is O(n) = O(km²): the computation of the complementary relations A_i during initialization needs time O(km²), and the total runtime, amortized over all loop cycles, is O(km²) for the update of the A_i's and of the still-uncovered set R, together with the computation of the intersection sizes |A_i ∩ R|. Furthermore, each
of the steps of the while loop costs additionally time O(k) for the calculation of the maximum and the update of I. □

Algorithm 1 A greedy algorithm for MOSS
  Init: I := ∅; R := A, where A := (X × X) \ ⪯; A_i := (X × X) \ ⪯_i for 1 ≤ i ≤ k
  while R ≠ ∅ do
    choose an i ∈ {1, . . . , k} \ I such that |A_i ∩ R| is maximal
    R := R \ A_i
    I := I ∪ {i}
  end while

3.3 An Exact Algorithm

In this section, we present an exact algorithm for the MOSS problem, the running time of which is polynomial in the size of X but exponential in the number of objectives. In order to solve the MOSS problem exactly, it is in general not sufficient to take information about conflicts between pairs of objectives into account: Example 1 shows a simple instance with three objectives where, even though all pairs of objective functions are strongly conflicting according to Def. 2, the whole set of objectives is redundant, i.e., f3 can be omitted. Almost the same situation emerges if we want to solve the MOSS problem with the help of information about conflicts between pairs of sets of larger but constant size. The observation that a correct prediction whether a set of objectives is redundant cannot be made by observing only relations between objective subsets of constant size can likewise be derived from the NP-hardness of the MOSS problem. Thus, we are forced to examine the type of conflict between all possible objective subsets if we want to solve the MOSS problem exactly.

Algorithm 2 examines all possible objective subsets in combination with all solution pairs separately, by calculating for each pair x, y ∈ X the set S_xy of all minimal objective subsets explaining the relation between x and y w.r.t. F. The set S of objective subsets always contains all minimal subsets that solve the MOSS problem restricted to the solution pairs considered so far; S is updated whenever a new solution pair is observed. To simplify the notation, we use the symbol ⊎ for the combination of two sets S1, S2 that themselves contain objective subsets: S1 ⊎ S2 contains the pairwise unions s1 ∪ s2 of sets s1 ∈ S1 and s2 ∈ S2, but only those that have no proper subset among the pairwise unions:

  S1 ⊎ S2 := {s1 ∪ s2 | s1 ∈ S1 ∧ s2 ∈ S2 ∧ ¬∃ s1' ∈ S1, s2' ∈ S2 : s1' ∪ s2' ⊂ s1 ∪ s2}

When all solution pairs are processed, S contains all minimal objective subsets w.r.t. F, from which Algorithm 2 chooses a minimum one as an exact solution for the MOSS problem. With P(F) we denote the power set of F := {f1, . . . , fk}.
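The mechanics just described, per-pair minimal explanation sets combined with the subset-pruning union ⊎, can be sketched in Python. The code and the example values are our own illustration of the idea; the pseudocode of Algorithm 2 in the paper is the authoritative formulation:

```python
from itertools import combinations

def exact_moss(solutions, k):
    """For every solution pair, collect the minimal objective subsets
    explaining its relationship, combine them via pairwise unions with
    subset pruning, and return a smallest surviving subset.
    Worst-case exponential in k (cf. Theorem 6)."""
    def prune(family):
        # keep only the inclusion-minimal sets
        return {s for s in family if not any(t < s for t in family)}

    S = {frozenset()}                      # minimal explanations so far
    for a, b in combinations(solutions, 2):
        fa, fb = solutions[a], solutions[b]
        less = [i for i in range(k) if fa[i] < fb[i]]
        greater = [i for i in range(k) if fa[i] > fb[i]]
        if not less and not greater:       # indifferent pair
            sxy = {frozenset({i}) for i in range(k)}
        elif not greater:                  # a weakly dominates b
            sxy = {frozenset({i}) for i in less}
        elif not less:                     # b weakly dominates a
            sxy = {frozenset({i}) for i in greater}
        else:                              # incomparable pair: need one
            sxy = {frozenset({i, j})       # objective per direction
                   for i in less for j in greater}
        S = prune({s1 | s2 for s1 in S for s2 in sxy})
    return min(S, key=len)

X = {'x1': (1, 3, 2), 'x2': (2, 2, 1), 'x3': (3, 1, 3)}
# Matches Example 1: two of the three objectives suffice.
assert exact_moss(X, 3) == frozenset({0, 1})
```

The pruning step is what keeps only minimal explanations in S; without it, the family of candidate subsets would grow far faster than the 2^k worst case.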


Algorithm 2 An exact algorithm for MOSS
  Init: S := {∅}
  for each pair x, y of solutions do
    S1 := {{i} | i ∈ {1, . . . , k} ∧ fi(x) < fi(y)}
    S2 := {{i} | i ∈ {1, . . . , k} ∧ fi(x) > fi(y)}
    S_xy := S1 ⊎ S2 (an empty operand of ⊎ is treated as {∅})
    if S_xy = {∅} then S_xy := {{1}, . . . , {k}}
    S := S ⊎ S_xy
  end for
  Output: a smallest set s_min in S

Theorem 5. Algorithm 2 solves the MOSS problem exactly, in time polynomial in m = |X| but exponential in k.

Proof: For a correctness proof, we have to ensure that Algorithm 2 computes the sets S_xy correctly. Then the invariant that S contains all minimal sets of objectives which explain the relationships between all considered pairs of solutions is always maintained. The sets in S are always minimal because we delete all supersets during the S := S ⊎ S_xy command. For the first pair of solutions, S_xy is computed correctly, and the invariant holds by induction. We now distinguish between the three possible relationships between solution pairs and show for each type that our algorithm computes S_xy correctly. (i) In the case of an indifferent solution pair, i.e., ∀fi ∈ F : fi(x) = fi(y), both S1 and S2 are empty sets, yielding S_xy = {{1}, . . . , {k}}. Because indifferent vectors have the same objective vector, each single objective is a possible minimal set which explains the indifference. (ii) If we consider comparable solutions, without loss of generality x ≺ y, i.e., ∀fi ∈ F : fi(x) ≤ fi(y) and ∃fi ∈ F : fi(x) < fi(y), Algorithm 2 computes S2 = ∅, and therefore S_xy contains by definition only single objectives {i} where fi(x) < fi(y). Thus, S_xy contains all objective sets which explain the relationship w.r.t. F. (iii) For an incomparable solution pair x ‖ y, no fi ∈ F will be both in S1 and in S2. Thus, S_xy contains only sets of objectives {i, j} of cardinality 2, which matches the minimal size of the sets in S_xy if x ‖ y, and for which fi(x) < fi(y) and fj(x) > fj(y).

The computation of S1 and S2 can be done in time O(k), and the calculation of S_xy is possible in time O(|S1| · |S2|), as S_xy contains at most |S1| · |S2| sets. Since every set in S is a subset of {1, . . . , k}, S contains at most 2^k sets, each of size at most k. Hence, the computation of S ⊎ S_xy needs time exponential in k only. Due to the fact that Algorithm 2 computes the sets S_xy for each of the O(m²) pairs of individuals, the whole running time is polynomial in m and exponential in k. □

As the last aspect of our theoretical analysis, we present an instance for MOSS for which the exact algorithm needs time Ω(2^{k/3}).

Theorem 6. The worst-case running time of Algorithm 2 for the MOSS problem is Ω(2^{k/3}).

Proof: Fig. 4 shows the idea of an instance for which Algorithm 2 needs time Ω(2^{k/3}). Let us assume that X consists of an even number m of solutions X :=
{x1, . . . , xm}, together with the relation ⪯ and k = 3m/2 relations corresponding to the objective functions F := {f1, . . . , fk}, where only the solutions x_{2j+1} and x_{2j+2} for 0 ≤ j < m/2 are incomparable. The incomparability of such a pair is only caused by its (3j)-th, (3j + 1)-th, and (3j + 2)-th objective values, i.e., we need either the objective pair {f_{3j}, f_{3j+1}} or the pair {f_{3j}, f_{3j+2}} to describe the incomparability, cf. Fig. 4. Thus, whenever Algorithm 2 considers a new pair of incomparable solutions, the size of the set S doubles. Because we have m/2 = k/3 of those incomparable pairs, S is of size 2^{k/3} after the algorithm has considered all of the k/3 incomparable pairs. This is possible after the first k/3 of altogether Θ(m²) steps of the algorithm, which results in a running time of at least Ω(2^{k/3}). In addition, this restricted example can be easily extended to larger numbers of solutions. □

[Fig. 4. The parallel coordinates plot of an instance for which the exact algorithm needs time Ω(2^{k/3}).]

4 Experiments

The following experiments serve two goals: (i) to investigate the size of a minimum objective subset depending on the size of the search space and the number of original objective functions, and (ii) to compare the approximative and the exact algorithm with respect to the size of the generated objective subsets and the corresponding running times. Both issues have been considered for a random problem as well as for the multiobjective 0/1-knapsack problem.

4.1 Random Problem

In a first experiment, we generated the objective values for a set of solutions at random, where the objective values were chosen uniformly distributed in [0, 1]. For
each combination of search space size m and number of objectives k, 100 independent random samples were considered. The results for Algorithm 2 are shown in Fig. 5: for different sizes of the search space, the number k_min of objectives in a minimum objective subset is plotted against the number k of objectives used in the problem formulation.

[Fig. 5. Random model: the size of a minimum subset plotted against the number of objectives in the problem formulation, for search spaces of 25, 50, 100, 150, and 200 solutions.]

Two main observations can be made. First, the minimum number of objectives decreases relative to the number of objectives involved: the fraction k_min/k decreases with a rising number of objectives in the problem formulation. Second, the larger the search space, the more objectives are in a minimum objective set. Although there is no possibility to determine the course of the curves for arbitrarily large numbers of objectives by experiments, the question arises how k_min will behave as k increases to infinity. We expect lim_{k→∞} k_min = 2, because the probability that an objective pair occurs the intersection of which fits the preorder on X increases with higher k.

Concerning the comparison of the two algorithms, Fig. 6 reveals that the greedy algorithm yields similar sizes of the computed sets in comparison to the exact algorithm but is much faster than the latter. Already for a small search space of 32 solutions, the exact algorithm is only usable for k smaller than 15, whereas the running time of the greedy algorithm is competitive even for 50 objectives.

4.2 Knapsack Problem

We did further experiments on the 0/1-knapsack problem [13] with 10 items; the implementation was taken from the PISA package [1]. Instead of examining the whole
Page 14
number of

objectives in problem formulation 10 15 20 25 30 exact algorithm: size of computed objective subset 13 16 13 greedy algorithm: size of computed objective subset 13 16 14 exact algorithm: running time in milliseconds 196 2,271 87,113 90,524 10 15 10 greedy algorithm running time in milliseconds 47 46 67 88 78 87 Table 1. The number of objectives in the computed subsets and the runtimes for an approximation of the Pareto Front, generated with SPEA2 after 1000 generations for the knapsack problem. The running times correspond to experiments on a linux computer (SunFireV6 0x with 3060 Mhz). search

space as in the random example, we generated an approx imation of the Pareto set with a multiobjective evolutionary algorithm, namely S PEA2 [12] with the standard settings (population size = 50 , offspring population size = 50 10 1000 generations). Both the exact and the approximation alg orithm were applied to the generated Pareto set approximation. In addition, we record ed the running times of both algorithms. Table 1 shows the results for different sizes of the objective space. The experiments show that the omission of objectives withou t information loss is possible even for a structured

problem such as the 0/1-knapsack problem. In comparison to the exact algorithm, the greedy algorithm shows nearly the same output quality for the used knapsack instances regarding the size of the computed objective set, but is much faster. Since the computed subsets were, in all of our experiments, less than one objective away from the optimum, the greedy algorithm seems to be applicable to more complex problems, particularly by virtue of its small running time.

5 Discussion

This paper has investigated the minimum objective subset problem (MOSS) that asks which objective

functions are essential for a given multiobjective optimization problem. To this end, we have introduced a general notion of conflicts between objective sets and showed that the answer to the above question can in general not be deduced from the information about conflicts between single objectives or objective sets of a predefined limited size. The latter observation motivates why MOSS turns out to be NP-hard. Furthermore, we have proposed an exact algorithm for MOSS, the running time of which is polynomial in the size of the decision space but exponential in the

number of objectives, and a polynomial greedy algorithm with an optimal approximation ratio of Θ(log n). From a practical point of view, the present study provides a first step towards dimensionality reduction of the objective space in multiple criteria optimization scenarios. The proposed algorithms can be particularly useful to analyze Pareto sets or Pareto set approximations generated by exact or heuristic search procedures, but it is clear that an analysis of the entire search space is infeasible for most problems. Therefore, an important issue is the conflict analysis if only

partial information about the search space
is available, as, e.g., during the optimization process. Furthermore, the experimental results for random objective functions as well as for the knapsack problem have revealed that a high percentage of objectives can be omitted, especially if the number of objectives is high (10 or more). However, one may also be interested in a substantial reduction of the objective set in the case of few objectives; here, a modified MOSS problem where the search space order needs to be preserved only partially would be of high practical

relevance.

References

1. Stefan Bleuler, Marco Laumanns, Lothar Thiele, and Eckart Zitzler. PISA — a platform and programming language independent interface for search algorithms. In EMO 2003 Proceedings, pages 494–508. Springer, Berlin, 2003.
2. Kalyanmoy Deb. Multi-objective optimization using evolutionary algorithms. Wiley, Chichester, UK, 2001.
3. Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45(4):634–652, 1998.
4. Ian T. Jolliffe. Principal component analysis. Springer, 2002.
5. Joshua Knowles and David Corne. Properties of an adaptive archiving algorithm for storing nondominated vectors. IEEE Transactions on Evolutionary Computation, 7(2):100–116, 2003.
6. Robin C. Purshouse and Peter J. Fleming. Conflict, harmony, and independence: Relationships in evolutionary multi-criterion optimisation. In EMO 2003 Proceedings, pages 16–30. Springer, Berlin, 2003.
7. Petr Slavík. A tight analysis of the greedy algorithm for set cover. In STOC '96: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 435–441, New York, NY, USA, 1996. ACM Press.
8. Kay Chen Tan, Eik Fun Khor, and Tong Heng Lee. Multiobjective Evolutionary Algorithms and Applications. Springer, London, 2005.
9. William T. Trotter. Combinatorics and Partially Ordered Sets: Dimension Theory. The Johns Hopkins University Press, Baltimore and London, 1992.
10. Lyndon While. A new analysis of the LebMeasure algorithm for calculating hypervolume. In EMO 2005 Proceedings, pages 326–340. Springer, 2005.
11. Mihalis Yannakakis. The complexity of the partial order dimension problem. SIAM Journal on Algebraic and Discrete Methods, 3(3):351–358, 1982.
12. Eckart Zitzler, Marco Laumanns, and Lothar Thiele. SPEA2: Improving the Strength Pareto Evolutionary Algorithm for Multiobjective Optimization. In K. C. Giannakoglou et al., editors, Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems (EUROGEN 2001), pages 95–100. International Center for Numerical Methods in Engineering (CIMNE), 2002.
13. Eckart Zitzler and Lothar Thiele. Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach. IEEE Transactions on Evolutionary Computation, 3(4):257–271, 1999.
14. Eckart Zitzler, Lothar Thiele, Marco Laumanns, Carlos M. Fonseca, and Viviane Grunert da Fonseca. Performance assessment of multiobjective optimizers: An analysis and review. IEEE Transactions on Evolutionary Computation, 7(2):117–132, 2003.
Fig. 6. Comparison between the greedy and the exact algorithm for the random problem and 32 solutions: (a) comparison of run times, (b) comparison of output quality. Note that the plot of the running times in (a) is a log-scale plot and only the summed running times over 100 runs on a Linux computer (SunFire V60x with 3060 MHz) are shown. Figure (b) shows the sizes of the computed minimum/minimal sets averaged over 100 runs.
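To make the comparison concrete, the following sketch contrasts a brute-force exact search with a set-cover-style greedy heuristic for the minimum objective subset. This is our own simplified illustration, not the paper's implementation: we assume minimization and treat an objective subset as sufficient when it induces the same weak-dominance relation as the full objective set; all function names are ours.

```python
import itertools
import random

def weak_dominance(objs, subset):
    """All ordered pairs (i, j) such that solution i weakly dominates
    solution j on the given objective subset (minimization)."""
    n = len(objs)
    return {(i, j) for i in range(n) for j in range(n)
            if all(objs[i][f] <= objs[j][f] for f in subset)}

def exact_min_subset(objs, k):
    """Smallest objective subset inducing the same weak-dominance
    relation as all k objectives (brute force, exponential in k)."""
    full = weak_dominance(objs, range(k))
    for size in range(1, k + 1):
        for cand in itertools.combinations(range(k), size):
            if weak_dominance(objs, cand) == full:
                return set(cand)
    return set(range(k))

def greedy_min_subset(objs, k):
    """Greedy heuristic: repeatedly add the objective that removes the
    most dominance pairs not present in the full relation."""
    full = weak_dominance(objs, range(k))
    n = len(objs)
    current = {(i, j) for i in range(n) for j in range(n)}
    chosen = set()
    while current != full:
        best = max((f for f in range(k) if f not in chosen),
                   key=lambda f: len(current - weak_dominance(objs, [f])))
        chosen.add(best)
        current &= weak_dominance(objs, [best])
    return chosen

random.seed(0)
objs = [[random.random() for _ in range(6)] for _ in range(32)]
print(len(exact_min_subset(objs, 6)), len(greedy_min_subset(objs, 6)))
```

The exact search enumerates all objective subsets by increasing size, which mirrors why it becomes unusable beyond roughly 15 objectives, while the greedy loop only needs one pass per chosen objective.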
A Proofs of NP-hardness

Here, we additionally provide the proofs omitted in Sec. 3.

Theorem 3. The problem MOSS is NP-hard.

Proof: First, we denote the input size of MOSS by , where with := . We refer to Fig. 3 for a visualization of the ideas behind the Turing transformation SCP

MOSS, which we recapitulate first. Starting from the SCP instance consisting of the set ,... ,s and the subsets with , all relations as well as in the MOSS instance are defined on the basic set := ,... , ,... , . The relation will be the reflexive closure of the antichain on , i.e., it only contains the elements and for . The relations with are all constructed in the same way. They include the linear order ,... , as well as the reflexive relations. Additionally, relation contains the element iff 6 . In addition, we have to compute another relation +1 , which is the reverse

linear order ,... , . After this transformation, we query our MOSS oracle once. The resulting index SCP for the SCP problem will then be SCP := oracle + 1 if the oracle produces oracle as its output. It remains to show that the transformation yields an exact algorithm for SCP with polynomial running time, under the assumption that there is an exact polynomial-time algorithm for MOSS. Let us assume that ,... ,s ,C ,... ,C is the SCP instance with ,... ,c } . Via the described transformation and the hypothetical algorithm , we can compute the index SCP := \{ +1 as the output

corresponding to the SCP instance . Obviously, the computation of SCP is possible in polynomial time using a polynomial algorithm for MOSS. To complete the proof, we still have to show (i) why always + 1 , (ii) why \{ + 1 is a correct output for our SCP instance, and (iii) why the computed index \{ + 1 is minimum. First, we take a look at question (i), why always + 1 for an exact MOSS algorithm , i.e., why +1 is always needed to yield as the intersection of some . Because in no pair with is comparable, for each pair there has to be at least one where 6 and at least one with 6 .

Considering a pair , for all with ∈{ ,... ,k holds. By construction, only 6 +1 . Consequently, +1 is always needed to construct as the intersection of single ’s. Now we show (ii) why := \{ +1 is always a correct output for the given SCP instance. As we have seen before, + 1 and therefore the intersection of the ’s does not contain any pairs and with ν < and no pairs with . The construction of the relations with ∈{ ,... ,k results in the absence of pairs and with  < in the intersection if there is at least one with . There only remains the possibility of pairs

with in the intersection. To avoid this, for each ∈{ ,... ,m there must be at least one ∈{ ,... ,k in with 6 . By construction of the Turing transformation, this can only occur if . Thus, \{ +1 ,... ,m . Last, we have to show (iii) why the computed index \{ + 1 is a minimum index for SCP. Assume
that \{ + 1 is not a minimum index for SCP, i.e., there is a smaller index with and . As one can easily see from the above transformation, ∪{ + 1 would then be a smaller index for MOSS than .

Theorem 7. The MOSS problem is Turing reducible to SCP.

Proof: Given an

instance for MOSS, consisting of the relations and with , a polynomial-time algorithm can compute an SCP instance as follows. The set in the SCP instance contains one element x,y for each 6 . A subset of in the SCP instance contains an element iff . The algorithm can then use a hypothetical polynomial-time bounded exact algorithm for SCP to compute the index as an output for the MOSS problem. The index , computed by the SCP algorithm, is always a correct output for the MOSS problem. To see that, we first show . Let x,y for any and any . By definition, , i.e., )) > f holds. But

then , thus x,y by definition. Now we are able to show that is always a correct output for the MOSS problem. We only have to use De Morgan's rules and the fact that holds for all x,y x,y : [( x,y x,y : [( )) )] By construction, it is clear that a minimum is always a minimum index for MOSS.

B Relations between the different definitions of conflict

Before we present the relations between the different concepts of conflict mentioned in Sec. 1, we restate the definitions of conflict according to the notation in Sec. 2 and prove a lemma we use later.

Definition 7 (Conflict by Deb [2]) A multiobjective optimization problem X,Z,f,rel contains conflicting objectives if and only if there are trade-offs, i.e., the partially ordered set ,rel has no unique minimal element.

Definition 8 (Conflict by Tan et al. [8]) A set of objective functions is said to be nonconflicting according to the weak dominance relation if and only if there are no incomparable solution pairs, i.e., . Instead of , the dominance relation is used in the original definition in [8].
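Definition 8 is directly checkable: an objective set is nonconflicting in this sense exactly when every pair of solutions is mutually comparable under weak dominance. A minimal sketch (our own illustration, assuming minimization over lists of objective values):

```python
def comparable(a, b):
    """True iff a weakly dominates b or b weakly dominates a (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            or all(y <= x for x, y in zip(a, b)))

def nonconflicting_tan(solutions):
    """Nonconflicting per Def. 8: no incomparable solution pair exists."""
    return all(comparable(a, b)
               for i, a in enumerate(solutions)
               for b in solutions[i + 1:])

# A chain of mutually comparable solutions vs. a trade-off pair.
print(nonconflicting_tan([[1, 1], [2, 3], [4, 5]]))  # True
print(nonconflicting_tan([[1, 2], [2, 1]]))          # False
```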
Definition 9

(Conflict by Purshouse and Fleming [6]) Two objectives and are conflicting if there exists at least one solution pair with < f > f . If < f > f holds for all pairs, and are totally conflicting. There is no conflict between and if no such pair exists.

Lemma 1. For any set of objectives , there is no subset ⊆F which is strongly conflicting with according to Def. 2.

Proof: With Theorem 1 it is clear that and therefore ∀F ⊆F holds for all . For this reason it is impossible that 6 ⇐⇒F6vF , i.e., cannot be strongly conflicting with

according to Def. 2.

B.1 The relation to Deb's definition of conflict [2]

Theorem 8. If a multiobjective optimization problem X,Z,f := ( ,... ,f contains conflicting objectives according to Def. 7, it is possible that there is an objective set ⊂F := ,... ,f which is nonconflicting or weakly conflicting with but no which is strongly conflicting with . The same holds if the optimization problem contains no conflicts according to Def. 7.

Proof: Due to the fact that Def. 7 defines a conflict globally and only depending on the small set

of minimal elements of the dominance relation, there is only a weak relation between Def. 7 and our definition of conflict in Def. 2. Given a multiobjective optimization problem X,Z,f := ( ,... ,f with := ,... ,f , we know from Lemma 1 that there is no ⊆F which is strongly conflicting with . Fig. 7 shows for the case of a conflicting problem (a) and for a nonconflicting problem (b) that subsets ⊆F can be either nonconflicting or weakly conflicting with .

Theorem 9. If all subsets ⊆F are nonconflicting with w. r. t. Def. 2,

contains no conflicting objectives according to Def. 7.

Proof: If all subsets ⊆F := ,... ,f of a multiobjective optimization problem X,Z,f := ( ,... ,f are nonconflicting with according to Def. 2, cannot contain incomparable solutions w. r. t. . Otherwise, the relations corresponding to single objective functions cannot be nonconflicting with , because the ’s are always total preorders, i.e., all solution pairs are comparable w. r. t. each .

B.2 The relation to the conflict definitions of Tan, Khor, and Lee [8]

Theorem 10. If a set of objective

functions is not conflicting according to Def. 8, it is possible that a subset ⊆F is nonconflicting with or weakly conflicting with according to Def. 2.
Fig. 7. Parallel coordinates plots of two multiobjective optimization problems with three objectives := , f , f which contain (a) a conflict and (b) no conflict according to Def. 7. The multiobjective optimization problem in (a) contains only two solutions and the problem in (b) three, where the dotted solution is the unique minimal element of . Independent of Def. 7, there are subsets 00 ⊆F which are both weakly conflicting with := ) and nonconflicting with 00 := , f ).

Fig. 8. (a) Parallel coordinates plot for an example with three solutions (solid line), (dashed), and (dotted) and two objectives := , f with no conflict according to Def. 8. is nonconflicting with whereas is weakly conflicting with . (b) shows the corresponding relation graphs of the involved relations ⊆F
Proof: Starting from a set of objective functions which is not conflicting according to Def. 8,

conclusions about the type of conflict (weak conflict or no conflict) between subsets of ⊆F and itself are impossible. Fig. 8 shows that for an objective set it is possible to have both a subset ⊆F which is nonconflicting with and a subset 00 ⊆F which is weakly conflicting with .

Theorem 11. If all subsets ⊆F are nonconflicting with according to Def. 2, is nonconflicting according to Def. 8.

Proof: Given a multiobjective optimization problem X,Z,f := ( ,... ,f where all subsets ⊆F := ... ,f are nonconflicting with

according to Def. 2. Then, there cannot be incomparable solutions with respect to , i.e., is nonconflicting according to Def. 8, as at least one set will be strongly conflicting with , because two solutions and are always comparable with respect to each and .

B.3 The relation to the definitions of conflict by Purshouse and Fleming [6]

Theorem 12. There is no conflict between the two objectives and according to Def. 9 if and only if and are nonconflicting according to Def. 2.

Proof: Let there be no conflict between the two objectives and according to Def. 9, i.

e., 6 : ( < f )) > f )) : [( )) ))] : [( )] : [( , which is the same as and are nonconflicting according to Def. 2.

Theorem 13. Two objectives and are in conflict according to Def. 9 if and only if and are either strongly conflicting or weakly conflicting according to Def. 2.

Proof: By definition, and are in conflict according to Def. 9 if and only if x,y : [ < f > f )] 6 x,y : [ < f > f )]) which is, by Theorem 12, the same as and are nonconflicting according to Def. 2. Because the different kinds of conflict in Def. 2 are disjoint, this is the

same as and are either weakly conflicting or strongly conflicting.
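Theorems 12 and 13 reduce the Purshouse–Fleming notions to a simple pairwise test: two objectives conflict exactly when some solution pair trades off between them. A minimal sketch of that test (our own illustration, assuming minimization and objective values given as parallel lists):

```python
from itertools import combinations

def conflicting_pf(vals_i, vals_j):
    """Conflict per Def. 9: some solution pair (x, y) with
    f_i(x) < f_i(y) and f_j(x) > f_j(y), in either orientation."""
    return any((a < c and b > d) or (a > c and b < d)
               for (a, b), (c, d) in combinations(list(zip(vals_i, vals_j)), 2))

f1 = [1, 2, 3, 4]          # objective values of four solutions
f2 = [4, 3, 2, 1]          # reversed order: totally conflicting with f1
f3 = [2, 4, 6, 8]          # same order: no conflict with f1
print(conflicting_pf(f1, f2))  # True
print(conflicting_pf(f1, f3))  # False
```

Absence of any such trade-off pair corresponds to the nonconflicting case of Def. 2 (Theorem 12), while its presence corresponds to weak or strong conflict (Theorem 13).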