Real-Coded Memetic Algorithms with Crossover Hill-Climbing

Manuel Lozano (lozano@decsai.ugr.es), Dept. of Computer Science and A.I., University of Granada, 18071 Granada, Spain
Francisco Herrera (herrera@decsai.ugr.es), Dept. of Computer Science and A.I., University of Granada, 18071 Granada, Spain
Natalio Krasnogor (Natalio.Krasnogor@nottingham.ac.uk), Automatic Scheduling, Optimisation and Planning Group, School of Computer Science and IT, Jubilee Campus, University of Nottingham, Nottingham, NG8 1BB, United Kingdom, http://www.cs.nott.ac.uk/~nxk/
Daniel Molina (dmolina@programador.com), Dept. of Computer Science and A.I., University of Granada, 18071 Granada, Spain

(c) 2004 by the Massachusetts Institute of Technology. Evolutionary Computation 12(3): 273-302

Abstract
This paper presents a real-coded memetic algorithm that applies a crossover hill-climbing to solutions produced by the genetic operators. On the one hand, the memetic algorithm provides global search (reliability) by means of the promotion of high levels of population diversity. On the other, the crossover hill-climbing exploits the self-adaptive capacity of real-parameter crossover operators with the aim of producing an effective local tuning of the solutions (accuracy). An important aspect of the memetic algorithm proposed is that it adaptively assigns different local search probabilities to individuals. It was observed that the algorithm adjusts the global/local search balance according to the particularities of each problem instance. Experimental results show that, for a wide range of problems, the method we propose here consistently outperforms other real-coded memetic algorithms which appeared in the literature.

Keywords
Memetic algorithms, real-coding, steady-state genetic algorithms, crossover hill-climbing.

1 Introduction

It is now well established that pure genetic algorithms (GAs) are not well suited to fine tuning search in complex search spaces, and that hybridisation with other techniques can greatly improve the efficiency of search (Davis, 1991; Goldberg and Voessner, 1999). GAs that have been hybridised with local search techniques (LS) are often called memetic algorithms (MAs) (Moscato, 1989; Moscato, 1999). MAs are evolutionary algorithms that apply a separate LS process to refine individuals (e.g., improve their fitness by hill-climbing). An important aspect concerning MAs is the trade-off between the exploration abilities of the GA and the exploitation abilities of the LS technique used (Krasnogor and Smith, 2001).

Under the initial formulation of GAs, the search space solutions are coded using the binary alphabet; however, other coding types, such as real-coding, have also been
taken into account to deal with the representation of the problem. The real-coding approach seems particularly natural when tackling optimisation problems of parameters with variables in continuous domains. A chromosome is a vector of floating point numbers whose size is kept the same as the length of the vector which is the solution to the problem. GAs based on real number representation are called real-coded GAs (RCGAs) (Deb, 2001; Herrera, Lozano and Verdegay, 1998).

For function optimisation problems in continuous search spaces, an important difficulty must be addressed: solutions of high precision must be obtained by the solvers (Kita, 2001). Adapted genetic operators for RCGAs have been presented to deal with this problem, which favour the local tuning of the solutions. An example is non-uniform mutation (Michalewicz, 1992), which decreases the strength with which real-coded genes are mutated as the RCGA's execution advances. This property causes the operator to make a uniform search in the initial space and a very local one at a later stage. In addition, real-coded MAs (RCMAs) have been proposed, which incorporate LS mechanisms for efficiently refining solutions. Most common RCMA instances use local improvement procedures (LIPs), like gradient descent or random hill-climbing, which can only find local optima. One commonly used formulation of MAs applies LIPs to members of the population after recombination and mutation, with the aim of exploiting the best search regions gathered during the global sampling done by the RCGAs.

One RCMA model that has received attention concerns the use of crossover-based local search algorithms (XLS). Since the crossover operator produces children around the parents, it may be used as a move operator for an LS method (Deb, Anand and Joshi, 2002; Dietzfelbinger, Naudts, Van Hoyweghen and Wegener, 2003; Satoh, Yamamura and Kobayashi, 1996; Yang and Kao, 2000). This is

particularly attractive for real-coding, since there are real-parameter crossover operators that have a self-adaptive nature, in that they can generate offspring adaptively according to the distribution of the parents without any adaptive parameter (Beyer and Deb, 2001; Kita, 2000). With the passing of generations, the RCMA loses diversity, which allows the crossover to create offspring distributed densely around the parents, inducing an effective local tuning. This kind of crossover operator shows promise for building an effective XLS.

In this paper, we present an RCMA model that uses real-parameter crossover hill-climbing (XHC). XHC is a particular type of XLS that allows the self-adaptive capacity of real-parameter crossover operators to be exploited inside the XLS itself, i.e., it is a self-adaptive crossover local search method. The mission of XHC is to obtain the best possible accuracy levels to lead the population toward the most promising search areas, producing an effective refinement on them. On the other hand, the RCMA is designed to promote high population diversity levels. It attempts to induce reliability in the search process by ensuring that different promising search zones are the focus of the XHC throughout the run. In addition, the RCMA employs an adaptive mechanism that determines the probability with which every solution should receive the application of XHC. In this way, it attempts to adjust the global/local search ratio (i.e., the exploration/exploitation balance) to the particular features of the problem being solved.

The paper is set up as follows. In Section 2, we review some important aspects of real-parameter crossover operators and describe the one used in this work. In Section 3, we deal with RCMAs, providing a classification for the different types of algorithms that have appeared in the MA literature; in addition, we give special attention to RCMAs that employ crossover local search procedures. In Section 4, we describe our proposal for an XHC model.
In Section 5, we present the RCMA model, based on the use of XHC. In Section 6, we describe the experiments carried out in order to determine the suitability of our approach. Finally, we present our conclusions in Section 7.

2 Crossover Operators for RCGAs

The crossover operator has always been regarded as a fundamental search operator in GAs (De Jong and Spears, 1992; Kita, 2001), since it exploits information about the search space that is currently available in the population. Much effort has been given to developing sophisticated crossover operators, and as a result, many different versions have been proposed (e.g., Deb, 2001; Herrera, Lozano and Verdegay, 1998; Herrera, Lozano and Sánchez, 2003). Real-coding of solutions for numerical problems offers the possibility of defining a wide variety of special real-parameter crossover operators which can take advantage of its numerical nature.

Most of these crossover operators define a probability distribution of offspring solutions based on some measure of distance among the parent solutions. If the parents are located close to each other, the offspring generated by the crossover might be densely distributed around the parents. On the other hand, if the parents are located far away from each other, then the offspring will be sparsely distributed around them. Therefore, these operators fit their action range depending on the diversity of the population, using specific information held by the parents. In this way, the current level of diversity in the population determines whether they will favour the production of additional diversity (divergence) or the refinement of the solutions (convergence). This behaviour is achieved without requiring an external adaptive mechanism. In fact, in the recent past, RCGAs with some of these crossovers have been demonstrated to exhibit self-adaptive behaviour similar to that observed in evolution strategies and

evolutionary programming approaches (Deb and Beyer, 2001; Kita, 2001). Moreover, Beyer and Deb (2001) argue that a variation operator that harnesses the difference between the parents in the search space is essential for the resulting evolutionary algorithm to exhibit self-adaptive behaviour at the population level.

Usually, real-parameter crossover operators are applied to pairs of chromosomes, generating two offspring for each pair, which are then introduced in the population (Herrera, Lozano and Sánchez, 2003). However, multiparent crossover operators have been proposed which combine the features of more than two parents for generating the offspring (Deb, Anand and Joshi, 2002; Kita, Ono and Kobayashi, 1999; Tsutsui, Yamamura and Higuchi, 1999). Furthermore, crossover operators with multiple descendants have been presented (Deb, Anand and Joshi, 2002; Herrera, Lozano and Verdegay, 1996; Satoh, Yamamura and Kobayashi, 1996; Walters, 1998), and these produce more than two offspring for each group of parents. In this case, an offspring selection strategy limits the number of offspring that will become population members. The most common strategy selects the best offspring as elements for the next population.

In this paper, we propose a new crossover operator that extends the BLX-α crossover operator presented by Eshelman and Schaffer (1993). It is called parent-centric BLX-α (PBX-α) and is described as follows. Let us assume that X = (x_1, ..., x_n) and Y = (y_1, ..., y_n) are two real-coded chromosomes that have been selected for the application of the crossover operator, and that each gene lies in an interval [a_i, b_i]. PBX-α generates (randomly) one of these two possible offspring:

  Z1 = (z1_1, ..., z1_n), where z1_i is a number chosen randomly (uniformly) from the interval [l1_i, u1_i], with l1_i = max{a_i, x_i - I_i·α} and u1_i = min{b_i, x_i + I_i·α}; or

  Z2 = (z2_1, ..., z2_n), where z2_i is chosen from [l2_i, u2_i], with l2_i = max{a_i, y_i - I_i·α} and u2_i = min{b_i, y_i + I_i·α},

where I_i = |x_i - y_i|.
This operator has the following features:
- It is a parent-centric crossover operator, because it assigns more probability to creating offspring near the parents than anywhere else in the search space. Studies carried out in Deb, Anand and Joshi (2002) have shown that these operators arise as a meaningful and efficient way of solving real-parameter optimization problems.
- The degree of diversity induced by PBX-α may be easily adjusted by varying its associated operator parameter α: the greater the α value, the higher the variance (diversity) introduced into the population.
- This operator assigns children solutions proportionally to the spread of the parent solutions. Thereby, it gives the RCGAs that use it the potential to exhibit self-adaptation.
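As an illustration, a minimal Python sketch of the PBX-α definition above is given below; the function names, the default α value and the explicit bounds handling are choices of ours, not details fixed by the paper:

import random

def pbx_alpha(x, y, lower, upper, alpha=0.8):
    # Offspring centred on the first parent x; spread proportional to I_i = |x_i - y_i|.
    child = []
    for x_i, y_i, a_i, b_i in zip(x, y, lower, upper):
        interval = abs(x_i - y_i)
        low = max(a_i, x_i - interval * alpha)    # l_i = max{a_i, x_i - I_i*alpha}
        high = min(b_i, x_i + interval * alpha)   # u_i = min{b_i, x_i + I_i*alpha}
        child.append(random.uniform(low, high))   # z_i drawn uniformly from [l_i, u_i]
    return child

def pbx_alpha_offspring(x, y, lower, upper, alpha=0.8):
    # PBX-alpha randomly produces one of the two possible offspring:
    # one centred on x (Z1) or one centred on y (Z2).
    if random.random() < 0.5:
        return pbx_alpha(x, y, lower, upper, alpha)
    return pbx_alpha(y, x, lower, upper, alpha)

Repeated calls with parents that lie close together concentrate the offspring around them, which is precisely the self-adaptive behaviour exploited later by the XHC.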

3 Real-Coded Memetic Algorithms

In this paper, the combination of RCGAs with some type of LS mechanism is denoted RCMAs. They are motivated by the apparent need to employ both a global and an LS strategy to provide an effective global optimisation method (Hart, 1994). RCMA instances presented in the literature may be classified into two different groups:
- Hybrid RCGAs. They use efficient LIPs on continuous domains, e.g., hill-climbers for nonlinear optimisation (such as Quasi-Newton, conjugate gradient, SQP, random linkage, and Solis and Wets) to efficiently refine solutions. Examples are found in Hart (1994), Hart, Rosin, Belew and Morris (2000), Joines and Kay (2002), Houck, Joines, Kay and Wilson (1997), Mühlenbein, Schomisch and Born (1991), Renders and Bersini (1994), Renders and Flasse (1996), Rosin, Halliday, Hart and Belew (1997), and Zhang and Shao (2001). A common way to use an LIP in hybrid RCGAs is to apply it to every member of each population. The resulting solutions replace the population members and are used to generate the next population under selection and recombination (so-called Lamarckian evolution). An important variation on this scheme is the use of a small LS probability (Hart, 1994), i.e., the LIP is only applied to members with some (typically small) fixed probability. Moreover, there is an alternative to Lamarckian evolution, the Darwinian evolution, in which the solution resulting from the LIP is discarded and only its fitness influences the search, changing the fitness landscape. A different type of hybridisation between LIPs and RCGAs concerns the construction of new classes of evolutionary algorithms, which are designed using the foundational ideas of LIPs. Two examples are the evolutionary pattern search algorithm (Hart, 2001a; Hart, 2001b) and the evolutionary gradient search procedure (Salomon, 1998).
- RCMAs with crossover-based LS algorithms. The crossover operator is a recombination operator that produces elements around the parents. For that reason, it may be considered to be a move operator for an LS strategy. In addition, as we mentioned in Section 2, there are special real-parameter crossovers having a self-adaptive nature, in that they can generate offspring adaptively according to the distribution
of the parents without any adaptive parameter. With the passing of generations, the RCGA loses diversity due to the selective pressure. Under this circumstance, these crossovers create offspring distributed densely around the parents, favouring local tuning. Therefore, such operators arise as appropriate candidates for building crossover-based LS algorithms (XLS). The next section reviews instances of RCMAs that work with XLS.

Different RCMA instances based on XLS have been proposed in the literature. They include the following:
- Minimal generation gap (MGG). This steady-state RCGA model was originally suggested by Satoh, Yamamura and Kobayashi (1996) and later used in a number of studies (Kita, Ono and Kobayashi, 1999; Tsutsui, Yamamura and Higuchi, 1999). A generation alternation is done by applying the crossover operation a number of times to a pair of parents randomly chosen from the population. From the parents and their offspring, the best individual is selected. In addition, a random individual is selected using the roulette wheel technique. These two individuals then replace the original parents. The elite individual is selected to produce selective pressure and the random one is selected to introduce diversity into the population. No mutation is applied under this mechanism.
- Generalized generation gap (G3). Deb, Anand and Joshi (2002) modify the MGG model to make it computationally faster by replacing the roulette-wheel selection with a block selection of the best two solutions. The G3 model also preserves elite solutions from the previous iteration. In G3, the recombination and selection operators are intertwined in the following manner (a code sketch of one such cycle is given at the end of this list):
  1. From the population, select the best parent and a number of other parents randomly.
  2. Generate offspring from the chosen parents using a multiparent crossover operator.
  3. Choose two elements at random from the population.
  4. Form a combined sub-population of the chosen two elements and the offspring, choose the best two solutions, and replace the chosen two elements with these solutions.

The justification for the design of MGG and G3 is the following. Once a standard RCGA has found fit areas of the search space, it searches over only a small fraction of the neighbourhood around each search point. It must derive its power from integrating multiple single-neighbourhood explorations in parallel over successive generations of a population. This "many points, few neighbours" strategy is in direct contrast to a hill-climber, which potentially focuses effort on a greater fraction of the search neighbourhood of one point, but only around one point at a time. The latter strategy might be called "few points, many neighbours" (O'Reilly and Oppacher, 1995). Precisely, MGG and G3 implement this strategy by using crossover operators with multiple descendants. The idea is to induce an LS on the neighbourhood of the parents involved in the crossover. In this way, this type of crossover operators constitutes an XLS.
- Family competition (FC). The FC model of Yang and Kao (2000) includes an XLS that explores the neighbourhood of an element by applying crossover repeatedly with different mates. During the FC procedure, each individual sequentially becomes the "family father". With a given probability, this family father and another solution randomly chosen from the rest of the parent population are used as the parents in a crossover operation; the new offspring is then operated on by mutation to generate a further offspring. For each family father, this procedure is repeated a fixed number of times. Finally, that many solutions are produced, but only the solution with the best fitness value survives. Later, replacement selection is used to select the better one from the family parent and its best individual. The FC principle is that each individual in the population does an LS of that length and only the best offspring survives. Since the solutions are created from the same family father and undergo selection, the family competition strategy is similar to (1, λ) selection. The authors suggested that FC is a good way to avoid premature convergence but also to keep the spirit of local searches.
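As promised above, here is a compact Python sketch of one G3 recombination-selection cycle; the weighted blend used as the multiparent operator is only a stand-in for the crossover actually used by Deb, Anand and Joshi, and the function and parameter names are ours:

import random

def g3_cycle(population, fitness, n_parents=3, n_offspring=2):
    # One G3 cycle on a list of real-coded vectors (fitness minimised).
    # 1. Select the best parent and some other parents at random.
    best = min(range(len(population)), key=lambda i: fitness(population[i]))
    others = random.sample([i for i in range(len(population)) if i != best],
                           n_parents - 1)
    parents = [population[best]] + [population[i] for i in others]

    # 2. Generate offspring with a multiparent recombination
    #    (placeholder: random weighted blend of the parents).
    def blend():
        weights = [random.random() for _ in parents]
        total = sum(weights)
        return [sum(w * p[d] for w, p in zip(weights, parents)) / total
                for d in range(len(parents[0]))]
    offspring = [blend() for _ in range(n_offspring)]

    # 3. Choose two population members at random.
    r1, r2 = random.sample(range(len(population)), 2)

    # 4. Keep the best two among those members and the offspring,
    #    and write them back into the chosen slots.
    pool = offspring + [population[r1], population[r2]]
    pool.sort(key=fitness)
    population[r1], population[r2] = pool[0], pool[1]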

4 Real-Parameter Crossover Hill-Climbing

Hill-climbing is an LS algorithm that commences from a single solution point. At each step, a candidate solution is generated using a move operator of some sort. The algorithm simply moves the search from the current solution to the candidate solution if the candidate has better or equal fitness. Crossover hill-climbing (XHC) was first described by Jones (1995) and O'Reilly and Oppacher (1995) as a special XLS approach. Its basic idea is to use hill-climbing as the move-accepting criterion of the search and to use crossover as the move operator.

In this paper, we present a real-parameter XHC that maintains a pair of parents and repeatedly performs crossover on this pair until some number of offspring, n_off, is reached. Then, the best offspring is selected, and it replaces the worst parent, only if it is better. The process iterates n_it times and returns the two final current parents. This XHC model requires values for n_off and n_it, and a starting pair of parents. Although here we are using the BLX-α crossover, it must be emphasized that our model can be instantiated with any other standard real-coded crossover. Figure 1 shows the pseudocode algorithm for XHC.

Crossover-hill-climbing(p1, p2, n_off, n_it)
1. Take the starting pair of parents, p1 and p2.
2. Repeat n_it times:
3.   Generate n_off offspring, o_1, ..., o_n_off, performing crossover on p1 and p2.
4.   Evaluate o_1, ..., o_n_off.
5.   Find the offspring with the best fitness value, o_best.
6.   Replace the worst among p1 and p2 with o_best, only if it is better.
7. Return p1 and p2.
Figure 1: Pseudocode algorithm for XHC
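To make Figure 1 concrete, here is a minimal Python sketch assuming fitness is to be minimised; the crossover is injected as a callable so that PBX-α, BLX-α or any other real-coded crossover can be plugged in, and the tie-breaking details are our own choices:

def xhc(p1, p2, n_off, n_it, fitness, crossover):
    # Crossover hill-climbing (Figure 1). Returns the two final parents, best one first.
    f1, f2 = fitness(p1), fitness(p2)
    for _ in range(n_it):
        # Steps 3-5: generate n_off offspring from the current pair and keep the fittest.
        scored = [(fitness(o), o) for o in (crossover(p1, p2) for _ in range(n_off))]
        f_best, o_best = min(scored, key=lambda pair: pair[0])
        # Step 6: replace the worst of the two current parents, only if o_best is better.
        if f1 >= f2:
            if f_best < f1:
                p1, f1 = o_best, f_best
        elif f_best < f2:
            p2, f2 = o_best, f_best
    # Step 7: return the (possibly updated) pair.
    return (p1, p2) if f1 <= f2 else (p2, p1)

With the PBX-α sketch from Section 2 one could call, for example, xhc(a, b, n_off=3, n_it=3, fitness=f, crossover=lambda u, v: pbx_alpha_offspring(u, v, lower, upper)); the parameter values here are placeholders, not the settings tuned in Section 6.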
An XHC instance may be obtained using the real-parameter crossover operator presented in Section 2, the PBX-α operator. The XHC proposed may be conceived as a micro selecto-recombinative RCGA model that employs the minimal population size necessary to allow the crossover to be applicable, i.e., two chromosomes. The competition process (step 6) differentiates this mechanism from the simple application of a crossover operator with multiple descendants (Section 2). Precisely, our motivation was the definition of an XLS model that allows the self-adaptive capacity of real-parameter crossover operators to be exploited inside the XLS itself, i.e., the design of a self-adaptive XLS. The competition process may modify the current pair of parents during the XHC run, changing the spread of the parents. Since real-parameter crossovers generate offspring according to the distribution of the parents, the convergence or divergence of the XLS will be accomplished without any adaptive parameter.

It has been argued that incest should be kept to a minimum within evolutionary algorithms to avoid premature convergence (Craighurst and Martin, 1995; Eshelman and Schaffer, 1991; Schaffer, Mani, Eshelman and Mathias, 1999). However, it is precisely convergence that we need to achieve with local search (crossover-based in this case) in a continuous domain. Hence, our method can also be understood as promoting incest between the best of the current parents and the best of their offspring.

Most well-known continuous local searchers (derivative-free), such as the Solis and Wets' algorithm (Solis and Wets, 1981) and the (1+1)-evolution strategy (Rechenberg, 1973; Schwefel, 1981), make use of explicit control parameters (e.g., step sizes) to guide the search. In addition, they adapt the parameters in such a way that the moves being made may be of varying sizes, depending on the success of previous steps. The rules for updating the parameters capture some lawful operation of the dynamics of the algorithm over a broad range of problems. In this case, there is an explicit parameter adaptation.

The idea of employing GA models as hill-climbers is not new; Kazarlis, Papadakis, Theocharis and Petridis (2001) propose the use of a microgenetic algorithm (MGA), a GA with a small population that evolves for a few generations, as a generalized hill-climbing operator. They combine a standard GA with the MGA to produce a hybrid genetic scheme. In contrast to conventional hill-climbers that attempt independent steps along each axis, an MGA operator performs genetic LS. The authors claimed that the MGA operator is capable of evolving paths of arbitrary direction leading to better solutions and following potential ridges in the search space regardless of their direction, width, or even discontinuities.

5 Real-Coded MA with Crossover Hill-Climbing

In this section, we present our proposed RCMA. It is a steady-state RCMA that invokes the real-parameter XHC:
- The mission of the XHC is to obtain the best possible accuracy levels, leading the population toward the most promising search areas and producing an effective refinement on them. To accomplish this goal, our approach relies on an incest promotion mechanism.
- The steady-state RCMA is designed to promote high population diversity levels. It attempts to induce reliability in the search process by ensuring that different promising search zones are focused on by the XHC throughout the run.
In Section 5.1, we introduce the foundations of steady-state MAs. In Section 5.2, we outline the different steps that constitute the steady-state RCMA. In Section 5.3, we explain the resources considered to favour diversity in the population of this algorithm. Finally, in Section 5.4, we present an adaptive mechanism that assigns every chromosome a probability of being refined by XHC.

5.1 Steady-State MAs

In steady-state GAs (SSGAs), usually only one or two offspring are produced in each generation. Parents are selected to produce offspring, and then a decision is made as to which individuals in the population to select for deletion in order to make room for the new offspring. SSGAs are overlapping systems, since parents and offspring compete for survival. The basic algorithm step of the SSGA is shown in Figure 2.

1. Select two parents from the population.
2. Create an offspring using crossover and mutation.
3. Evaluate the offspring with the fitness function.
4. Select an individual in the population, which may be replaced by the offspring.
5. Decide if this individual will be replaced.
Figure 2: Pseudocode algorithm for the SSGA model

These steps are repeated until a termination condition is achieved. In step 4, one can choose the replacement strategy (e.g., replacement of the worst, the oldest, or a randomly chosen individual). In step 5, one can choose the replacement condition (e.g., replacement if the new individual is better, or unconditional replacement). A widely used combination is to replace the worst individual only if the new individual is better. We will call this strategy the standard replacement strategy. In Goldberg and Deb (1991), it was suggested that the deletion of the worst individuals induces a high selective pressure, even when the parents are selected randomly.

Although SSGAs are less common than generational GAs, Land (1998) recommended their use for the design of steady-state MAs (SSGAs plus LS), because they may be more stable (as the best solutions do not get replaced until the newly generated solutions become superior) and they allow the results of LS to be maintained in the population. The LS is applied, after Step 3, to the offspring created in Step 2. Then, Steps 4 and 5 are followed to address the inclusion of the resulting refined solution into the population. LS need not be applied to every solution being generated, because the additional function evaluations required for LS can be very expensive. Thus, a parameter called the LS probability, p_LS, is introduced, which determines the probability that LS will be invoked to refine a new chromosome.

Steady-state MAs integrate global and local search more tightly than generational MAs (Land, 1998). This interleaving of the global and local search phases allows the two to influence each other; e.g., the SSGA chooses good starting points, and LS provides an accurate representation of that region of the domain. In contrast, generational MAs proceed in alternating stages of global and local search: first, the generational GA produces a new population, then LS is performed. The specific state of LS is generally not kept from one generation to the next, though LS results do influence the selection of individuals.
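In code, the standard replacement strategy just described (steps 4 and 5 of Figure 2, worst-if-better variant) reduces to a few lines; the sketch below assumes minimisation and parallel lists of individuals and fitness values:

def standard_replacement(population, fitnesses, newcomer, f_new):
    # Replace the worst population member with the newcomer, only if the newcomer is better.
    worst = max(range(len(population)), key=lambda i: fitnesses[i])
    if f_new < fitnesses[worst]:
        population[worst] = newcomer
        fitnesses[worst] = f_new
        return True
    return False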

5.2 Steady-State RCMA Model

The main features of the steady-state RCMA proposed are (Figure 3):
- High population diversity levels are favoured by means of the combination of the PBX-α crossover (Section 2), with a high value for its associated parameter α, and the negative assortative mating strategy (Fernandes and Rosa, 2001) (Section 5.3.1). Diversity is promoted as well by means of the BGA mutation operator (Mühlenbein and Schlierkamp-Voosen, 1993) (Section 5.3.2).
- The real-parameter XHC presented in Section 4 is invoked to refine the new chromosomes created from the application of crossover and mutation. The XHC requires two starting chromosomes to commence its operation; the first one is the new chromosome generated. In this study, we choose the current best element in the population as the second parent. However, other alternatives for the selection of this parent are possible, such as random selection or the mating strategies offered in Fernandes and Rosa (2001), Huang (2001), and Ronald (1993). After the XHC processing, two solutions are returned, the two final parents. The fittest parent may improve the current best element in the population; in this case, it will be included in the population and the old best individual will be removed. The other parent returned by XHC will be introduced in the population following the standard replacement strategy (Section 5.1).
- A fitness-based adaptive method is considered to determine the LS probability, p_LS, for each new chromosome. Those chromosomes that are fitter than the current worst individual in the population receive the highest p_LS value (p_LS = 1). In this way, they are refined using XHC. Chromosomes that do not accomplish this requirement obtain a low p_LS value, p_LS = 0.0625, which is considered appropriate for many practical cases (Hart, 1994; Rosin, Halliday, Hart and Belew, 1997; Hart, Rosin, Belew and Morris, 2000).

The local/global search ratio (L/G ratio) shown by the RCMA, defined as the percentage of evaluations spent doing local search out of the total assigned to the algorithm's run, is governed by three parameters: n_it, n_off, and p_LS. The L/G ratio determines the trade-off between the exploration abilities of the RCMA and the exploitation abilities of the XHC, and thus it has an important influence on the final performance of the algorithm on a particular problem. The higher the values for these parameters are, the nearer to 100% the L/G ratio is. For complicated problems, low L/G ratio values become more effective, because exploration is favoured by global search, whereas for non-complex problems, higher L/G ratio values are convenient, because the XHC may exploit the search space for a long time, taking advantage of its ability to refine solutions. With the employment of the adaptive p_LS mechanism, we attempt to adjust (as well as possible) the L/G ratio to adequate values that allow high performance to be achieved on the particular problem to be solved.

1. Initialize population.
2. While (not termination-condition) do
3.   Use Negative-assortative-mating-strategy to select two parents.
4.   Apply PBX-crossover and BGA-mutation to create an offspring, c_new.
5.   Evaluate c_new.
6.   Invoke Adaptive-p_LS-mechanism to obtain p_LS for c_new.
7.   If u(0, 1) < p_LS then
8.     Find the best chromosome in the population, c_best.
9.     Perform Crossover-hill-climbing(c_new, c_best, n_off, n_it); two chromosomes, c1_xhc and c2_xhc, are returned (c1_xhc is the best).
10.    Replace c_best with c1_xhc, only if it is better.
11.    Utilize Standard-replacement-strategy to insert c2_xhc in the population.
12.  Else
13.    Employ Standard-replacement-strategy to insert c_new in the population.
Figure 3: Pseudocode algorithm for the steady-state RCMA proposed

5.3 Resources to Favour Population Diversity

Population diversity is crucial to a GA's ability to continue the fruitful exploration of the search space. When a lack of population diversity takes place too early, a premature stagnation of the search is caused. Under these circumstances, the search is likely to be trapped in a local optimum before the global optimum is found. This problem, called premature convergence, has long been recognized as a serious failure mode for GAs (Eshelman and Schaffer, 1991).

In the MA literature, keeping population diversity while using LS together with a GA is always an issue to be addressed, either implicitly or explicitly (Krasnogor, 2002). We will now review some of these approaches:
- Mühlenbein, Schomisch and Born (1991) integrate LS procedures into distributed GAs, which keep, in parallel, several

sub-populations that are processed by genetic algorithms, each one being independent from the others. Their advantage is the preservation of diversity due to the semi-isolation of the sub-populations.
- Merz (2000) shows many different combinations of LS and GA for the travelling salesman problem while defining specific-purpose crossover and mutation operators. The crossover used by the authors is the DPX crossover, which was specifically designed to preserve diversity by keeping constant the appropriately defined Hamming distance between the two parent tours and the offspring generated. In addition, a restart technique is employed. During the run, the solutions contained in the population move closer together until they are concentrated on a small fraction of the search space: the search is said to have converged. The restarts perturb the population so that the points are again far away from each other. Thus, it represents an escape mechanism from suboptimal regions of the search space.
- Nagata and Kobayashi (1997) describe a powerful MA with an intelligent crossover, in which the local searcher is embedded in the genetic operator. Furthermore, populations that are a couple of orders of magnitude bigger than those used by other authors were employed, with the expected increase in diversity.
- Krasnogor and Smith (2000) introduce a hybridization scheme for an MA based on an adaptive helper that uses statistics from the GA's population. Their MA is composed of two optimization processes, a GA and a helper that is a Monte Carlo method, which serves two purposes: first, when the population is diverse, it acts like an LS procedure, and second, when the population converges, its goal is to diversify the search.
- Krasnogor and Smith (2001) integrate two mechanisms to promote diversity: on one hand, they employ a cohort of local searchers within the MA, as each one of them "sees" a different landscape. This, in turn, allows individuals to avoid the local optima of one operator by using a different local searcher. On the other hand, they employ self-adaptation for the selection of which local searcher to use at different stages of the search.
- Seront and Bersini (2000) present an MA with a clustering method that reduces the total cost of LS by avoiding the multiple rediscoveries of local optima. In addition, the clustering method supplies information that can be used to maintain the diversity in the population. Kemenade (1996) presents an MA model based on evolution strategies that captures similar ideas.
- Finally, Parthasarathy, Goldberg and Burns (2001) address the issue of handling explicitly multimodal functions using MAs. They use the adaptive niching method via coevolutionary sharing of Goldberg and Wang (1997) to stably maintain a diverse population throughout the search.

As we have mentioned, the steady-state RCMA proposed employs two mechanisms to promote high degrees of population diversity: the negative assortative mating and the BGA mutation. Next, we explain their principal characteristics.

5.3.1 Negative Assortative Mating

The mating selection mechanism determines the way the chromosomes are mated for applying the crossover to them (Step 1 in Figure 2). Mates can be selected so as to favour population diversity (Craighurst and Martin, 1995; Eshelman, 1991; Fernandes and Rosa, 2001). A way to do this is the negative assortative mating mechanism. Assortative mating is the natural occurrence of mating between individuals of similar phenotype more or less often than expected by chance. Mating between individuals with similar phenotype more often than expected is called positive assortative mating, and less often is called negative assortative mating.

Fernandes and Rosa (2001) assume these ideas in order to implement a parent selection mechanism in the crossover operator. A first parent is selected by the roulette wheel method and n_ass chromosomes are selected with the same method (in our experiments, all the parents are selected at random). Then, the similarity between each of these chromosomes and the first parent is computed (the similarity between two real-coded chromosomes is defined as the Euclidean distance between them). If the assortative mating is negative, then the one with the least similarity is chosen; if it is positive, the genome that is most similar to the first parent is chosen to be the second parent. Clearly, the negative assortative mating mechanism increases genetic diversity in the population by mating dissimilar genomes with higher probability.

The steady-state RCMA proposed (Figure 3) combines the negative assortative mating (which favours high population diversity levels) (Step 3) with the standard replacement strategy (which induces high selective pressure, as mentioned in Section 5.1) (Steps 11 and 13). In this way, many dissimilar solutions are produced during the run and only the best ones are conserved in the population, allowing diverse and promising solutions to be maintained. The filtering of high diversity by means of high selective pressure has been suggested by other authors as a GA strategy to provide effective search. For example, in Shimodaira (1996), an algorithm is proposed employing large mutation rates and population-elitist selection, and in Eshelman (1991), a GA is proposed which combines a disruptive crossover operator with a conservative selection strategy.

5.3.2 BGA Mutation Operator

The mutation operator serves to create random diversity in the population (Spears, 1993). In the case of working with real coding, a topic of major importance involves the control of the proportion or strength with which real-coded genes are mutated, i.e., the step size (Bäck, 1996).

Different techniques have been suggested for the control of the step size during the RCGA's run (Herrera and Lozano, 2000a; Smith and Fogarty, 1997). One example is non-uniform mutation, which is considered to be one of the most suitable mutation operators for RCGAs (Herrera, Lozano and Verdegay, 1998). Its main idea is to decrease the step size as the execution advances. In this way, it makes a uniform search in the initial space and a very local one at a later stage, favouring local tuning. An alternative method is the one introduced in Krasnogor and Smith (2001), where a discrete set of mutation rates is made available to the algorithm, which can self-adapt to use any of them. Some advantages of using this model were discussed in the reference mentioned above and analysed further in Smith (2001).

In our case, XHC is responsible for the local tuning of the solutions. Hence, we really require a mutation operator that continuously provides acceptable levels of diversity. One of the mutation operators that behaves in this manner is the BGA mutation operator (Mühlenbein and Schlierkamp-Voosen, 1993). Let us suppose C = (c_1, ..., c_i, ..., c_n) is a chromosome and c_i in [a_i, b_i] is a gene to be mutated. The gene c'_i resulting from the application of this operator is:

  c'_i = c_i ± rang_i · Σ_{k=0}^{15} α_k · 2^{-k},

where rang_i defines the mutation range and is normally set to 0.1 · (b_i - a_i). The + or - sign is chosen with a probability of 0.5, and α_k in {0, 1} is randomly generated with p(α_k = 1) = 1/16. Values in the interval [c_i - rang_i, c_i + rang_i] are generated using this operator, with the probability of generating a neighbourhood of c_i being very high. The minimum possible proximity is produced with a precision of rang_i · 2^{-15}.
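In code, the operator reads as follows; the per-gene mutation probability and the clipping of the mutated gene to its domain are additions of ours, not part of the published definition:

import random

def bga_mutation(chromosome, lower, upper, p_mutation=0.1, rang_factor=0.1):
    # BGA mutation: c_i +/- rang_i * sum_{k=0..15} alpha_k * 2**(-k),
    # with rang_i = rang_factor * (b_i - a_i) and p(alpha_k = 1) = 1/16.
    mutant = list(chromosome)
    for i, (c_i, a_i, b_i) in enumerate(zip(chromosome, lower, upper)):
        if random.random() >= p_mutation:
            continue
        rang = rang_factor * (b_i - a_i)
        step = sum(2.0 ** -k for k in range(16) if random.random() < 1.0 / 16.0)
        sign = 1.0 if random.random() < 0.5 else -1.0
        mutant[i] = min(b_i, max(a_i, c_i + sign * rang * step))  # clipping added here
    return mutant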

5.4 Adaptive p_LS Mechanism

LS typically operates over a small portion of the total visited solutions, because the additional function evaluations required for local search can be very expensive. The question naturally arises as to how best to select the solutions which will undergo LS. Land (1998) introduced the concept of "sniffs": individual solutions are subject to a limited amount of local search (i.e., a sniff). Moreover, those solutions that were in the proximity of a promising basin of attraction received, at a later stage, an extended CPU budget; with that budget, further iterations of local search were performed. Hart (1994) addressed this issue and proposed different mechanisms for adaptively calculating the LS probability with which LS is applied to each new chromosome:
- Fitness-based adaptive methods use the fitness information in the population to bias the LS toward individuals that have better fitness. They modify the LS probability of an individual based on the relationship of its fitness to the rest of the population. These methods assume that individuals with better fitness are more likely to be in the basins of attraction of good local optima.
- Distribution-based adaptive methods use redundancy in the population to avoid performing unnecessary local searches. In particular, selected solutions will be far away from each other and ideally span as much of the search space as the population itself. This helps ensure that locally optimised solutions cover the search space, and it tends to prevent premature convergence.

Since the steady-state RCGA proposed attempts to maintain a diverse population, we have focussed our attention on fitness-based adaptive methods. In particular, we have explored a simple adaptive scheme to assign an LS probability value to each chromosome, c_new, generated by crossover and mutation:

  p_LS(c_new) = 1 if f(c_new) is better than f(c_worst), and p_LS(c_new) = 0.0625 otherwise,

where f is the fitness function and c_worst is the current worst element in the population. We consider that a nascent chromosome, c_new, that is better than the current worst element is a promising element, and thus it deserves local tuning. For this reason, the adaptive approach ensures that it will undergo LS by means of the XHC application. In addition, the resulting chromosomes, supposedly more promising, will form part of the population. In this way, the steady-state RCMA maintains chromosomes that provide precise information about the quality of fitting search regions. On the other side, when the above circumstance is not accomplished, a low value for p_LS is assumed (p_LS = 0.0625). As was observed by Hart (1994), in many cases, applying LS to as little as 5% of each population results in faster convergence to good solutions. The reader must note that many other adaptive MA approaches are to be found, e.g., in Espinoza, Minsker and Goldberg (2001), Krasnogor (2002), and Magyar, Johnson and Nevalainen (2000).

6 Experiments

We have carried out different minimisation experiments on the test suite described in the Appendix in order to determine the performance of the RCMA with XHC and to study its main features. We have planned these experiments as follows:
- First, in Section 6.1, we analyse the behaviour of the RCMA when varying the n_it parameter, with the aim of determining the most robust value for this parameter. All the posterior experiments are accomplished using this value.
- In Section 6.2, we examine the effects on the exploration/exploitation balance resulting from the combination of the two main ingredients of our proposal, the negative assortative mating strategy and the XHC.
- In Section 6.3, we compare the XHC model with an XLS based on crossover with multiple descendants. Here, our purpose is to determine whether the intrinsic self-adaptation of XHC really works, inducing a promising solution refinement.
- In Section 6.4, we investigate whether the adaptive p_LS mechanism (Section 5.4) tunes the L/G ratio of the RCMA depending on the particular problem to be solved, allowing robust operation to be achieved.
- In Section 6.5, we attempt to demonstrate the superiority of the proposed approach by means of an extensive comparison with a number of challenging competitors chosen from the MA literature.
- Finally, in Section 6.6, we examine the search bias associated with the PBX-α crossover operator, with the aim of checking whether this operator has any bias towards the centre of the search space.

6.1 Influence of the n_it Parameter

In our first empirical study, we investigate the influence of n_it (the number of iterations accomplished by XHC) on the performance of the RCMA proposed, since it has a significant effect upon the L/G ratio.

In particular, we analyse the behaviour of the algorithm when different values for this parameter are considered. A fixed value for n_off (the number of offspring generated from the current pair of parents) was assumed.

[Table 1: Results with different values for n_it (A and B performance measures for each test problem).]

We have implemented an instance of the RCMA that applies an XHC based on the PBX-α operator (Section 2). The mutation probability is kept fixed and the population size is 60 chromosomes. The n_ass parameter associated with the negative assortative mating (Section 5.3.1) is set to a high value, n_ass = 25.

We have considered high values for α and n_ass with the aim of favouring the production of individuals that introduce high diversity levels into the population. The algorithm was executed 50 times, each one with a maximum of 100,000 evaluations. Table 1 shows the results obtained for different n_it values. The performance measures used are listed below. For each problem, the best values for these measures are printed in boldface.
- A performance: average of the best fitness function value found at the end of each run.

performance: value of the tness function of the best solution eached during all the uns. If the global optimum has been located thr oughout some uns, this per formance measur will epr esent the per centage of uns in which this happens (in this case, '%' sign appears along with the values for this performance measur e). visual inspection of table allows one to conclude (as expected) that the best measur for each pr oblems is eached with dif fer ent it values: For the multimodal test functions, Ras and Gr and all the eal-world pr oblems (which ar very complex), low values for it allow

A visual inspection of Table 1 allows one to conclude (as expected) that the best measure for each problem is reached with different n_it values:
- For the multimodal test functions Ras and Gri, and all the real-world problems (which are very complex), low values for n_it allow the best results to be achieved. Low n_it values force the global search through the steady-state RCMA, favouring the generation of diversity. This conduct is essential for tackling these types of problems.
- Higher values for n_it provide an elongated operation of XHC. For the unimodal test functions sph, Ros and Sch, this allows an effective refinement of solutions to be accomplished.

In Krasnogor and Smith (2001), a self-adapting mechanism for the selection of the number of iterations of the local searchers was introduced. The results in Table 1 are a strong indication that such a mechanism might provide additional benefits to real-coded MAs as well. We have chosen a particular value for n_it in order to allow the incoming study of our proposal and the comparison with other MA models to be easily understandable. We consider that with this value an acceptable robustness is achieved with regard to all the n_it values analysed (see Table 1). In many cases, the results offered with this value are similar to the best ones.

6.2 Synergy Between Negative Assortative Mating and XHC

Two important factors of the RCMA proposed are the promotion of diversity (exploration) by means of the negative assortative

mating strategy (NAM) and the refinement of solutions carried out by XHC (exploitation). In this section, we attempt to examine whether the combination of these two ingredients decisively affects the performance of the RCMA, i.e., whether there exists synergy between them. The synergy will occur when the combination of NAM and XHC performs better than the sole usage of either one of them (Yoon and Moon, 2002).

We have used Table 2 in order to accomplish this investigation. It shows the results of the RCMA (with the chosen n_it value) when either NAM, or XHC, or both of them are not applied by the RCMA. A two-sided t-test (H0: the means of the two groups are equal; H1: the means of the two groups are not equal) at a 0.05 level of significance was applied in order to ascertain whether differences in the performance of the RCMA based on both NAM and XHC are significant when compared against the other alternatives (RCMA without NAM, without XHC, or without both of them). The direction of any significant difference is denoted by: a plus sign (+) for an improvement in performance, a minus sign (-) for a reduction, or no sign for a non-significant difference.
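Such a test can be reproduced with a standard two-sample t-test, for example with SciPy; the equal-variance default and the sign convention below are our reading of the description, not settings reported by the authors:

from scipy import stats

def significance_mark(proposal_runs, competitor_runs, alpha=0.05):
    # Two-sided t-test on the end-of-run results of two algorithms (minimisation).
    # Returns '+', '-', or '' following the convention used in the tables.
    _, p_value = stats.ttest_ind(proposal_runs, competitor_runs)
    if p_value >= alpha:
        return ''                                   # no significant difference
    mean_prop = sum(proposal_runs) / len(proposal_runs)
    mean_comp = sum(competitor_runs) / len(competitor_runs)
    return '+' if mean_prop < mean_comp else '-'    # '+' = proposal significantly better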
[Table 2: Results with different combinations of NAM and XHC (performance measures for each test problem of the RCMA without NAM and XHC, without XHC, without NAM, and with both NAM and XHC).]

The proposal (RCMA with NAM and XHC) clearly improves on the results of the three alternatives considered. Thus, we may conclude that there exists a profitable synergy between NAM and XHC, because their combination produces a balance between exploration and exploitation that becomes determinant for the success of this algorithm.

6.3 Analysis of the Self-Adaptation of XHC

In this paper, we have presented the XHC model as a self-adaptive XLS technique (Section 4). In addition, we have explained that the self-adaptation may be accomplished when we use a real-parameter crossover operator with self-adaptation abilities (Beyer and Deb, 2001) together with the competition process integrated in XHC (Step 6 in Figure 1). In this section, we attempt to determine whether the self-adaptation of XHC really allows an effective refinement of the solutions to be achieved.

In particular, we compare an RCMA based on XHC (denoted as RCMA-XHC) with a similar algorithm that uses an XLS which simply generates n_off · n_it offspring from the starting pair of parents and returns the two best individuals among parents and offspring (it will be called RCMA-XLS). Since RCMA-XHC works with n_off and n_it, RCMA-XLS will generate the same number of chromosomes for each XLS application. In order to make an adequate comparison, we have disabled the adaptive p_LS mechanism and considered a fixed value for p_LS (p_LS = 0.0625). Table 3 contains the results. A t-test was performed to determine whether there exist differences in the performance of these two algorithms.

[Table 3: Comparison between RCMA-XHC and RCMA-XLS (performance measures of both algorithms for each test problem).]

RCMA-XHC provides better performance than RCMA-XLS on the unimodal functions sph, Ros and Sch. For the multimodal Gri and Ras, the t-test indicates no differences between the algorithms; however, RCMA-XHC achieves better performance than RCMA-XLS on these functions. For sle and fms (which are very complex), RCMA-XHC is outperformed by RCMA-XLS.

In XHC, the use of the competition process introduces a high selective pressure
and limits the area where the XHC acts, which is then exploited by the crossover. In general, this induces a profitable behaviour on many problems. Nevertheless, for the complex ones, the selective pressure may have negative effects, because it eliminates the capacity of XHC to produce appropriate jumps to locate more promising areas in the search space regions being refined.

6.4 Study of the Adaptive p_LS Mechanism

There are, at least, two ways to study the operation of an adaptive mechanism for GAs (Spears, 1995). The first is from the point of view of performance (test functions are commonly used to evaluate performance improvement). The second view is quite different in that it ignores performance and concentrates more on the adaptive mechanism itself, i.e., on its ability to adjust the GA configuration according to the particularities of the problem to be solved. Given these two points of view, it is natural to investigate the way in which adaptive behaviour is responsible for the performance improvement.

In this section, we tackle the study of the adaptive p_LS mechanism from the point of view of the adaptation itself. In particular, we are interested in determining whether it adjusts the L/G ratio of the RCMA proposed according to the particularities of the problem to be solved, allowing performance improvement to be achieved. Results are shown in Figure 4. For every test problem, it outlines the average of the L/G ratio found throughout the 50 runs of the RCMA proposed.

[Figure 4: L/G ratio produced by the RCMA; one ratio value (0-100%) per test problem: sph, Ros, Sch, Ras, Gri, sle, Cheb, fms.]

There are differences between the L/G ratio values for the different problems. The highest values are more fruitful for the unimodal sph, Ros and Sch, whereas lower values become more effective for the most complex problems, Cheb and fms. In fact, the L/G ratio value for sph doubles the one for fms. These differences in the L/G ratio arise as a sign of the adaptation ability (from the point of view of the adaptation itself) of the adaptive p_LS mechanism. They confirm that this method induces L/G ratios adjusted to the particular problem to be solved.
Page 18
M. Lozano, Herr era, N. Krasnogor D. Molina Although this adaptive mechanism shows signs of adaptation, we have to check whether this adaptation is indeed benecial. In or der to do so, we have executed thr ee RCMA instances with the same featur es as the pr oposal, but they use xed LS value (0.0625, 0.25, and 1, espectively). able shows their esults, which may be

compared with the ones for the proposal (denoted as Adaptive in the table) by means of the t-test.

p_LS       sph                    Ros                    Sch                    Ras
0.0625     6.0e-040+  3.0e-042    7.7e+000+  2.0e-002    1.3e-003   1.3e-005    1.1e+000   28.0
0.25       6.8e-057+  6.3e-061    3.6e+000+  1.4e-004    1.1e-006   3.7e-009    1.3e+000   40.0
1          7.4e-065+  1.1e-068    2.4e+000   9.5e-004    6.6e-008   1.0e-010    1.4e+000   22.0
Adaptive   6.5e-101   1.1e-105    2.2e+000   6.0e-004    3.8e-007   4.5e-009    1.4e+000   32.0

p_LS       Gr                     sle                    Cheb                   fms
0.0625     1.5e-002   34.0        3.3e+001   3.5e+000    1.8e+002   4.2e+000    6.0e+000   46.0
0.25       1.5e-002   28.0        8.1e+001+  2.4e+000    1.9e+002+  1.1e+001    1.0e+001   22.0
1          1.7e-002   16.0        1.1e+002+  3.2e+000    3.5e+002+  4.8e+000    1.2e+001+  20.0
Adaptive   1.3e-002   30.0        5.5e+001   7.9e-001    1.4e+002   9.2e+000    7.7e+000   40.0

Table 4: Comparison with different fixed values for p_LS

Favouring a high ratio by using p_LS = 1 allows suitable results to be obtained for the unimodal sph, Ros, and Sch, whereas the production of a low ratio considering p_LS = 0.0625 achieves the best performance for the multimodal Gr and for all the complex real-world problems. This means that global search is well-suited for complex problems and local search is useful for unimodal test functions, which is

very reasonable. More precisely, Figure 4 indicates that, for each problem, the proposal might induce a ratio with this tendency. This may explain why, in general, it returns results that are similar to the ones for the most successful instance with fixed p_LS values (the case of Ros, Ras, Gr, Cheb, and fms), or even better than all of them (the case of sph). To sum up, this study shows that the adaptation ability of the adaptive LS mechanism allows the ratios to be adjusted according to the particularities of the search space, allowing significant performance to be achieved for problems with different difficulties.

6.5 Comparison with Other RCMAs

In this subsection, we compare the performance of the RCMA with XHC(n_off, n_it) with that of other RCMAs proposed in the literature. They include: hybrid steady-state RCMAs, the G3 model (Section 3), the family competition algorithm (Section 3), and an RCMA based on the CHC algorithm (Eshelman, 1991).

Hybrid steady-state RCMA. It is a simple steady-state RCMA (Figure 3), where parents are selected at random and standard replacement is considered. Every new chromosome generated by PBX-α and BGA mutation undergoes the

Solis and Wets's LS procedure (Solis and Wets, 1981) with p_LS = 0.0625. Three instances were run with different numbers of iterations for the LS (100, 1000, and 5000). They are called SW-100, SW-1000, and SW-5000, respectively.

G3 Model. We have implemented different G3 instances that use PBX-α. They are distinguished by the value for λ (1, ..., 10, 15, 20, and 50). We will denote these algorithms as G3-λ.
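The probabilistic local-search application used by the steady-state and CHC hybrids above can be sketched as follows. This is only an illustration of the wrapper logic described in the text; the Solis and Wets (1981) procedure itself is not reproduced here and is left as a user-supplied callable.

    import random

    def maybe_refine(offspring, fitness_fn, local_search, p_ls=0.0625, ls_iterations=100):
        """With probability p_ls, refine a freshly generated chromosome with a
        local searcher for a fixed iteration budget (100, 1000 or 5000 in the
        SW-* and CHC-SW-* instances); otherwise return it unchanged.
        `local_search` is a placeholder for, e.g., the Solis and Wets procedure."""
        if random.random() < p_ls:
            return local_search(offspring, fitness_fn, ls_iterations)
        return offspring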
Algorithm       sph                    Ros                    Sch                    Ras
CHC             5.8e-031   3.1e-031    1.9e+001+  1.7e+001    2.0e-002+  1.5e-003    1.6e+001+  7.0e+000
CHC-SW-100      2.1e-014   6.7e-015    1.8e+001+  1.6e+001    2.4e+002+  9.4e+001    4.5e+001+  2.9e+001
CHC-SW-1000     9.6e-025   4.6e-026    1.5e+001+  7.3e+000    1.4e+001+  2.5e+000    6.2e+001+  4.0e+001
CHC-SW-5000     8.5e-063   3.1e-065    1.5e+001+  7.4e+000    1.2e-001+  1.6e-002    9.4e+001+  5.0e+001
G3-1            9.0e-017   7.9e-103    2.8e+001+  4.2e+000    8.3e+002+  5.3e+001    7.4e+001+  3.2e+001
G3-2            1.0e-099   7.6e-111    1.7e+001+  6.6e-002    1.3e+002+  6.8e-003    6.9e+001+  3.0e+001
G3-3            6.5e-095   1.5e-104    1.1e+001+  4.0e-005    9.3e+001+  4.3e-005    6.9e+001+  3.5e+001
G3-4            2.2e-089   5.6e-098    8.2e+000+  3.4e-005    2.9e+001+  1.8e-005    6.5e+001+  2.0e+001
G3-5            4.8e-083   1.0e-089    1.1e+001+  3.0e-004    5.0e+000+  6.9e-007    6.3e+001+  4.0e+001
G3-6            4.1e-078   1.9e-084    8.4e+000+  1.6e-003    9.2e+000   1.6e-007    6.5e+001+  3.1e+001
G3-7            2.1e-073   2.7e-078    9.0e+000+  1.1e-005    4.0e+000+  2.4e-006    6.0e+001+  2.5e+001
G3-8            4.0e-068   1.1e-072    1.3e+001+  1.4e-004    3.1e+000   7.2e-007    6.0e+001+  3.3e+001
G3-9            2.7e-064   2.3e-069    8.5e+000+  5.9e-003    2.2e+000   1.0e-006    5.0e+001+  1.4e+001
G3-10           9.9e-062   1.3e-065    8.4e+000+  9.1e-005    3.8e-001   3.4e-006    6.0e+001+  3.3e+001
G3-15           3.3e-049   6.4e-053    1.2e+001+  2.4e-003    8.5e-001   9.0e-007    5.1e+001+  1.9e+001
G3-20           2.8e-041   7.1e-044    1.7e+001+  5.0e-003    4.6e-002   1.3e-005    5.2e+001+  2.5e+001
G3-50           1.2e-021   4.6e-024    2.1e+001+  1.7e-002    3.5e-001+  4.0e-003    4.4e+001+  2.0e+001
SW-100          3.8e-020   5.6e-021    1.0e+001+  1.2e-001    2.9e-007   1.1e-017    7.6e+000+  40.0
SW-1000         6.9e-078   1.6e-175    4.5e+000+  1.7e-012    5.0e-008   6.0e-029    6.8e+001+  2.3e+001
SW-5000         2.9e-120   1.4e-322    4.3e+000+  2.4e-003    4.1e-009   1.4e-026    1.1e+002+  5.9e+001
FC              1.5e-006+  3.7e-007    2.3e+001+  2.1e+001    1.1e+002+  4.5e+001    5.5e+000+  2.2e+000
RCMA-XHC        6.5e-101   1.1e-105    2.2e+000   6.0e-004    3.8e-007   4.5e-009    1.4e+000   32.0

Algorithm       Gr                     sle                    Cheb                   fms
CHC             6.5e-003   42.0        3.9e+001   7.7e-001    3.3e+002+  3.2e+000    1.7e-018   9.1e-021
CHC-SW-100      3.4e-003   6.0         1.4e+001   3.7e+000    1.5e+002   5.3e+000    5.0e+000   2.1e-015
CHC-SW-1000     2.0e-002   4.4e-016    1.5e+002+  3.6e+001    6.6e+002+  3.4e+001    1.6e+001+  5.2e-003
CHC-SW-5000     4.4e-002   7.4e-003    3.6e+002+  1.7e+002    1.4e+003+  4.4e+002    2.0e+001+  1.2e+001
G3-1            5.1e-001   7.8e-014    3.8e+002+  3.2e+001    1.9e+003+  4.7e+001    2.1e+001+  1.1e+001
G3-2            2.7e-001   2.0         2.2e+002+  3.1e+001    7.8e+002+  4.3e+001    1.8e+001+  4.0
G3-3            2.3e-001   1.1e-016    2.1e+002+  3.2e+001    7.1e+002+  5.6e+001    1.7e+001+  4.0
G3-4            4.5e-002   8.0         1.5e+002+  1.1e+001    8.3e+002+  1.0e+001    1.9e+001+  5.0e-028
G3-5            3.1e-002   8.0         1.9e+002+  7.4e+000    8.2e+002+  1.4e+001    1.7e+001+  6.0
G3-6            3.4e-002   4.0         1.4e+002+  1.3e+001    4.9e+002+  4.3e+000    1.7e+001+  4.0
G3-7            3.6e-002   16.0        1.6e+002+  4.6e+000    5.6e+002+  6.6e+000    1.6e+001+  8.0
G3-8            2.6e-002   16.0        1.3e+002+  8.5e+000    4.3e+002+  4.9e+000    1.6e+001+  10.0
G3-9            2.9e-002   4.0         1.4e+002+  6.0e+000    5.4e+002+  3.2e+001    1.7e+001+  2.0
G3-10           2.5e-002   8.0         1.4e+002+  7.2e+000    6.3e+002+  1.9e+001    1.6e+001+  8.0
G3-15           1.7e-002   20.0        1.2e+002+  9.5e+000    4.2e+002+  2.6e+001    1.5e+001+  8.0
G3-20           2.1e-002   12.0        1.0e+002+  3.4e+000    3.3e+002+  1.4e+001    1.5e+001+  10.0
G3-50           2.0e-002   26.0        6.9e+001   3.0e+000    2.9e+002+  3.3e+000    1.2e+001+  18.0
SW-100          2.7e-002   14.0        9.1e+000   6.9e-001    1.0e+002   3.0e+000    1.2e+001+  9.1e+000
SW-1000         4.9e-004   5.6e-016    1.1e+002+  4.5e+000    3.6e+002+  8.4e+000    1.5e+001+  8.4e+000
SW-5000         2.9e-003   2.6e-015    3.3e+002+  5.4e+001    1.1e+003+  2.2e+002    2.1e+001+  1.4e+001
FC              3.5e-004   2.2e-005    2.6e+001   7.0e+000    3.9e+002+  3.9e+001    1.1e+001+  9.7e-003
RCMA-XHC        1.3e-002   30.0        5.5e+001   7.9e-001    1.4e+002   9.2e+000    7.7e+000   40.0

Table 5: Comparison with other RCMA models

Family competition. An FC instance has been built considering the original genetic

operators and values for the control parameters provided in Yang and Kao (2000).

Hybrid CHC Algorithm. CHC has arisen as a reference point in the GA literature (Herrera and Lozano, 2000b; Whitley, Rana, Dzubera and Mathias, 1996). Here, it is considered as an alternative to steady-state RCGAs, because it is based on a (μ + λ) selection strategy. Furthermore, it is very adequate for the design of RCMAs since it incorporates different techniques to promote high population diversity. We have used a real-coded implementation of CHC (a full description is found in Herrera and Lozano, 2000b) that applies PBX-α. In

addition, we have combined CHC with the Solis and Wets' LS procedure, producing an RCMA called CHC-SW. Each time CHC generates a chromosome by crossover, the LS is applied with p_LS = 0.0625. Three instances were run varying the number of iterations assigned to the LS procedure. They will be called CHC-SW-100, CHC-SW-1000, and CHC-SW-5000. Table 5 shows the results. We have included the results for the CHC algorithm (without LS). The proposal will be referred to as RCMA-XHC. A t-test was performed

to ascertain whether differences in the performance of RCMA-XHC are significant when compared against those of the other algorithms. The direction of any differences will be denoted as in Section 6.2. We may remark that, in general, RCMA-XHC outperforms all the other algorithms. Only CHC-SW-100 and SW-5000 significantly improve the results of RCMA-XHC on three problems, and CHC and SW-1000 on two problems. Therefore, we may summarize that the proposal is very competitive with state-of-the-art RCMAs.

6.6 Analysis of the Sampling Bias of PBX-α

Real-parameter crossover

operators use the variation of the population to constrain the search and bias the distribution of offspring (i.e., sampling bias). Some real-parameter crossover operators, such as BLX-α (Eshelman and Schaffer, 1993), have a sampling bias that favours the generation of offspring gathered towards the center of the region covered by the parental population. In other words, they tend to search the interpolative region intensively. Fogel and Beyer (1995) and Deb, Anand and Joshi (2002) have shown that the typical initialisation used to compare evolutionary algorithms can give false impressions of relative performance when applying crossover operators with this type of bias. If the global optimum is located in the centre of the search region covered by initialising the population uniformly at random, these crossover operators generate offspring which are essentially unbiased estimates of the global optimum. In particular, they may recombine parents from opposite sides of the origin, placing the offspring close to the center of the initialisation region. In other words, the uniform initialisation technique is assumed to introduce a bias that favours successful

identication of the global opti- mum, such that the success of strategy employing these cr ossover operators might be just an artefact of useful combination of initialisation and global optimum location. In this section, we analyze the PBX- cr ossover operator (Section 2) fr om the point of view of sampling bias. In particular we attempt to check if this operator has any bias towar ds the center of the sear ch space. This is accomplished by following Ange- line (1998), Eiben and ack (1997), and Gehlhaar and Fogel (1996), who have all ec- ommended an initialization in egions that expr

Thus, in order to study the bias of PBX-α, we have carried out two additional experiments on the test functions in which the optimum lies at the origin, i.e., sph, Sch, Ras, and Gr: first, using the typical initialization symmetric about the origin, and second, starting from a skewed initialisation where the initial population is located in a subset of the search space far apart from the global optimum. We assumed the initialisation intervals shown in Table 6. If the performance of an evolutionary algorithm based on PBX-α varied extremely little under these two initialization methods, then we may believe that this operator does not show an inherent affinity towards the center of the search space. We have run three instances of our RCMA (Figure 3) using fixed values for p_LS (0.0625, 0.25, and 1, respectively). The adaptive LS mechanism was disabled with the aim of ensuring that the comparison between the two initialisation methods is made under an equitable number of calls to the XHC procedure. The results are included in Table 7.
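The artefact discussed above can be illustrated with a deliberately simple, mean-centric recombination; this toy operator is only an illustration of the interpolation bias, not PBX-α itself, and the symmetric sph interval [-5.12, 5.12] used here is the usual domain, assumed rather than stated in this section. Under a symmetric initialisation the midpoint of two random parents is already an unbiased estimate of an optimum at the origin, whereas under a skewed interval such as the [4, 5] range listed for sph in Table 6 below this shortcut disappears.

    import numpy as np

    rng = np.random.default_rng(1)
    dim, pairs = 25, 10000

    def offspring_distance_to_origin(low, high):
        """Average distance to the origin of offspring produced by a purely
        mean-centric recombination (midpoint of two random parents)."""
        parents = rng.uniform(low, high, size=(2 * pairs, dim))
        children = 0.5 * (parents[:pairs] + parents[pairs:])
        return np.linalg.norm(children, axis=1).mean()

    print(offspring_distance_to_origin(-5.12, 5.12))  # symmetric: offspring cluster near the optimum at 0
    print(offspring_distance_to_origin(4.0, 5.0))     # skewed (Table 6 interval for sph): no such shortcut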
Test Function   Range
sph             [4, 5]
Sch             [60, 65]
Ras             [4, 5]
Gr              [580, 600]

Table 6: Ranges for each test function

Initialization   p_LS     sph                  Sch                  Ras             Gr
symmetric        0.0625   6.0e-40   3.0e-42    1.3e-03   1.3e-05    1.1e+00   28    1.5e-02   34
skewed           0.0625   1.7e-41   1.7e-41    4.1e-04   4.1e-04    4.3e+00   30    1.7e-02   28
symmetric        0.25     6.8e-57   6.3e-61    1.1e-06   3.7e-09    1.3e+00   40    1.5e-02   28
skewed           0.25     1.4e-56   1.3e-59    7.7e-06   1.0e-08    6.0e+00   24    2.2e-02   18
symmetric        1        7.4e-65   1.1e-68    6.6e-08   1.0e-10    1.4e+00   22    1.7e-02   16
skewed           1        3.8e-64   1.1e-68    3.8e-03   2.5e-10    1.3e+01   10    2.4e-02   26

Table 7: Results for the study of the search bias of PBX-α

We have analysed these results by means of a t-test and we have observed that no significant impact on the final solution accuracy is observed for all but one objective function, namely Ras, where the symmetric initialisation allows the best results to be reached for all the p_LS values considered. Thus, in general, these additional experiments clarify that PBX-α does not cause a search bias towards the origin of the coordinate system in the case of domains of variables which are symmetric around zero.

For the Ras function, the skewed initialisation adds new challenges to the RCMA instances because they must overcome a number of local minima to reach the global basin. This feature causes a certain percentage of runs to stagnate in local optima, deteriorating the results. This does not occur with the symmetric initialisation, since this method helps the algorithms to locate the global basin more easily.

7 Conclusions

This paper presented an RCMA model that applies an XHC to the solutions being generated by the genetic operators. The XHC attempts to obtain the best possible accuracy

levels by iteratively promoting incest, whereas the RCMA incorporates mechanisms aimed at inducing reliability in the search process. In addition, an adaptive mechanism was employed that determines the probability with which solutions are refined with XHC. The principal conclusions derived from the results of the experiments carried out are the following:

- The proposal improves the performance of other RCMA approaches which have appeared in the MA literature (on the test suite considered in this paper). This success is partly possible thanks to the combination of the exploration properties of the negative assortative mating strategy and the refinement of solutions carried out by XHC.

- The adaptive LS mechanism tunes the ratio to produce robust operation for test functions with different characteristics.
- The self-adaptive behavior of XHC works adequately in many cases; nevertheless, some difficulties appeared on some of the more complex problems.

In essence, RCMAs based on crossover-based local searchers are very promising and indeed worth further

study. We are currently extending our investigation to different test-suites and real-world problems. Also, we intend to:

- Test alternative XHC designs, which may be based on different replacement strategies (the replacement step in Figure 1) or other real-parameter crossover operators (e.g., the multi-parent crossovers in Section 2).

- Incorporate the adaptive LS mechanism in other types of RCMAs and benchmark their performance.

- Build RCMAs that apply different types of LS procedures along with XHC.

- Adapt the n_off and n_it parameters during the run employing a mechanism similar to the one described in Krasnogor and Smith (2001).

- Investigate the so-called "crossover-aware" local search and "mutation-aware" local search (Krasnogor, 2002).

- Study the sensitivity of our RCMA to the BGA mutation operator and to the α and n_ass parameters.

Acknowledgments

This research has been supported by the project TIC2002-04036-C05-01.

Appendix A. Test Suite

The test suite that we have used for the experiments consists of six test functions and three real-world problems. They are described in Subsections A.1 and A.2, respectively.

A.1 Test Functions

We have considered six frequently used test functions: Sphere model (sph) (De Jong, 1975;

Schwefel, 1981), Generalized Rosenbrock's function (Ros) (De Jong, 1975), Schwefel's Problem 1.2 (Sch) (Schwefel, 1981), Generalized Rastrigin's function (Ras) (Bäck, 1992; Törn and Žilinskas, 1989), and Griewangk's function (Gr) (Griewangk, 1981). Figure 5 shows their formulation. The dimension of the search space is 25. sph is a continuous, strictly convex, and unimodal function. Ros is a continuous and unimodal function, with the optimum located in a steep parabolic valley with a flat bottom. This feature will probably cause slow progress in many algorithms, since they must continually change their search direction to reach the optimum. This function has been considered by some authors to be a real challenge for any continuous function optimization program (Schlierkamp-Voosen and Mühlenbein, 1994). A great part of its difficulty lies in the fact that there are nonlinear interactions between the variables, i.e., it is nonseparable (Whitley, Rana, Dzubera and Mathias, 1996).
f_sph(x) = \sum_{i=1}^{n} x_i^2,                                    x*_sph = (0, ..., 0),   f_sph(x*_sph) = 0
f_Ros(x) = \sum_{i=1}^{n-1} [ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 ],   x*_Ros = (1, ..., 1),   f_Ros(x*_Ros) = 0
f_Sch(x) = \sum_{i=1}^{n} ( \sum_{j=1}^{i} x_j )^2,                 x*_Sch = (0, ..., 0),   f_Sch(x*_Sch) = 0
f_Ras(x) = \sum_{i=1}^{n} [ x_i^2 - 10 \cos(2 \pi x_i) + 10 ],      x*_Ras = (0, ..., 0),   f_Ras(x*_Ras) = 0
f_Gr(x) = 1 + \sum_{i=1}^{n} x_i^2 / 4000 - \prod_{i=1}^{n} \cos( x_i / \sqrt{i} ),   x*_Gr = (0, ..., 0),   f_Gr(x*_Gr) = 0

Figure 5: Test functions

Sch is a continuous and unimodal function. Its difficulty concerns the fact that searching along the coordinate axes only gives a poor rate of convergence, since the gradient of Sch is not oriented along the axes. It presents similar difficulties to Ros, but its valley is much narrower. Ras is a scalable, continuous, and multimodal function, which is made from sph by modulating it with a \cdot \cos(\omega x_i). Gr is a continuous and multimodal function. This function is difficult to optimize because it is non-separable and the search algorithm has to climb a hill to reach the next valley. Nevertheless, one undesirable property exhibited is that it becomes easier as the dimensionality is increased (Whitley, Rana, Dzubera and Mathias, 1996).
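For readers who want to experiment with these benchmarks, the sketch below implements the five formulations of Figure 5 in Python with NumPy, following the standard definitions reconstructed above; the 25-dimensional setting matches the one used in the paper.

    import numpy as np

    def f_sph(x):
        """Sphere model."""
        return np.sum(x ** 2)

    def f_ros(x):
        """Generalized Rosenbrock's function."""
        return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

    def f_sch(x):
        """Schwefel's Problem 1.2."""
        return np.sum(np.cumsum(x) ** 2)

    def f_ras(x):
        """Generalized Rastrigin's function."""
        return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

    def f_gri(x):
        """Griewangk's function."""
        i = np.arange(1, x.size + 1)
        return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

    # All five functions evaluated at an arbitrary 25-dimensional point:
    x = np.random.uniform(-1.0, 1.0, size=25)
    print([f(x) for f in (f_sph, f_ros, f_sch, f_ras, f_gri)])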

A GA does not need too much diversity to reach the global optimum of sph, since there is only one optimum which can be easily accessed. On the other hand, for the multimodal functions (Ras and Gr), diversity is fundamental for finding a way towards the global optimum. Also, in the case of Ros and Sch, diversity can help to find solutions close to the parabolic valley and so avoid slow progress.

A.2 Real-World Problems

We have chosen the following three real-world problems, which, in order to be solved, are translated to optimization problems of parameters with variables on continuous domains: Systems of Linear Equations (Eshelman, Mathias and Schaffer, 1997), Frequency Modulation Sounds Parameter Identification Problem (Tsutsui and Fujimoto, 1993), and Polynomial Fitting Problem (Storn and Price, 1995). They are described below.

Systems of Linear Equations. The problem may be stated as solving for the elements of a vector x, given the matrix A and the vector b in the expression A x = b. The evaluation function used for these experiments is:

f_sle(x_1, ..., x_n) = \sum_{i=1}^{n} | ( \sum_{j=1}^{n} a_{ij} x_j ) - b_i |
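A direct implementation of this evaluation function is a one-liner; the instance data are simply inputs. In the usage example below, the matrix A and vector b are arbitrary placeholders, since the specific 10 x 10 instance used in the paper is not reproduced here.

    import numpy as np

    def f_sle(x, A, b):
        """Systems-of-linear-equations objective: sum of absolute residuals of A x = b."""
        return np.sum(np.abs(A @ x - b))

    # Usage with a hypothetical 10-parameter instance:
    rng = np.random.default_rng(0)
    A = rng.integers(-10, 10, size=(10, 10)).astype(float)
    x_true = rng.uniform(-1.0, 1.0, size=10)
    b = A @ x_true
    print(f_sle(x_true, A, b))  # 0.0 at the exact solution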
Clearly, the best value for this objective function is f_sle(x*) = 0. Inter-parameter linkage (i.e., nonlinearity) is easily controlled in systems of linear equations, their nonlinearity does not deteriorate as increasing numbers of parameters are used, and they have proven to be quite difficult. We have considered a 10-parameter problem instance with a particular 10 x 10 matrix A and vector b.

Frequency Modulation Sounds Parameter Identification Problem. The problem is to specify six parameters a_1, \omega_1, a_2, \omega_2, a_3, \omega_3 of the frequency modulation sound model represented by

y(t) = a_1 \sin( \omega_1 t \theta + a_2 \sin( \omega_2 t \theta + a_3 \sin( \omega_3 t \theta ) ) ), with \theta = 2\pi/100.

The fitness function is defined as the summation of square errors between the evolved data and the model data as follows:

f_fms(a_1, \omega_1, a_2, \omega_2, a_3, \omega_3) = \sum_{t=0}^{100} ( y(t) - y_0(t) )^2,

where the model data are given by the following equation:

y_0(t) = 1.0 \sin( 5.0 t \theta - 1.5 \sin( 4.8 t \theta + 2.0 \sin( 4.9 t \theta ) ) ).

Each parameter is in the range -6.4 to 6.35. This problem is a highly complex multimodal one having strong epistasis, with minimum value f_fms(x*) = 0.
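The sketch below evaluates this objective. Note that the target waveform y_0(t) is built from the parameter values of the standard benchmark formulation (1.0, 5.0, -1.5, 4.8, 2.0, 4.9); since the printed equation is only partially legible, treat these values as an assumption.

    import numpy as np

    THETA = 2.0 * np.pi / 100.0
    T = np.arange(101)  # t = 0, ..., 100

    def fm_wave(params, t=T, theta=THETA):
        """Six-parameter frequency-modulated waveform y(t)."""
        a1, w1, a2, w2, a3, w3 = params
        return a1 * np.sin(w1 * t * theta + a2 * np.sin(w2 * t * theta + a3 * np.sin(w3 * t * theta)))

    # Target waveform of the benchmark (assumed standard parameter values).
    Y0 = fm_wave((1.0, 5.0, -1.5, 4.8, 2.0, 4.9))

    def f_fms(params):
        """Sum of squared errors between the candidate waveform and the target."""
        return np.sum((fm_wave(params) - Y0) ** 2)

    print(f_fms((1.0, 5.0, -1.5, 4.8, 2.0, 4.9)))  # 0.0 at the target parameters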

Polynomial Fitting Problem. This problem lies in finding the coefficients c_j of the following polynomial in z:

P(z) = \sum_{j=0}^{k} c_j z^j, with k an integer,

such that P(z) \in [-1, 1] for z \in [-1, 1], P(+1.2) \geq T_k(+1.2), and P(-1.2) \geq T_k(-1.2), where T_k(z) is a Chebychev polynomial of degree k. The solution to the polynomial fitting problem consists of the coefficients of T_k(z). This polynomial oscillates between -1 and 1 when its argument z is between -1 and 1. Outside this region the polynomial rises steeply in the direction of high positive ordinate values.

This problem has its roots in electronic filter design and challenges an optimization procedure by forcing it to find parameter values with grossly different magnitudes, something very common in technical systems. The Chebychev polynomial employed here is:

T_8(z) = 1 - 32 z^2 + 160 z^4 - 256 z^6 + 128 z^8.

So, it is a nine-parameter problem. The pseudocode algorithm shown below was used in order to transform the constraints of this problem into an objective function to be minimized, called f_Chev. We consider that C is the solution to be evaluated and P_C(z) = \sum_{j=0}^{8} c_j z^j:

    f_Chev <- 0
    Choose z_0, ..., z_100 from [-1, 1]
    For i = 0, ..., 100 do
        If (P_C(z_i) > 1 or P_C(z_i) < -1) then f_Chev <- f_Chev + (1 - P_C(z_i))^2
    If (P_C(+1.2) - T_8(+1.2) < 0) then f_Chev <- f_Chev + (P_C(+1.2) - T_8(+1.2))^2
    If (P_C(-1.2) - T_8(-1.2) < 0) then f_Chev <- f_Chev + (P_C(-1.2) - T_8(-1.2))^2
    Return f_Chev
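A Python transcription of this penalty scheme follows. The 101 sample points z_i are taken uniformly spaced in [-1, 1], which is an assumption, since the pseudocode only says they are chosen from that interval.

    import numpy as np

    Z = np.linspace(-1.0, 1.0, 101)  # assumed uniform spacing of z_0, ..., z_100

    def t8(z):
        """Chebychev polynomial of degree 8."""
        return 1.0 - 32.0 * z**2 + 160.0 * z**4 - 256.0 * z**6 + 128.0 * z**8

    def f_chev(c):
        """Penalty objective for the 9-coefficient Chebychev polynomial fitting problem."""
        p = np.polynomial.polynomial.Polynomial(c)  # P_C(z) = sum_j c_j z^j
        r = 0.0
        for z in Z:
            v = p(z)
            if v > 1.0 or v < -1.0:
                r += (1.0 - v) ** 2
        for z in (1.2, -1.2):
            d = p(z) - t8(z)
            if d < 0.0:
                r += d ** 2
        return r

    c_opt = [1.0, 0.0, -32.0, 0.0, 160.0, 0.0, -256.0, 0.0, 128.0]  # coefficients of T_8
    print(f_chev(c_opt))  # 0.0 at the optimum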

Each parameter (coefficient) is in the range -512 to 512. The objective function value of the optimum is f_Chev(C*) = 0.

References

Angeline, P.J. (1998). Using selection to improve particle swarm optimization. In Proc. of the International Congress on Evolutionary Computation 1998, pages 84–89.

Bäck, T. (1992). Self-adaptation in genetic algorithms. In Varela, F.J. and Bourgine, P., editors, Proc. of the First European Conference on Artificial Life, pages 263–271, MIT Press, Cambridge, MA.

Bäck, T., Schütz, M. and Khuri, S. (1996). Evolution strategies: an alternative evolution computation method. In Alliot, J.-M., Lutton, E., Ronald, E., Schoenauer, M. and Snyers, D., editors, Artificial Evolution, pages 3–20, Springer, Berlin.

Beyer, H.-G. and Deb, K. (2001). On self-adaptive features in real-parameter evolutionary algorithms. IEEE Transactions on Evolutionary Computation 5(3): 250–270.

Craighurst, R. and Martin, W. (1995). Enhancing GA performance through crossover prohibitions based on ancestry. In Eshelman, L.J., editor, Proc. 6th Int. Conf. Genetic Algorithms, pages 130–137, Morgan Kaufmann, San Mateo, California.

Davis, L. (1991). Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York.

Deb, K. (2001). Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons, New York.

Deb, K. and Beyer, H.-G. (2001). Self-adaptive genetic algorithms with simulated binary crossover. Evolutionary Computation Journal 9(2): 195–219.

Deb, K., Anand, A. and Joshi, D. (2002). A computationally efficient evolutionary algorithm for real-parameter optimization. Evolutionary Computation Journal 10(4): 371–395.

De Jong, K.A. (1975). An Analysis of the Behavior of a Class of Genetic Adaptive Systems. Doctoral dissertation, University of Michigan, Ann Arbor. Dissertation Abstracts International, 36(10), 5140B (University Microfilms No. 76-9381).

De Jong, K.A. and Spears, W.M. (1992). A formal analysis of the role of multi-point crossover in genetic algorithms. Annals of Mathematics and Artificial Intelligence 5(1): 1–26.
Dietzfelbinger, M., Naudts, B., Van Hoyweghen, C. and Wegener, I. (2003). The Analysis of a Recombinative Hill-Climber on H-IFF. IEEE Transactions on Evolutionary Computation 7(5): 417–423.

Eiben, A.E. and Bäck, T. (1997). Empirical investigation of multiparent recombination operators in evolution strategies. Evolutionary Computation 5(3): 347–365.

Eshelman, L.J. (1991). The CHC adaptive search algorithm: how to have safe search when engaging in nontraditional genetic recombination. In Rawlins, G.J.E., editor, Foundations of Genetic Algorithms, pages 265–283, Morgan Kaufmann, San Mateo, California.

Eshelman, L.J. and Schaffer, J.D. (1991). Preventing premature convergence in genetic algorithms by preventing incest. In Belew, R. and Booker, L.B., editors, Proc. of the Fourth Int. Conf. on Genetic Algorithms, pages 115–122, Morgan Kaufmann, San Mateo, California.

Eshelman, L.J. and Schaffer, J.D. (1993). Real-coded genetic algorithms and interval-schemata. In Whitley, L.D., editor, Foundations of Genetic Algorithms, pages 187–202, Morgan Kaufmann, San Mateo, California.

Eshelman, L.J., Mathias, K.E. and Schaffer, J.D. (1997). Convergence controlled variation. In Belew, R. and Vose, M., editors, Foundations of Genetic Algorithms, pages 203–224, Morgan Kaufmann, San Mateo, California.

Espinoza, F.P., Minsker, B.S. and Goldberg, D.E. (2001). A self adaptive hybrid genetic algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference 2001, page 759, Morgan Kaufmann, San Mateo, California.

Fernandes, C. and Rosa, A. (2001). A study on non-random mating and varying population size in genetic algorithms using a royal road function. In Proc. of the 2001 Congress on Evolutionary Computation, pages 60–66, IEEE Press, Piscataway, New Jersey.

Fogel, D.B. and Beyer, H.-G. (1995). A note on the empirical evaluation of intermediate recombination. Evolutionary Computation 3(4): 491–495.

Gehlhaar, D. and Fogel, D. (1996). Tuning evolutionary programming for conformationally flexible molecular docking. In Fogel, L., Angeline, P. and Bäck, T., editors, Proc. of the Fifth Annual Conference on Evolutionary Programming, pages 419–429, MIT Press, Cambridge, MA.

Goldberg, D.E. and Deb, K. (1991). A comparative analysis of selection schemes used in genetic algorithms. In Rawlins, G.J.E., editor, Foundations of Genetic Algorithms, pages 69–93, Morgan Kaufmann, San Mateo, California.

Goldberg, D.E. and Wang, L. (1997). Adaptive niching via coevolutionary sharing. In Quagliarella et al., editors, Genetic Algorithms in Engineering and Computer Science, pages 21–38, John Wiley and Sons, New York.

Goldberg, D.E. and Voessner, S. (1999). Optimizing global-local search hybrids. In Banzhaf, W. et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference '99, pages 220–228, Morgan Kaufmann, San Mateo, California.

Griewangk, A.O. (1981). Generalized descent for global optimization. Journal of Optimization Theory and Applications 34: 11–39.

Hart, W.E. (1994). Adaptive Global Optimization with Local Search. PhD Thesis, University of California, San Diego, California.

Hart, W.E., Rosin, C.R., Belew, R.K. and Morris, G.M. (2000). Improved evolutionary hybrids for flexible ligand docking in autodock. In Proc. Intl. Conf. on Optimization in Computational Chem. and Molec. Bio., pages 209–230.

Hart, W.E. (2001a). A convergence analysis of unconstrained and bound constrained evolutionary pattern search. Evolutionary Computation 9(1): 1–23.
Hart, W.E. (2001b). Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization. IEEE Transactions on Evolutionary Computation 5(4): 388–397.

Herrera, F., Lozano, M. and Verdegay, J.L. (1996). Dynamic and Heuristic Fuzzy Connectives-Based Crossover Operators for Controlling the Diversity and Convergence of Real-Coded Genetic Algorithms. Int. Journal of Intelligent Systems 11: 1013–1041.

Herrera, F., Lozano, M. and Verdegay, J.L. (1998). Tackling real-coded genetic algorithms: operators and tools for the behavioral analysis. Artificial Intelligence Reviews 12(4): 265–319.

Herrera, F. and Lozano, M. (2000a). Two-loop real-coded genetic algorithms with adaptive control of mutation step sizes. Applied Intelligence 13: 187–204.

Herrera, F. and Lozano, M. (2000b). Gradual distributed real-coded genetic algorithms. IEEE Transactions on Evolutionary Computation 4(1): 43–63.

Herrera, F., Lozano, M., Pérez, E., Sánchez, A.M. and Villar, P. (2002). Multiple crossover per couple with selection of the two best offspring: an experimental study with the BLX-α crossover operator for real-coded genetic algorithms. In Garijo, F.J., Riquelme, J.C. and Toro, M., editors, IBERAMIA 2002, Lecture Notes in Artificial Intelligence 2527, pages 392–401, Springer-Verlag, Berlin.

Herrera, F., Lozano, M. and Sánchez, A.M. (2003). A taxonomy for the crossover operator for real-coded genetic algorithms. An Experimental Study. International Journal of Intelligent Systems 18(3): 309–338.

Houck, C.R., Joines, J.A., Kay, M.G. and Wilson, J.R. (1997). Empirical investigation of the benefits of partial lamarckianism. Evolutionary Computation Journal 5(1): 31–60.

Huang, C.-F. (2001). An analysis of mate selection in genetic algorithms. In Proc. of the Genetic and Evolutionary Computation Conference (GECCO-2001), page 766, Morgan Kaufmann, San Mateo, California.

Joines, J.A. and Kay, M.G. (2002). Hybrid genetic algorithms and random linkage. In Proceedings of the 2002 Congress on Evolutionary Computation, pages 1733–1738, IEEE Press, Piscataway, New Jersey.

Jones, T. (1995). Crossover, macromutation, and population-based search. In Eshelman, L., editor, Proc. of the Sixth Int. Conf. on Genetic Algorithms, pages 73–80, Morgan Kaufmann, San Francisco, California.

Kazarlis, S.A., Papadakis, S.E., Theocharis, J.B. and Petridis, V. (2001). Microgenetic algorithms as generalized hill-climbing operators for GA Optimization. IEEE Transactions on Evolutionary Computation 5(3): 204–217.

Kemenade, C.H.M. (1996). Cluster evolution strategies, enhancing the sampling density function using representatives. In Proceedings of the Third IEEE Conference on Evolutionary Computation, pages 637–642, IEEE Press, Piscataway, New Jersey.

Kita, H., Ono, I. and Kobayashi, S. (1999). Multi-parental extension of the unimodal normal distribution crossover for real-coded genetic algorithms. In Proc. of the International Conference on Evolutionary Computation '99, pages 646–651, IEEE Press, Piscataway, New Jersey.

Kita, H. (2001). A comparison study of self-adaptation in evolution strategies and real-coded genetic algorithms. Evolutionary Computation Journal 9(2): 223–241.

Krasnogor, N. (2002). Studies on the Theory and Design Space of Memetic Algorithms. PhD Thesis, University of the West of England, Bristol, United Kingdom.

Krasnogor, N. and Smith, J.E. (2000). A Memetic Algorithm with Self-Adapting Local Search: TSP as a case study. In Proceedings of the 2000 International Conference on Genetic and Evolutionary Computation, pages 987–994, Morgan Kaufmann, San Mateo, California.
Krasnogor, N. and Smith, J.E. (2001). Emergence of Profitable Search Strategies Based on a Simple Inheritance Mechanism. In Proceedings of the 2001 International Conference on Genetic and Evolutionary Computation, pages 432–439, Morgan Kaufmann, San Mateo, California.

Land, M.W.S. (1998). Evolutionary Algorithms with Local Search for Combinatorial Optimisation. PhD Thesis, University of California, San Diego.

Magyar, G., Johnsson, M. and Nevalainen, O. (2000). An adaptive hybrid genetic algorithm for the three-matching problem. IEEE Transactions on Evolutionary Computation 4(2): 135–146.

Merz, P. (2000). Memetic Algorithms for Combinatorial Optimization Problems: Fitness Landscapes and Effective Search Strategies. PhD Thesis, University of Siegen, Germany.

Moscato, P.A. (1989). On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms. Technical Report, Caltech Concurrent Computation Program Report 826, Caltech, Pasadena, California.

Moscato, P.A. (1999). Memetic algorithms: a short introduction. In Corne, D., Dorigo, M. and Glover, F., editors, New Ideas in Optimization, pages 219–234, McGraw-Hill, London.

Mühlenbein, H., Schomisch, M. and Born, J. (1991). The parallel genetic algorithm as function optimizer. In Belew, R. and Booker, L.B., editors, Fourth Int. Conf. on Genetic Algorithms, pages 271–278, Morgan Kaufmann, San Mateo, California.

Mühlenbein, H. and Schlierkamp-Voosen, D. (1993). Predictive models for the breeder genetic algorithm I. Continuous parameter optimization. Evolutionary Computation 1: 25–49.

Nagata, Y. and Kobayashi, S. (1997). Edge assembly crossover: a high-power genetic algorithm for the traveling salesman problem. In Bäck, T., editor, Proc. of the Seventh Int. Conf. on Genetic Algorithms, pages 450–457, Morgan Kaufmann, San Mateo, California.

O'Reilly, U.-M. and Oppacher, F. (1995). Hybridized crossover-based search techniques for program discovery. In IEEE International Conference on Evolutionary Computation 1995, pages 573–578, IEEE Press, Piscataway, New Jersey.

Parthasarathy, P.V., Goldberg, D.E. and Burns, S.A. (2001). Tackling multimodal problems in hybrid genetic algorithms. In Whitley, D., editor, Proc. of the Genetic and Evolutionary Computation Conference 2001, page 775, Morgan Kaufmann, San Francisco, California.

Rechenberg, I. (1975). Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog.

Renders, J.M. and Bersini, H. (1994). Hybridizing genetic algorithms with hill-climbing methods for global optimization: two possible ways. In Proc. of the First IEEE Conference on Evolutionary Computation, pages 312–317, IEEE Press, Piscataway, New Jersey.

Renders, J.M. and Flasse, S.P. (1996). Hybrid methods using genetic algorithms for global optimization. IEEE Transactions on Systems, Man, and Cybernetics 26(2): 243–258.

Ronald, E. (1993). When selection meets seduction. In Forrest, S., editor, Proc. of the Fifth Int. Conf. on Genetic Algorithms, pages 167–173, Morgan Kaufmann, San Mateo, California.

Rosin, C.D., Halliday, R.S., Hart, W.E. and Belew, R.K. (1997). A comparison of global and local search methods in drug docking. In Bäck, T., editor, Proc. of the Seventh Int. Conf. on Genetic Algorithms, pages 221–228, Morgan Kaufmann, San Mateo, California.

Salomon, R. (1998). Evolutionary algorithms and gradient search: Similarities and differences. IEEE Transactions on Evolutionary Computation 2(2): 45–55.

Satoh, H., Yamamura, M. and Kobayashi, S. (1996). Minimal generation gap model for GAs considering both exploration and exploitation. In IIZUKA '96, pages 494–497.
Schaffer, D., Mani, M., Eshelman, L. and Mathias, K. (1999). The Effect of Incest Prevention on Genetic Drift. In Banzhaf, W. and Reeves, C., editors, Foundations of Genetic Algorithms, pages 235–243, Morgan Kaufmann, San Mateo, California.

Schlierkamp-Voosen, D. and Mühlenbein, H. (1994). Strategy adaptation by competing subpopulations. In Davidor, Y., Schwefel, H.-P. and Männer, R., editors, Parallel Problem Solving from Nature, pages 199–208, Springer-Verlag, Berlin, Germany.

Schwefel, H.-P. (1981). Numerical Optimization of Computer Models. Wiley, Chichester.

Seront, G. and Bersini, H. (2000). A new GA-local search hybrid for continuous optimization based on multi level single linkage clustering. In Whitley, D., editor, Proc. of the Genetic and Evolutionary Computation Conference 2000, pages 90–95, Morgan Kaufmann, San Francisco, California.

Shimodaira, H. (1996). A new genetic algorithm using large mutation rates and population-elitist selection (GALME). In Proc. of the International Conference on Tools with Artificial Intelligence, pages 25–32, IEEE Computer Society.

Smith, J.E. and Fogarty, T.C. (1997). Operator and parameter adaptation in genetic algorithms. Soft Computing 1(2): 81–87.

Smith, J.E. (2001). Modelling GAs with Self-Adapting Mutation Rates. In Proc. of the Genetic and Evolutionary Computation Conference 2001.

Solis, F.J. and Wets, R.J.-B. (1981). Minimization by random search techniques. Mathematics of Operations Research 6: 19–30.

Spears, W.M. (1993). Crossover or mutation? In Whitley, L.D., editor, Foundations of Genetic Algorithms, pages 221–238, Morgan Kaufmann, San Mateo, California.

Spears, W.M. (1995). Adapting crossover in evolutionary algorithms. In Proc. of the Evolutionary Programming Conference 1995, pages 367–384.

Storn, R. and Price, K. (1995). Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, International Computer Science Institute, Berkeley, California.

Törn, A. and Žilinskas, A. (1989). Global Optimization. Lecture Notes in Computer Science 350, Springer, Berlin, Germany.

Tsutsui, S. and Fujimoto, Y. (1993). Forking genetic algorithm with blocking and shrinking modes. In Forrest, S., editor, Proc. of the Fifth Int. Conf. on Genetic Algorithms, pages 206–213, Morgan Kaufmann, San Mateo, California.

Tsutsui, S., Yamamura, M. and Higuchi, T. (1999). Multi-parent recombination with simplex crossover in real-coded genetic algorithms. In Proc. of the Genetic and Evolutionary Computation Conference (GECCO-99), pages 657–664, Morgan Kaufmann, San Mateo, California.

Walters, T. (1998). Repair and brood selection in the traveling salesman problem. In Eiben, A.E., Bäck, T., Schoenauer, M. and Schwefel, H.-P., editors, Proceedings of the Fifth International Conference on Parallel Problem Solving from Nature, pages 813–822, Springer-Verlag, Berlin, Germany.

Whitley, D., Rana, S., Dzubera, J. and Mathias, K.E. (1996). Evaluating evolutionary algorithms. Artificial Intelligence 85: 245–276.

Yang, J.-M. and Kao, C.-Y. (2000). Integrating adaptive mutations and family competition into genetic algorithms as function optimiser. Soft Computing 4: 89–102.

Yoon, H.-S. and Moon, B.-R. (2002). An empirical study on the synergy of multiple crossover operators. IEEE Transactions on Evolutionary Computation 6(2): 212–223.

Zhang, C.-K. and Shao, H.-H. (2001). A hybrid strategy: real-coded genetic algorithm and chaotic search. In Proc. IEEE International Conference on Systems, Man, and Cybernetics 2001, pages 2361–2364.