<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">S. K. Tasoulis</style></author><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">D. K. Tasoulis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Density Based Projection Pursuit Clustering</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Congress on Evolutionary Computation, 2012. CEC 2012. (IEEE World Congress on Computational Intelligence)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June</style></date></pub-dates></dates><pub-location><style face="normal" font="default" size="100%">Brisbane, Australia</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Clustering of high-dimensional data is a very important task in Data Mining. In dealing with such data, we typically need to use methods such as Principal Component Analysis and Projection Pursuit to find interesting lower-dimensional directions onto which to project the data and hence reduce their dimensionality to a manageable size. In this work, we propose a new criterion of direction interestingness, which incorporates information from the density of the projected data. Subsequently, we utilize the Differential Evolution algorithm to perform optimization over the space of the projections and hence construct a new hierarchical clustering algorithmic scheme.
The new algorithm shows promising performance on a range of real and simulated datasets.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Evolving cognitive and social experience in Particle Swarm Optimization through Differential Evolution: A hybrid approach</style></title><secondary-title><style face="normal" font="default" size="100%">Information Sciences</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year></dates><volume><style face="normal" font="default" size="100%">216</style></volume><pages><style face="normal" font="default" size="100%">50-92</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In recent years, Particle Swarm Optimization has rapidly gained popularity, and many variants and hybrid approaches have been proposed to improve it. In this paper, motivated by the behavior and the spatial characteristics of the social and cognitive experience of each particle in the swarm, we develop a hybrid framework that combines the Particle Swarm Optimization and the Differential Evolution algorithm. Particle Swarm Optimization has the tendency to distribute the best personal positions of the swarm particles in the vicinity of the problem’s optima.
In an attempt to efficiently guide the evolution and enhance convergence, we evolve the personal experience, or memory, of the particles with the Differential Evolution algorithm, without destroying the search capabilities of the algorithm. The proposed framework can be applied to any Particle Swarm Optimization algorithm with minimal effort. To evaluate the performance and highlight the different aspects of the proposed framework, we initially incorporate six classic Differential Evolution mutation strategies into the canonical Particle Swarm Optimization, and afterwards employ five state-of-the-art Particle Swarm Optimization variants and four popular Differential Evolution algorithms. Extensive experimental results on 25 high-dimensional multimodal benchmark functions, along with the corresponding statistical analysis, suggest that the hybrid variants are very promising and significantly improve the original algorithms in the majority of the studied cases.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Multimodal Optimization Using Niching Differential Evolution with Index-based Neighborhoods</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Congress on Evolutionary Computation, 2012. CEC 2012.
(IEEE World Congress on Computational Intelligence)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June</style></date></pub-dates></dates><pub-location><style face="normal" font="default" size="100%">Brisbane, Australia</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">A new family of Differential Evolution mutation strategies (DE/nrand) that is able to handle multimodal functions has been recently proposed. The DE/nrand family incorporates information regarding the real nearest neighborhood of each potential solution, which helps them accurately locate and maintain many global optimizers simultaneously, without the need for additional parameters. However, these strategies have an increased computational cost. To alleviate this problem, instead of computing the real nearest neighbor, we incorporate an index-based neighborhood into the mutation strategies. The new mutation strategies are evaluated on eight well-known and widely used multimodal problems and their performance is compared against five state-of-the-art algorithms. Simulation results suggest that the proposed strategies are promising and exhibit competitive behavior, since, at a substantially lower computational cost, they are able to locate and maintain many global optima throughout the evolution process.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">D. K. Tasoulis</style></author><author><style face="normal" font="default" size="100%">N. G. Pavlidis</style></author><author><style face="normal" font="default" size="100%">V. 
P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Ilias Maglogiannis</style></author><author><style face="normal" font="default" size="100%">Vassilis P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">Ioannis Vlahavas</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Tracking Differential Evolution Algorithms: An Adaptive Approach through Multinomial Distribution Tracking with Exponential Forgetting</style></title><secondary-title><style face="normal" font="default" size="100%">Artificial Intelligence: Theories and Applications</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year></dates><publisher><style face="normal" font="default" size="100%">Springer Berlin / Heidelberg</style></publisher><volume><style face="normal" font="default" size="100%">7297</style></volume><pages><style face="normal" font="default" size="100%">214-222</style></pages><isbn><style face="normal" font="default" size="100%">978-3-642-30447-7</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Several Differential Evolution variants with modified search dynamics have been recently proposed, to improve the performance of the method. This work borrows ideas from adaptive filter theory to develop an “online” algorithmic adaptation framework. The proposed framework is based on tracking the parameters of a multinomial distribution to reflect changes in the evolutionary process. 
As such, we design a multinomial distribution tracker to capture the successful evolution movements of three Differential Evolution algorithms, in an attempt to aggregate their characteristics and their search dynamics. Experimental results on ten benchmark functions and comparisons with five state-of-the-art algorithms indicate that the proposed framework is competitive and very promising.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">D. K. Tasoulis</style></author><author><style face="normal" font="default" size="100%">N. G. Pavlidis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Tracking Particle Swarm Optimizers: An adaptive approach through multinomial distribution tracking with exponential forgetting</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Congress on Evolutionary Computation, 2012. CEC 2012. 
(IEEE World Congress on Computational Intelligence)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June</style></date></pub-dates></dates><pub-location><style face="normal" font="default" size="100%">Brisbane, Australia</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">An active research direction in Particle Swarm Optimization (PSO) is the integration of PSO variants in adaptive, or self-adaptive schemes, in an attempt to aggregate their characteristics and their search dynamics. In this work we borrow ideas from adaptive filter theory to develop an “online” algorithm adaptation framework. The proposed framework is based on tracking the parameters of a multinomial distribution to capture changes in the evolutionary process. As such, we design a multinomial distribution tracker to capture the successful evolution movements of three PSO variants. Extensive experimental results on ten benchmark functions and comparisons with five state-of-the-art algorithms indicate that the proposed framework is competitive and very promising. On the majority of tested cases, the proposed framework achieves substantial performance gain, while it seems to identify accurately the most appropriate algorithm for the problem at hand.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">D. K. Tasoulis</style></author><author><style face="normal" font="default" size="100%">N. G. Pavlidis</style></author><author><style face="normal" font="default" size="100%">V. P. 
Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Enhancing Differential Evolution Utilizing Proximity-based Mutation Operators</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Transactions on Evolutionary Computation</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2011</style></year></dates><volume><style face="normal" font="default" size="100%">15</style></volume><pages><style face="normal" font="default" size="100%">99-119</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Differential evolution is a very popular optimization algorithm and considerable research has been devoted to the development of efficient search operators. Motivated by the different manner in which various search operators behave, we propose a novel framework based on the proximity characteristics among the individual solutions as they evolve. Our framework incorporates information of neighboring individuals, in an attempt to efficiently guide the evolution of the population toward the global optimum, without sacrificing the search capabilities of the algorithm. More specifically, the random selection of parents during mutation is modified, by assigning to each individual a probability of selection that is inversely proportional to its distance from the mutated individual. The proposed framework can be applied to any mutation strategy with minimal changes. In this paper, we incorporate this framework in the original differential evolution algorithm, as well as other recently proposed differential evolution variants. 
Through an extensive experimental study, we show that the proposed framework results in enhanced performance for the majority of the benchmark problems studied.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Finding Multiple Global Optima Exploiting Differential Evolution’s Niching Capability</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Symposium on Differential Evolution, 2011. SDE 2011. (IEEE Symposium Series on Computational Intelligence)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default" size="100%">April</style></date></pub-dates></dates><pub-location><style face="normal" font="default" size="100%">Paris, France</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Handling multimodal functions is a very important and challenging task in the evolutionary computation community, since many real-world applications exhibit highly multimodal landscapes. Motivated by the dynamics and the proximity characteristics of Differential Evolution's mutation strategies, which tend to distribute the individuals of the population to the vicinity of the problem's minima, we introduce two new Differential Evolution mutation strategies.
The new mutation strategies incorporate spatial information about the neighborhood of each potential solution and exhibit a niching formation, without requiring any additional parameters. Experimental results on eight well-known multimodal functions and comparisons with some state-of-the-art algorithms indicate that the proposed mutation strategies are competitive and very promising, since they are able to reliably locate and maintain many global optima throughout the evolution process.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Evolving cognitive and social experience in Particle Swarm Optimization through Differential Evolution</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Congress on Evolutionary Computation, 2010. CEC 2010.
(IEEE World Congress on Computational Intelligence)</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">cognitive experience</style></keyword><keyword><style  face="normal" font="default" size="100%">convergence</style></keyword><keyword><style  face="normal" font="default" size="100%">differential evolution</style></keyword><keyword><style  face="normal" font="default" size="100%">evolutionary computation</style></keyword><keyword><style  face="normal" font="default" size="100%">particle swarm optimisation</style></keyword><keyword><style  face="normal" font="default" size="100%">particle swarm optimization</style></keyword><keyword><style  face="normal" font="default" size="100%">social experience</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2010</style></year><pub-dates><date><style  face="normal" font="default" size="100%">July</style></date></pub-dates></dates><pub-location><style face="normal" font="default" size="100%">Barcelona, Spain</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In recent years, Particle Swarm Optimization has rapidly gained popularity, and many variants and hybrid approaches have been proposed to improve it. Motivated by the behavior and the proximity characteristics of the social and cognitive experience of each particle in the swarm, we develop a hybrid approach that combines the Particle Swarm Optimization and the Differential Evolution algorithm. Particle Swarm Optimization has the tendency to distribute the best personal positions of the swarm in the vicinity of the problem’s optima. In an attempt to efficiently guide the evolution and enhance convergence, we evolve the personal experience of the swarm with the Differential Evolution algorithm.
Extensive experimental results on twelve high-dimensional multimodal benchmark functions indicate that the hybrid variants are very promising and improve the original algorithm.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Hardware-Friendly Higher-Order Neural Network Training Using Distributed Evolutionary Algorithms</style></title><secondary-title><style face="normal" font="default" size="100%">Applied Soft Computing</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Higher-Order Neural Networks</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2010</style></year></dates><volume><style face="normal" font="default" size="100%">10</style></volume><pages><style face="normal" font="default" size="100%">398-408</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this paper, we study the class of Higher-Order Neural Networks and especially the Pi-Sigma Networks. The performance of Pi-Sigma Networks is evaluated through several well-known neural network training benchmarks. In the experiments reported here, Distributed Evolutionary Algorithms are implemented for Pi-Sigma neural network training. More specifically, the distributed versions of the Differential Evolution and the Particle Swarm Optimization algorithms have been employed. To this end, each processor is assigned a subpopulation of potential solutions.
The subpopulations are independently evolved in parallel, and occasional migration is employed to allow cooperation between them. The proposed approach is applied to train Pi-Sigma Networks using threshold activation functions. Moreover, the weights and biases were confined to a narrow band of integers, constrained in the range [-32,32]. Thus, the trained Pi-Sigma neural networks can be represented using 6 bits per weight. Such networks are better suited than real-weight ones for hardware implementation and, to some extent, are immune to low-amplitude noise that possibly contaminates the training data. Experimental results suggest that the proposed training process is fast, stable, and reliable, and the trained Pi-Sigma Networks exhibited good generalization capabilities.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Evolutionary Adaptation of the Differential Evolution Control Parameters</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Congress on Evolutionary Computation, 2009.
CEC 2009</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">adaptive control</style></keyword><keyword><style  face="normal" font="default" size="100%">differential evolution control parameter</style></keyword><keyword><style  face="normal" font="default" size="100%">evolutionary adaptation</style></keyword><keyword><style  face="normal" font="default" size="100%">evolutionary computation</style></keyword><keyword><style  face="normal" font="default" size="100%">optimisation</style></keyword><keyword><style  face="normal" font="default" size="100%">optimization</style></keyword><keyword><style  face="normal" font="default" size="100%">self-adaptive differential evolution algorithm</style></keyword><keyword><style  face="normal" font="default" size="100%">self-adjusting systems</style></keyword><keyword><style  face="normal" font="default" size="100%">user-defined parameter tuning</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2009</style></year><pub-dates><date><style  face="normal" font="default" size="100%">May</style></date></pub-dates></dates><pub-location><style face="normal" font="default" size="100%">Trondheim, Norway</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This paper proposes a novel self-adaptive scheme for the evolution of crucial control parameters in evolutionary algorithms. More specifically, we propose utilizing the differential evolution algorithm to endemically evolve its own control parameters. To achieve this, two simultaneous instances of Differential Evolution are used, one of which is responsible for the evolution of the crucial user-defined mutation and recombination constants.
This self-adaptive differential evolution algorithm alleviates the need to tune these user-defined parameters while maintaining the convergence properties of the original algorithm. The evolutionary self-adaptive scheme is evaluated through several well-known optimization benchmark functions, and the experimental results indicate that the proposed approach is promising.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Ming Zhang</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Evolutionary Algorithm Training of Higher-Order Neural Networks</style></title><secondary-title><style face="normal" font="default" size="100%">Artificial Higher Order Neural Networks for Computer Science and Engineering: Trends for Emerging Applications</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2009</style></year></dates><publisher><style face="normal" font="default" size="100%">IGI Global</style></publisher><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. 
Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Balancing the exploration and exploitation capabilities of the Differential Evolution Algorithm</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Congress on Evolutionary Computation, 2008. CEC 2008. (IEEE World Congress on Computational Intelligence)</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">differential evolution algorithm</style></keyword><keyword><style  face="normal" font="default" size="100%">evolutionary computation</style></keyword><keyword><style  face="normal" font="default" size="100%">optimization</style></keyword><keyword><style  face="normal" font="default" size="100%">search problems</style></keyword><keyword><style  face="normal" font="default" size="100%">self-balancing hybrid mutation operator</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June</style></date></pub-dates></dates><pub-location><style face="normal" font="default" size="100%">Hong Kong</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The hybridization and composition of different Evolutionary Algorithms to improve the quality of the solutions and to accelerate execution is a common research practice. In this paper, we propose a hybrid approach that combines differential evolution mutation operators in an attempt to balance their exploration and exploitation capabilities. Additionally, a self-balancing hybrid mutation operator is presented, which favors exploration of the search space during the first phase of the optimization and later opts for exploitation to aid convergence to the optimum.
Extensive experimental results indicate that the proposed approaches effectively enhance DE's ability to accurately locate solutions in the search space.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Non-Monotone Differential Evolution</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, GECCO 2008</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2008</style></year></dates><publisher><style face="normal" font="default" size="100%">ACM</style></publisher><pub-location><style face="normal" font="default" size="100%">New York, NY, USA</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-60558-130-9</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The Differential Evolution algorithm uses an elitist selection, constantly pushing the population in a strict downhill search, in an attempt to guarantee the conservation of the best individuals. However, when this operator is combined with an exploitative mutation operator, it can lead to premature convergence to an undesired region of attraction. To alleviate this problem, we propose the Non-Monotone Differential Evolution algorithm. To this end, we allow the best individual to perform some uphill movements, greatly enhancing the exploration of the search space.
This approach further aids the algorithm’s ability to escape undesired regions of the search space and improves its performance. The proposed approach utilizes already computed information and does not require extra function evaluations. Experimental results indicate that the proposed approach provides stable and reliable convergence.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">N. G. Pavlidis</style></author><author><style face="normal" font="default" size="100%">E. G. Pavlidis</style></author><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Computational Intelligence Algorithms For Risk-Adjusted Trading Strategies</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Congress on Evolutionary Computation, 2007.
CEC 2007</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">computational intelligence algorithm</style></keyword><keyword><style  face="normal" font="default" size="100%">differential evolution</style></keyword><keyword><style  face="normal" font="default" size="100%">financial market</style></keyword><keyword><style  face="normal" font="default" size="100%">foreign exchange market</style></keyword><keyword><style  face="normal" font="default" size="100%">foreign exchange trading</style></keyword><keyword><style  face="normal" font="default" size="100%">generalized moving average rule</style></keyword><keyword><style  face="normal" font="default" size="100%">genetic algorithms</style></keyword><keyword><style  face="normal" font="default" size="100%">genetic programming</style></keyword><keyword><style  face="normal" font="default" size="100%">optimization</style></keyword><keyword><style  face="normal" font="default" size="100%">pattern detection</style></keyword><keyword><style  face="normal" font="default" size="100%">risk analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">risk-adjusted trading strategy</style></keyword><keyword><style  face="normal" font="default" size="100%">statistical testing</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2007</style></year><pub-dates><date><style  face="normal" font="default" size="100%">September</style></date></pub-dates></dates><pub-location><style face="normal" font="default" size="100%">Singapore</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This paper investigates the performance of trading strategies identified through computational intelligence techniques. 
We focus on trading rules derived by genetic programming, as well as generalized moving average rules optimized through differential evolution. The performance of these rules is investigated using recently proposed risk-adjusted evaluation measures, and statistical testing is carried out through simulation. Overall, the moving average rules proved to be more robust, but genetic programming seems more promising in terms of generating higher profits and detecting novel patterns in the data.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Higher-Order Neural Networks Training Using Differential Evolution</style></title><secondary-title><style face="normal" font="default" size="100%">International Conference of Numerical Analysis and Applied Mathematics</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2006</style></year></dates><publisher><style face="normal" font="default" size="100%">Wiley-VCH</style></publisher><pub-location><style face="normal" font="default" size="100%">Hersonissos, Crete, Greece</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M. G. Epitropakis</style></author><author><style face="normal" font="default" size="100%">V. P. Plagianakos</style></author><author><style face="normal" font="default" size="100%">M. N. 
Vrahatis</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Integer Weight Higher-Order Neural Network Training Using Distributed Differential Evolution</style></title><secondary-title><style face="normal" font="default" size="100%">International Conference of Computational Methods in Sciences and Engineering</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2006</style></year></dates><publisher><style face="normal" font="default" size="100%">LSCCS</style></publisher><pub-location><style face="normal" font="default" size="100%">Crete, Greece</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language></record></records></xml>