Distributed Evolutionary Algorithms in Python

DEAP

DEAP is a novel evolutionary computation framework for rapid prototyping and testing of ideas. It seeks to make algorithms explicit and data structures transparent. It works in perfect harmony with parallelization mechanisms such as multiprocessing and SCOOP.

DEAP includes the following features:

  • Genetic algorithm using any imaginable representation
    • List, Array, Set, Dictionary, Tree, Numpy Array, etc.
  • Genetic programming using prefix trees
    • Loosely typed, Strongly typed
    • Automatically defined functions
  • Evolution strategies (including CMA-ES)
  • Multi-objective optimisation (NSGA-II, SPEA2, MO-CMA-ES)
  • Co-evolution (cooperative and competitive) of multiple populations
  • Parallelization of the evaluations (and more)
  • Hall of Fame of the best individuals that lived in the population
  • Checkpoints that take snapshots of a system regularly
  • Benchmarks module containing most common test functions
  • Genealogy of an evolution (that is compatible with NetworkX)
  • Examples of alternative algorithms: Particle Swarm Optimization, Differential Evolution, Estimation of Distribution Algorithm

See the DEAP User’s Guide for DEAP documentation.

Installation

We encourage you to use easy_install or pip to install DEAP on your system. Other installation procedures, such as apt-get or yum, usually provide an outdated version.

pip install deap

The latest version can be installed with

pip install git+https://github.com/DEAP/deap@master

If you wish to build from sources, download or clone the repository and type

python setup.py install


Source

#StackBounty: #neural-networks #genetic-algorithms Is the NEAT algorithm unbalanced?

Bounty: 50

So I’m developing an implementation of the NEAT algorithm. I understand how it works. But during my testing phase I saw something interesting, related to this quote:

In the add node mutation an existing connection is split and the new node placed where the old connection used to be.

Clear. Let’s assume we have a network without any hidden nodes: two inputs, two outputs, fully connected. That means a total of 4 connections.

Let’s add a node. It splits one connection into two. This affects only one of the outputs: there are now a total of 3 connections on the path to one output, while only 2 lead to the other.

Let’s add a node again. It now has a 3/5 chance of changing a link on the path to the first output, but only a 2/5 chance of changing a link on the path to the other.

If you kept adding nodes indefinitely, you would tend to find that only a small number of nodes affect one of the outputs, while a large number affect the other.
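The growth described above can be sketched with a toy simulation (a deliberate simplification, not a real NEAT genome: each connection is tagged only with the output it ultimately feeds, and an add-node mutation splits a uniformly chosen connection, adding one more connection on that output’s path):

```python
import random

random.seed(42)

# Start with 2 inputs fully connected to 2 outputs: 4 connections.
# Each entry records which output the connection ultimately feeds.
connections = [0, 0, 1, 1]

for _ in range(10_000):
    # Add-node mutation: pick a connection uniformly at random and split it.
    # The split replaces one connection with two on the same path,
    # so the connection count for that output grows by one.
    target_output = random.choice(connections)
    connections.append(target_output)

counts = [connections.count(0), connections.count(1)]
fraction = counts[0] / len(connections)
```

Under this simplification the process is a Pólya urn: the more connections already lie on an output’s path, the more likely the next split lands there, so early splits get reinforced and the final proportions are random rather than converging to 50/50.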


Question: It is clear that the add node mutation on its own is unbalanced. But does the add connection mutation neutralise this imbalance?



#StackBounty: #regression #weighted-regression #weighted-sampling #weighted-data #genetic-algorithms Can someone point me towards resea…

Bounty: 100

I am working on fitness case importance for symbolic regression and found a paper, “Step-wise Adaptation of Weights for Symbolic Regression with Genetic Programming”, which assigns weights to fitness cases to give importance to points that evolution has not yet fitted well, both to boost performance and to get GP out of local optima.

This publication is quite old, and I am looking for newer work on fitness case importance, but I am not able to find any such publication. Instead I find publications related to sampling via random selection in various ways.

So, can someone point me towards research relevant to importance or weighting of data points, like the SAW (stepwise adaptation of weights) technique?
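For concreteness, the core SAW idea can be sketched as follows. The function names, the error threshold, and the additive update rule here are illustrative assumptions, not the paper’s exact scheme:

```python
def saw_update(weights, errors, threshold=0.1, delta=1.0):
    """Stepwise adaptation: raise the weight of every fitness case
    the current best individual still gets wrong (error above threshold)."""
    return [w + delta if e > threshold else w
            for w, e in zip(weights, errors)]

def weighted_fitness(errors, weights):
    """Weighted mean error over fitness cases; lower is better."""
    return sum(w * e for w, e in zip(weights, errors)) / sum(weights)

# Example: three fitness cases, the second is still poorly fitted,
# so its weight is doubled and future evaluations emphasise it.
weights = [1.0, 1.0, 1.0]
errors = [0.02, 0.9, 0.05]
weights = saw_update(weights, errors)
```

Calling `saw_update` periodically between generations reweights the fitness landscape toward the hard cases, which is the mechanism the question is asking for newer variants of.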

Thank you.

