Brice Pollock

Adventurer & Mountain Enthusiast

May 6, 2011

Computation time remains a major barrier to applying optimization to complex real-world problems. Evolutionary algorithms have helped reduce computation time and improve solution accuracy for such problems, but these programs can still require long run times. To address this, MPI was used to parallelize a genetic algorithm on a high-performance machine, which was tested using the continuous Rosenbrock function, Figure 1.

The genetic algorithm implementation, written in C, used the open-source Genetic Algorithm Utility Library (GAUL) and was developed on the Red Hat Linux operating system. The first step was serial optimization: genetic algorithm functions for genome selection, crossover, mutation, and other operators were experimented with for the Rosenbrock function. This evolution scheme, combined with specific convergence criteria, reduced the number of iterations to solution from 12,000–16,000 for the GAUL sample codes to under one thousand. Parallelizing the genetic algorithm then yielded a scalable solution with an average solution time **21 times faster than serial for sixteen processors**. Finally, profiling with the Jumpshot application showed that the MPI implementation had good load balancing across all processors and remained scalable for increasing numbers of processors. A summary of results is shown in Table 1.