High Performance Computing

May 6, 2011

Computational time remains a major barrier to applying optimization to complex real-world problems. Evolutionary algorithms have reduced computation time and improved solution accuracy for such problems, but they can still require long run times. To address this, MPI was used to parallelize a genetic algorithm on a high-performance machine, and the implementation was tested on the continuous Rosenbrock function shown in Figure 1.

Figure 1: Rosenbrock function, f(x, y) = (1 - x)^2 + 100(y - x^2)^2, with global minimum f(1, 1) = 0.


The genetic algorithm implementation, written in C, used the open-source Genetic Algorithm Utility Library (GAUL) and was developed on the Red Hat Linux operating system. The first step was serial optimization: genetic algorithm functions for genome selection, crossover, mutation, and other operators were tuned for the Rosenbrock function. This evolution implementation, combined with specific convergence criteria, reduced the number of iterations to a solution from 12,000 – 16,000 for the GAUL sample codes to under one thousand. Parallelizing the genetic algorithm then produced a scalable solution with an average solution time 21 times faster than serial on sixteen processors. Finally, profiling with the Jumpshot application showed that the MPI implementation was well load-balanced across all processors and scaled with increasing numbers of processors. A summary of results is shown in Table 1.

Table 1: MPI timing results. "Iterations" is the number of iterations until a solution is reached, "solution fitness" is the final value of the optimization function, and "computation time" is the number of seconds taken to reach that solution. The one-processor row is the serial outcome; the remaining rows are for the parallelized genetic algorithm. The optimum of the Rosenbrock function is located at f(1, 1) = 0.