
A Comparative Evaluation of the GPU Vs. the CPU for Parallelization of Evolutionary Algorithms through Multiple Independent Runs


Authors

Anna Syberfeldt
University of Skovde, Department of Engineering Science, Skovde, Sweden
Tom Ekblom
University of Skovde, Department of Engineering Science, Skovde, Sweden

Abstract


Multiple independent runs of an evolutionary algorithm in parallel are often used to increase the efficiency of parameter tuning or to speed up optimizations involving inexpensive fitness functions. A GPU platform is commonly adopted in the research community to implement parallelization, and this platform has been shown to be superior to the traditional CPU platform in many previous studies. However, it is not clear how efficient the GPU is in comparison with the CPU for parallelizing multiple independent runs, as the vast majority of previous studies focus on parallelization approaches in which the parallel runs are dependent on each other (such as master-slave, coarse-grained, or fine-grained approaches). This study therefore investigates the performance of the GPU in comparison with the CPU in the context of multiple independent runs in order to provide insights into which platform is most efficient. This is done through a number of experiments that evaluate the efficiency of the GPU versus the CPU in various scenarios. An analysis of the results shows that the GPU is powerful, but that there are scenarios where the CPU outperforms the GPU. This means that the GPU is not universally the best option for parallelizing multiple independent runs and that the choice of computation platform should therefore be an informed decision. To facilitate this decision and improve the efficiency of optimizations involving multiple independent runs, the paper provides a number of recommendations for when and how to use the GPU.
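
To illustrate the parallelization scheme studied here, the sketch below shows one common way to map multiple independent runs onto a GPU: one independent evolutionary run per CUDA thread, each with its own random-number stream. This is a minimal illustrative sketch only, not the implementation evaluated in the paper; the (1+1)-EA, the sphere test function, and the sizes DIM, RUNS, and GENERATIONS are assumptions introduced purely for the example.

// Hypothetical sketch: multiple independent (1+1)-EA runs, one per GPU thread.
#include <cstdio>
#include <curand_kernel.h>

#define DIM 8            // decision variables per run (assumed)
#define RUNS 1024        // number of independent runs (assumed)
#define GENERATIONS 5000 // mutation/selection steps per run (assumed)

__device__ float sphere(const float *x) {
    // Toy fitness function: sum of squares, minimum at the origin.
    float s = 0.0f;
    for (int i = 0; i < DIM; ++i) s += x[i] * x[i];
    return s;
}

__global__ void independentRuns(float *bestFitness, unsigned long long seed) {
    int run = blockIdx.x * blockDim.x + threadIdx.x;
    if (run >= RUNS) return;

    // Per-run random stream so the runs stay statistically independent.
    curandState rng;
    curand_init(seed, run, 0, &rng);

    // Random initial parent in [-5, 5]^DIM.
    float parent[DIM], child[DIM];
    for (int i = 0; i < DIM; ++i)
        parent[i] = 10.0f * curand_uniform(&rng) - 5.0f;
    float parentFit = sphere(parent);

    // (1+1)-EA: Gaussian mutation, keep the child if it is better.
    for (int g = 0; g < GENERATIONS; ++g) {
        for (int i = 0; i < DIM; ++i)
            child[i] = parent[i] + 0.1f * curand_normal(&rng);
        float childFit = sphere(child);
        if (childFit < parentFit) {
            parentFit = childFit;
            for (int i = 0; i < DIM; ++i) parent[i] = child[i];
        }
    }
    bestFitness[run] = parentFit;
}

int main() {
    float *dBest, hBest[RUNS];
    cudaMalloc(&dBest, RUNS * sizeof(float));

    // Launch all independent runs in one kernel, 128 threads per block.
    independentRuns<<<(RUNS + 127) / 128, 128>>>(dBest, 42ULL);
    cudaMemcpy(hBest, dBest, RUNS * sizeof(float), cudaMemcpyDeviceToHost);

    // Report the best result over all independent runs.
    float best = hBest[0];
    for (int r = 1; r < RUNS; ++r) if (hBest[r] < best) best = hBest[r];
    printf("best fitness over %d runs: %g\n", RUNS, best);

    cudaFree(dBest);
    return 0;
}

On a CPU, the same scheme would typically map each independent run to an operating-system thread or process instead of a GPU thread; which mapping is faster in a given scenario is exactly the question the paper examines.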

Keywords


Evolutionary Algorithms, Parallelization, Multiple Independent Runs, GPU, CPU.
