Proceedings of ISP RAS, 2021, Volume 33, Issue 5, Pages 167–180 (Mi tisp634)

Using the functional programming library for solving numerical problems on graphics accelerators with CUDA technology

M. M. Krasnov^{a,b}, O. B. Feodoritova^{a}

a Keldysh Institute of Applied Mathematics of Russian Academy of Sciences
b Moscow Institute of Physics and Technology

Abstract: Modern graphics accelerators (GPUs) can significantly speed up the execution of numerical tasks. However, porting programs to graphics accelerators is not easy: sometimes it amounts to rewriting them almost completely (for example, when the OpenCL technology is used), which creates the onerous problem of maintaining two independent source codes. The CUDA technology developed by NVIDIA, by contrast, makes it possible to keep a single source code for both conventional processors (CPUs) and graphics accelerators; the machine code produced from this single text depends on which compiler it is built with (an ordinary one such as gcc, icc or msvc, or nvcc, the compiler for CUDA). Even so, within this single source code the compiler must somehow be told which parts to parallelize on shared memory. For the CPU this is usually done with OpenMP and special compiler pragmas, whereas for CUDA parallelization is performed in a completely different way. The functional programming library developed by the authors makes it possible to hide the choice of shared-memory parallelization mechanism inside the library and to make the user's source code completely independent of the computing device used (CPU or CUDA). This article shows how this can be done.
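
To illustrate the idea described in the abstract, the sketch below shows in outline how a single C++ source can hide the shared-memory parallelization mechanism behind one library-style function: when compiled with nvcc the work is dispatched to a CUDA kernel, while a host compiler such as gcc uses an OpenMP loop. This is a minimal sketch under stated assumptions; the names map_inplace, Square and the HD macro are illustrative and do not reproduce the authors' actual library API.

// Single source for CPU (gcc/icc/msvc) and CUDA (nvcc) builds.
#include <cstdio>
#include <vector>

#ifdef __CUDACC__
#define HD __host__ __device__   // mark user functors callable on the GPU
#else
#define HD
#endif

#ifdef __CUDACC__
// nvcc build: apply the functor on the GPU with a CUDA kernel.
template<class F>
__global__ void map_kernel(F f, double* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = f(data[i]);
}

template<class F>
void map_inplace(F f, double* data, int n) {
    double* d = nullptr;
    cudaMalloc(&d, n * sizeof(double));
    cudaMemcpy(d, data, n * sizeof(double), cudaMemcpyHostToDevice);
    map_kernel<<<(n + 255) / 256, 256>>>(f, d, n);
    cudaMemcpy(data, d, n * sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(d);
}
#else
// Host build: parallelize the same loop with OpenMP.
template<class F>
void map_inplace(F f, double* data, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) data[i] = f(data[i]);
}
#endif

// User code below is identical for CPU and CUDA builds.
struct Square {
    HD double operator()(double x) const { return x * x; }
};

int main() {
    std::vector<double> v(1000, 2.0);
    map_inplace(Square{}, v.data(), static_cast<int>(v.size()));
    std::printf("v[0] = %f\n", v[0]);   // prints 4.000000 on either device
    return 0;
}

The design choice shown here is the one the abstract describes: the #ifdef on the compiler-defined macro __CUDACC__ lives entirely inside the library-style function, so the user's code (the Square functor and the call to map_inplace) contains no device-specific constructs.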

Keywords: C++, functional programming library, CUDA, OpenMP, OpenCL, OpenACC.

DOI: 10.15514/ISPRAS-2021-33(5)-10


