Compile Taskflow with CUDA

Install CUDA Compiler

To compile Taskflow with CUDA code, you need the nvcc compiler. Please visit the official page of Downloading CUDA Toolkit.

Compile Source Code Directly

Taskflow's GPU programming interface for CUDA is tf::cudaFlow. Consider the following simple.cu program that launches a single kernel function to output a message:

#include <taskflow/taskflow.hpp>
#include <taskflow/cudaflow.hpp>
#include <taskflow/cuda/for_each.hpp>

int main(int argc, const char** argv) {

  tf::Executor executor;
  tf::Taskflow taskflow;

  tf::Task task1 = taskflow.emplace([](){}).name("cpu task");
  tf::Task task2 = taskflow.emplace([](){
    // create a cudaFlow of a single-threaded task
    tf::cudaFlow cf;
    cf.single_task([] __device__ () { printf("hello cudaFlow!\n"); });
    // launch the cudaflow through a stream
    tf::cudaStream stream;
    cf.run(stream);
    stream.synchronize();
  }).name("gpu task");

  task1.precede(task2);

  executor.run(taskflow).wait();

  return 0;
}

The easiest way to compile Taskflow with CUDA code (e.g., cudaFlow, kernels) is to use nvcc:

~$ nvcc -std=c++17 -I path/to/taskflow/ --extended-lambda simple.cu -o simple
~$ ./simple
hello cudaFlow!

Compile Source Code Separately

Large GPU applications often compile a program into separate objects and link them together to form an executable or a library. You can compile your CPU code and GPU code separately with Taskflow using nvcc and other compilers (such as g++ and clang++). Consider the following example that defines two tasks in two separate pieces of source code, main.cpp and cudaflow.cpp:

// main.cpp
#include <iostream>
#include <taskflow/taskflow.hpp>

tf::Task make_cudaflow(tf::Taskflow& taskflow);  // creates a cudaFlow task

int main() {

  tf::Executor executor;
  tf::Taskflow taskflow;

  tf::Task task1 = taskflow.emplace([](){ std::cout << "main.cpp!\n"; })
                           .name("cpu task");
  tf::Task task2 = make_cudaflow(taskflow);

  task1.precede(task2);

  executor.run(taskflow).wait();

  return 0;
}

// cudaflow.cpp
#include <taskflow/taskflow.hpp>
#include <taskflow/cudaflow.hpp>

tf::Task make_cudaflow(tf::Taskflow& taskflow) {
  return taskflow.emplace([](){
    // create a cudaFlow of a single-threaded task
    tf::cudaFlow cf;
    cf.single_task([] __device__ () { printf("cudaflow.cpp!\n"); });
    // launch the cudaflow through a stream
    tf::cudaStream stream;
    cf.run(stream);
    stream.synchronize();
  }).name("gpu task");
}

Compile each source file to an object file (using g++ as an example):

~$ g++ -std=c++17 -I path/to/taskflow -c main.cpp -o main.o
~$ nvcc -std=c++17 --extended-lambda -x cu -I path/to/taskflow \
   -dc cudaflow.cpp -o cudaflow.o
~$ ls
# now we have the two compiled .o objects, main.o and cudaflow.o
main.o cudaflow.o

The --extended-lambda option tells nvcc to generate GPU code for lambdas annotated with __device__. The -x cu option tells nvcc to treat the input files as .cu files containing both CPU and GPU code; by default, nvcc treats .cpp files as CPU-only code. This option is required here to have nvcc generate device code, and it is also a handy way to avoid renaming source files in larger projects. The -dc option tells nvcc to generate device code for later linking.
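The same two-step compilation works with clang++ in place of g++ for the CPU-side object; only the compiler name changes, since all the device code lives in cudaflow.cpp. A minimal sketch, assuming clang++ is on your PATH and path/to/taskflow points to your Taskflow include directory:

~$ clang++ -std=c++17 -I path/to/taskflow -c main.cpp -o main.o
~$ nvcc -std=c++17 --extended-lambda -x cu -I path/to/taskflow \
   -dc cudaflow.cpp -o cudaflow.o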
You may also need to use the -arch option to tell nvcc to target a compatible SM architecture. For instance, the following command requires device code linking to have compute capability 7.5 or later:

~$ nvcc -std=c++17 --extended-lambda -x cu -arch=sm_75 -I path/to/taskflow \
   -dc cudaflow.cpp -o cudaflow.o

Link Objects Using nvcc

Linking compiled object code with nvcc requires nothing special: simply replace your normal compiler with nvcc, and it takes care of all the necessary steps:

~$ nvcc main.o cudaflow.o -o main

# run the main program
~$ ./main
main.cpp!
cudaflow.cpp!

Link Objects Using Different Linkers

You can choose a compiler other than nvcc for the final link step. Since your CPU compiler does not know how to link CUDA device code, you have to add a step in your build to have nvcc link the CUDA device code, using the option -dlink:

~$ nvcc -o gpuCode.o -dlink main.o cudaflow.o

This step links all the device object code and places it into gpuCode.o. Note that this step does not link the CPU object code; it discards the CPU object code in main.o and cudaflow.o. To complete the link to an executable, you can use, for example, ld or g++:

# replace /usr/local/cuda/lib64 with your own CUDA library installation path
~$ g++ -pthread -L /usr/local/cuda/lib64/ -lcudart \
   gpuCode.o main.o cudaflow.o -o main

# run the main program
~$ ./main
main.cpp!
cudaflow.cpp!

We give g++ all of the objects again because it needs the CPU object code, which is not in gpuCode.o. The device code stored in the original objects, main.o and cudaflow.o, does not conflict with the code in gpuCode.o: g++ ignores device code because it does not know how to link it, and the device code in gpuCode.o is already linked and ready to go. This intentional ignorance is extremely useful in large builds where intermediate objects may contain both CPU and GPU code. In this case, we just let the GPU and CPU linkers each do their own job, noting that the CPU linker is always the last one we run. The CUDA Runtime API library is automatically linked when we use nvcc for linking, but we must explicitly link it (-lcudart) when using another linker.
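A quick way to verify that the runtime library was indeed linked is to inspect the executable's shared-library dependencies. The following sketch uses ldd on Linux; the exact library path and version in the output depend on your CUDA installation:

~$ ldd main | grep cudart
# expect a line resolving libcudart.so to your CUDA library path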
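Finally, if the executable must run across several GPU generations, you can generalize the -arch flag shown earlier and embed code for multiple architectures with one -gencode option per target. A sketch, assuming you want to cover compute capabilities 7.5 and 8.0:

~$ nvcc -std=c++17 --extended-lambda -x cu -I path/to/taskflow \
   -gencode arch=compute_75,code=sm_75 \
   -gencode arch=compute_80,code=sm_80 \
   -dc cudaflow.cpp -o cudaflow.o

Each -gencode pair embeds machine code for one architecture; you generally want to pass the same -gencode options to the device-link step (-dlink) as well, so the linked device code stays consistent with what you compiled.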