GPU Tasking (cudaFlow)

Modern scientific computing typically leverages GPU-powered parallel processing cores to speed up large-scale applications. This chapter discusses how to implement CPU-GPU heterogeneous tasking algorithms with Nvidia CUDA.

Include the Header

You need to include the header file, taskflow/cuda/cudaflow.hpp, for creating a GPU task graph using tf::cudaFlow.

#include <taskflow/cuda/cudaflow.hpp>

What is a CUDA Graph?

CUDA Graph is a new execution model that enables a series of CUDA kernels to be defined and encapsulated as a single unit, i.e., a task graph of operations, rather than a sequence of individually launched operations. This organization allows multiple GPU operations to be launched through a single CPU operation, and hence reduces launching overheads, especially for kernels of short running time. The benefit of CUDA Graph can be demonstrated in the figure below:

[Figure: execution timeline comparing a sequence of individually launched short kernels against the same sequence executed as one CUDA graph]

In this example, a sequence of short kernels is launched one by one by the CPU. The CPU launching overhead creates a significant gap between the kernels. If we replace this sequence of kernels with a CUDA graph, we initially spend a little extra time building the graph and launching the whole graph in one go on the first occasion, but subsequent executions are very fast, as there is very little gap between the kernels. The difference is more pronounced when the same sequence of operations is repeated many times, for example, over many training epochs in a machine learning workload. In that case, the initial cost of building and launching the graph is amortized over the entire set of training iterations. A comprehensive introduction to CUDA Graph can be found in the CUDA Graph Programming Guide.
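To make the amortization argument concrete, the sketch below shows what this looks like with the raw CUDA Graph API, independent of Taskflow: a sequence of short kernel launches is captured into a graph once and then replayed with a single launch per iteration. The kernel short_kernel and the iteration counts are hypothetical placeholders.

// a minimal sketch of the amortization argument, assuming a hypothetical
// short-running kernel named short_kernel
__global__ void short_kernel(float* data, int n);

void replay_as_graph(float* data, int n, cudaStream_t stream) {
  // capture a sequence of 20 short kernel launches into one CUDA graph
  cudaGraph_t graph;
  cudaGraphExec_t exec;
  cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
  for (int i = 0; i < 20; ++i) {
    short_kernel<<<(n + 255) / 256, 256, 0, stream>>>(data, n);
  }
  cudaStreamEndCapture(stream, &graph);
  cudaGraphInstantiate(&exec, graph, nullptr, nullptr, 0);

  // pay the build cost once; each subsequent iteration is a single
  // CPU call that launches all 20 kernels
  for (int epoch = 0; epoch < 1000; ++epoch) {
    cudaGraphLaunch(exec, stream);
  }
  cudaStreamSynchronize(stream);

  cudaGraphExecDestroy(exec);
  cudaGraphDestroy(graph);
}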
Create a cudaFlow

Taskflow leverages CUDA Graph to enable concurrent CPU-GPU tasking using a task graph model called tf::cudaFlow. A cudaFlow manages a CUDA graph explicitly to execute dependent GPU operations in a single CPU call. The following example implements a cudaFlow that performs a saxpy (A·X Plus Y) workload:

#include <taskflow/cuda/cudaflow.hpp>

// saxpy (single-precision A·X Plus Y) kernel
__global__ void saxpy(int n, float a, float* x, float* y) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    y[i] = a * x[i] + y[i];
  }
}

// main function begins
int main() {

  const unsigned N = 1 << 20;      // size of the vector

  std::vector<float> hx(N, 1.0f);  // x vector at host
  std::vector<float> hy(N, 2.0f);  // y vector at host

  float* dx {nullptr};             // x vector at device
  float* dy {nullptr};             // y vector at device

  cudaMalloc(&dx, N*sizeof(float));
  cudaMalloc(&dy, N*sizeof(float));

  tf::cudaFlow cudaflow;

  // create data transfer tasks
  tf::cudaTask h2d_x = cudaflow.copy(dx, hx.data(), N).name("h2d_x");
  tf::cudaTask h2d_y = cudaflow.copy(dy, hy.data(), N).name("h2d_y");
  tf::cudaTask d2h_x = cudaflow.copy(hx.data(), dx, N).name("d2h_x");
  tf::cudaTask d2h_y = cudaflow.copy(hy.data(), dy, N).name("d2h_y");

  // launch saxpy<<<(N+255)/256, 256, 0>>>(N, 2.0f, dx, dy)
  tf::cudaTask kernel = cudaflow.kernel(
    (N+255)/256, 256, 0, saxpy, N, 2.0f, dx, dy
  ).name("saxpy");

  kernel.succeed(h2d_x, h2d_y)
        .precede(d2h_x, d2h_y);

  // run the cudaflow through a stream
  tf::cudaStream stream;
  cudaflow.run(stream);
  stream.synchronize();

  // dump the cudaflow
  cudaflow.dump(std::cout);
}

The cudaFlow graph consists of two CPU-to-GPU data copies (h2d_x and h2d_y), one kernel (saxpy), and two GPU-to-CPU data copies (d2h_x and d2h_y), in this order of their task dependencies. We do not expend yet another effort on simplifying kernel programming; instead, we focus on tasking CUDA operations and their dependencies. In other words, tf::cudaFlow is a lightweight C++ abstraction over CUDA Graph that lets users fully take advantage of the CUDA features commensurate with their domain knowledge, while leaving the difficult details of task parallelism to Taskflow.

Compile a cudaFlow Program

Use nvcc to compile a cudaFlow program:

~$ nvcc -std=c++17 my_cudaflow.cu -I path/to/include/taskflow -O2 -o my_cudaflow
~$ ./my_cudaflow

Please visit the page Compile Taskflow with CUDA for more details.

Run a cudaFlow on a Specific GPU

By default, a cudaFlow runs on the current GPU context associated with the caller, which is typically GPU 0. Each CUDA GPU has an integer identifier in the range [0, N) that represents the context of that GPU, where N is the number of GPUs in the system. You can run a cudaFlow on a specific GPU by switching the context to that GPU using tf::cudaScopedDevice. The code below creates a cudaFlow and runs it on GPU 2.

{
  // create an RAII-styled switcher to the context of GPU 2
  tf::cudaScopedDevice context(2);

  // create a cudaFlow under GPU 2
  tf::cudaFlow cudaflow;
  // ...

  // create a stream under GPU 2 and offload the cudaflow to that GPU
  tf::cudaStream stream;
  cudaflow.run(stream);
  stream.synchronize();
}

tf::cudaScopedDevice is an RAII-styled wrapper that performs a scoped switch to the given GPU context. When the scope is destroyed, it switches back to the original context. tf::cudaScopedDevice allows you to place a cudaFlow on a particular GPU device, but it is your responsibility to ensure correct memory access. For example, you must not allocate a memory block on GPU 2 and then access it from a kernel on GPU 0. An easy practice for multi-GPU programming is to allocate unified shared memory using cudaMallocManaged and let the CUDA runtime perform automatic memory migration between GPUs, as shown in the sketch below.
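The following is a minimal sketch of that practice, assuming a machine with at least three GPUs; the kernel scale and the buffer size are hypothetical. The unified memory block is written by a kernel running on GPU 2 and then read directly from the host, with no explicit copies.

#include <taskflow/cuda/cudaflow.hpp>
#include <iostream>

// hypothetical kernel that scales each element in place
__global__ void scale(float* data, int n, float s) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    data[i] *= s;
  }
}

int main() {
  const int N = 1 << 16;

  // unified memory is visible to the host and to every GPU;
  // the CUDA runtime migrates pages between them on demand
  float* data {nullptr};
  cudaMallocManaged(&data, N*sizeof(float));
  for (int i = 0; i < N; ++i) data[i] = 1.0f;

  {
    tf::cudaScopedDevice context(2);  // switch to GPU 2 within this scope

    tf::cudaFlow cudaflow;
    cudaflow.kernel((N+255)/256, 256, 0, scale, data, N, 2.0f);

    tf::cudaStream stream;
    cudaflow.run(stream);
    stream.synchronize();
  }                                   // the original GPU context is restored here

  // the host reads the result directly; no explicit device-to-host copy
  std::cout << data[0] << '\n';

  cudaFree(data);
}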
Create Memory Operation Tasks

cudaFlow provides a set of methods for manipulating device memory. They fall into two categories: raw data and typed data. Raw data operations are methods with the prefix mem, such as memcpy and memset, which operate in bytes. Typed data operations, such as copy, fill, and zero, take a logical count of elements. For instance, the following three methods have the same effect of zeroing sizeof(int)*count bytes of the device memory area pointed to by target.

int* target;
cudaMalloc(&target, count*sizeof(int));

tf::cudaFlow cudaflow;
tf::cudaTask memset_target = cudaflow.memset(target, 0, sizeof(int) * count);
tf::cudaTask same_as_above = cudaflow.fill(target, 0, count);
tf::cudaTask same_as_above_again = cudaflow.zero(target, count);

The method tf::cudaFlow::fill is a more powerful variant of tf::cudaFlow::memset. It can fill a memory area with any value of type T, given that sizeof(T) is 1, 2, or 4 bytes. The following example creates a GPU task to fill count elements in the array target with the value 1234.

cudaflow.fill(target, 1234, count);

A similar concept applies to tf::cudaFlow::memcpy and tf::cudaFlow::copy. The following two methods are equivalent to each other.

cudaflow.memcpy(target, source, sizeof(int) * count);
cudaflow.copy(target, source, count);

Offload a cudaFlow

To offload a cudaFlow to a GPU, use tf::cudaFlow::run and pass a tf::cudaStream created on that GPU. The run method is asynchronous and can be explicitly synchronized through the given stream.

tf::cudaStream stream;
// launch a cudaflow asynchronously through a stream
cudaflow.run(stream);
// wait for the cudaflow to finish
stream.synchronize();

When you offload a cudaFlow using tf::cudaFlow::run, the runtime transforms that cudaFlow (i.e., the application GPU task graph) into a native executable instance and submits it to the CUDA runtime for execution. There is always a one-to-one mapping between a cudaFlow and its native CUDA graph representation (except for those constructed using tf::cudaFlowCapturer).

Update a cudaFlow

Many GPU applications require launching a cudaFlow multiple times and updating node parameters (e.g., kernel parameters and memory addresses) between iterations. cudaFlow allows you to update the parameters of created tasks and run the updated cudaFlow with the new parameters. Every task-creation method in tf::cudaFlow has an overload that updates the parameters of a task created by that method.

tf::cudaStream stream;
tf::cudaFlow cf;

// create a kernel task
tf::cudaTask task = cf.kernel(grid1, block1, shm1, kernel, kernel_args_1);
cf.run(stream);
stream.synchronize();

// update the created kernel task with different parameters
cf.kernel(task, grid2, block2, shm2, kernel, kernel_args_2);
cf.run(stream);
stream.synchronize();

Between successive offloads (i.e., iterative executions of a cudaFlow), you may ONLY update task parameters, such as kernel execution parameters and memory operation parameters. You must NOT change the topology of the cudaFlow, such as adding a new task or a new dependency; this is a limitation of CUDA Graph. In addition, the update methods have the following restrictions:

- kernel task: the kernel function is not allowed to change; this restriction applies to all algorithm tasks that are created using a lambda
- memset and memcpy tasks: the CUDA device(s) to which the operand(s) was allocated/mapped cannot change, and the source/destination memory must be allocated from the same context as the original source/destination memory
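For instance, the same update pattern applies to memory operation tasks. The sketch below is a minimal example, assuming the update overload of tf::cudaFlow::copy mirrors its creation signature as described above; the two host batches are hypothetical. It re-points a copy task to a new source buffer between two offloads while leaving the graph topology untouched.

const size_t count = 1024;
std::vector<float> batch1(count, 1.0f);  // first input batch (hypothetical)
std::vector<float> batch2(count, 2.0f);  // second input batch (hypothetical)

float* dbuf {nullptr};
cudaMalloc(&dbuf, count*sizeof(float));

tf::cudaStream stream;
tf::cudaFlow cf;

// create a copy task for the first batch and offload the cudaFlow
tf::cudaTask h2d = cf.copy(dbuf, batch1.data(), count);
cf.run(stream);
stream.synchronize();

// update the copy task to read from the second batch; only the task
// parameters change, the topology of the cudaFlow stays the same
cf.copy(h2d, dbuf, batch2.data(), count);
cf.run(stream);
stream.synchronize();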
Integrate a cudaFlow into Taskflow

You can create a task to enclose a cudaFlow and run it from a worker thread. The usage of the cudaFlow remains the same, except that the cudaFlow is run by a worker thread from a taskflow task. The following example runs a cudaFlow from a static task:

tf::Executor executor;
tf::Taskflow taskflow;

taskflow.emplace([](){
  // create a cudaFlow inside a static task
  tf::cudaFlow cudaflow;

  // ... create a kernel task
  cudaflow.kernel(...);

  // run the cudaflow through a stream
  tf::cudaStream stream;
  cudaflow.run(stream);
  stream.synchronize();
});

executor.run(taskflow).wait();
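To show how GPU work typically slots in between CPU tasks, here is a fuller, self-contained sketch of the same pattern. The kernel add_one, the buffer size, and the three-task structure (allocate, run the cudaFlow, clean up) are illustrative choices rather than a prescribed recipe.

#include <taskflow/cuda/cudaflow.hpp>

// hypothetical kernel that adds 1 to every element
__global__ void add_one(float* data, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    data[i] += 1.0f;
  }
}

int main() {
  const int N = 1 << 20;
  float* ddata {nullptr};

  tf::Executor executor;
  tf::Taskflow taskflow;

  // CPU task: allocate and zero-initialize device memory
  tf::Task allocate = taskflow.emplace([&](){
    cudaMalloc(&ddata, N*sizeof(float));
    cudaMemset(ddata, 0, N*sizeof(float));
  }).name("allocate");

  // GPU task: build and run a cudaFlow from a worker thread
  tf::Task gpu = taskflow.emplace([&](){
    tf::cudaFlow cudaflow;
    cudaflow.kernel((N+255)/256, 256, 0, add_one, ddata, N);
    tf::cudaStream stream;
    cudaflow.run(stream);
    stream.synchronize();
  }).name("gpu");

  // CPU task: free device memory after the GPU work finishes
  tf::Task cleanup = taskflow.emplace([&](){
    cudaFree(ddata);
  }).name("cleanup");

  allocate.precede(gpu);
  gpu.precede(cleanup);

  executor.run(taskflow).wait();
}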