Matrix Multiplication (cudaFlow)

Following up on Matrix Multiplication, this page studies how to accelerate a matrix multiplication workload on a GPU using tf::cudaFlow.

Define a Matrix Multiplication Kernel

A GPU can perform many more parallel computations than a CPU, which makes it especially useful for data-intensive workloads such as matrix multiplication. With a GPU, we express the parallel patterns at a fine-grained level. The kernel, written in CUDA, is described as follows:

// CUDA kernel to perform matrix multiplication
__global__ void matmul(int* A, int* B, int* C, int M, int K, int N) {
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;
  int sum = 0;
  if(col < N && row < M) {
    for(int i = 0; i < K; i++) {
      sum += A[row*K + i] * B[i*N + col];
    }
    C[row*N + col] = sum;
  }
}

Each CUDA thread corresponds to one element of C and computes its result. Instead of storing each matrix in a 2D array, we use a 1D layout to ease the data transfer between CPU and GPU. In a row-major layout, an element (x, y) of the 2D matrix is addressed at x * width + y in the flattened 1D layout.

Define a cudaFlow for Matrix Multiplication

The next step is to allocate memory for A, B, and C on the GPU. We create three tasks, each calling cudaMalloc to allocate space for one matrix. Then, we create a cudaFlow to offload the matrix multiplication to the GPU. The entire code is described as follows:

void matrix_multiplication(int* A, int* B, int* C, int M, int K, int N) {

  tf::Taskflow taskflow;
  tf::Executor executor;

  int *da, *db, *dc;  // device (GPU) storage for A, B, and C

  // allocate the gpu storage for A
  tf::Task allocate_a = taskflow.emplace([&](){
    cudaMalloc(&da, M*K*sizeof(int));
  }).name("allocate_a");

  // allocate the gpu storage for B
  tf::Task allocate_b = taskflow.emplace([&](){
    cudaMalloc(&db, K*N*sizeof(int));
  }).name("allocate_b");

  // allocate the gpu storage for C
  tf::Task allocate_c = taskflow.emplace([&](){
    cudaMalloc(&dc, M*N*sizeof(int));
  }).name("allocate_c");

  // create a cudaFlow task to run the matrix multiplication
  tf::Task cudaFlow = taskflow.emplace([&](){

    tf::cudaFlow cf;

    // copy data into da and db, and copy the result out of dc
    tf::cudaTask copy_da = cf.copy(da, A, M*K).name("H2D_A");
    tf::cudaTask copy_db = cf.copy(db, B, K*N).name("H2D_B");
    tf::cudaTask copy_hc = cf.copy(C, dc, M*N).name("D2H_C");

    dim3 grid ((N+16-1)/16, (M+16-1)/16);
    dim3 block(16, 16);

    tf::cudaTask kmatmul = cf.kernel(grid, block, 0, matmul, da, db, dc, M, K, N)
                             .name("matmul");

    kmatmul.succeed(copy_da, copy_db)
           .precede(copy_hc);

    // launch the cudaFlow
    tf::cudaStream stream;
    cf.run(stream);
    stream.synchronize();
  }).name("cudaFlow");

  // free the gpu storage
  auto free = taskflow.emplace([&](){
    cudaFree(da);
    cudaFree(db);
    cudaFree(dc);
  }).name("free");

  // create the dependencies
  cudaFlow.succeed(allocate_a, allocate_b, allocate_c)
          .precede(free);

  // dump the graph without unfolding the cudaFlow
  taskflow.dump(std::cout);

  // run the taskflow
  executor.run(taskflow).wait();

  // dump the entire execution graph including the unfolded cudaFlow
  taskflow.dump(std::cout);
}

Within the cudaFlow, we create two host-to-device (H2D) tasks that copy data from A and B to da and db, one device-to-host (D2H) task that copies the result from dc to C, and one kernel task that launches matmul on the GPU (by default, GPU 0). The H2D tasks precede the kernel, and the kernel precedes the D2H task. These GPU operations form a GPU task graph managed by the cudaFlow.
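For completeness, a minimal driver that exercises this routine might look like the sketch below; the matrix sizes, fill values, and the correctness check are illustrative assumptions, not part of the original example.

#include <algorithm>
#include <cassert>

// hypothetical driver: allocate host matrices, run the taskflow, spot-check C
int main() {
  const int M = 1024, K = 512, N = 256;
  int* A = new int[M*K];
  int* B = new int[K*N];
  int* C = new int[M*N];
  std::fill_n(A, M*K, 1);  // arbitrary test values
  std::fill_n(B, K*N, 2);
  matrix_multiplication(A, B, C, M, K, N);
  assert(C[0] == 2*K);     // with these fills, every entry of C equals 2*K
  delete[] A;
  delete[] B;
  delete[] C;
}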
The first dump of the taskflow shows the top-level graph, in which the cudaFlow appears as a single task whose internal GPU operations are not yet unfolded. A cudaFlow encapsulates a GPU task dependency graph similar to a tf::Subflow (see Subflow Tasking). In order to visualize it, we need to execute the graph first and then dump the taskflow.

Benchmarking

We run three versions of matrix multiplication, sequential CPU, parallel CPUs, and one GPU, on a machine with 12 Intel i7-8700 CPUs at 3.20 GHz and an Nvidia RTX 2080 GPU, using various matrix sizes of A, B, and C. The results are tabulated below, after a sketch of a possible timing harness.
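The original page does not show its timing code; a small wrapper around std::chrono, such as the sketch below, is one way the per-version runtimes could have been measured (measure_ms is a hypothetical helper, not part of Taskflow).

#include <chrono>

// hypothetical helper: run a callable once and return its wall-clock time in ms
template <typename F>
double measure_ms(F&& f) {
  auto beg = std::chrono::steady_clock::now();
  f();
  auto end = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::milli>(end - beg).count();
}

// usage, e.g.:
//   double gpu_ms = measure_ms([&](){ matrix_multiplication(A, B, C, M, K, N); });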
A          B          C          CPU Sequential  CPU Parallel  GPU Parallel
10x10      10x10      10x10      0.142 ms        0.414 ms      82 ms
100x100    100x100    100x100    1.641 ms        0.733 ms      83 ms
1000x1000  1000x1000  1000x1000  1532 ms         504 ms        85 ms
2000x2000  2000x2000  2000x2000  25688 ms        4387 ms       133 ms
3000x3000  3000x3000  3000x3000  104838 ms       16170 ms      214 ms
4000x4000  4000x4000  4000x4000  250133 ms       39646 ms      427 ms

As the matrix size increases, the speed-up of the GPU over the CPUs becomes prominent. For example, at 4000x4000, the GPU version runs 585.8 times faster than the sequential CPU version and 92.8 times faster than the parallel CPU version.
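For reference, the sequential CPU baseline in the table presumably corresponds to a straightforward triple loop over the same row-major 1D layout; the original does not list this code, so the following is only a sketch.

// presumed sequential CPU baseline: C = A * B in row-major 1D layout
void matmul_seq(const int* A, const int* B, int* C, int M, int K, int N) {
  for(int row = 0; row < M; ++row) {
    for(int col = 0; col < N; ++col) {
      int sum = 0;
      for(int i = 0; i < K; ++i) {
        sum += A[row*K + i] * B[i*N + col];
      }
      C[row*N + col] = sum;
    }
  }
}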