<!DOCTYPE html>
|
|
<html lang="en">
|
|
<head>
|
|
<meta charset="UTF-8" />
|
|
<title>CUDA Standard Algorithms » Parallel Reduction | Taskflow QuickStart</title>
|
|
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Source+Sans+Pro:400,400i,600,600i%7CSource+Code+Pro:400,400i,600" />
|
|
<link rel="stylesheet" href="m-dark+documentation.compiled.css" />
|
|
<link rel="icon" href="favicon.ico" type="image/vnd.microsoft.icon" />
|
|
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
|
|
<meta name="theme-color" content="#22272e" />
|
|
</head>
|
|
<body>
|
|
<header><nav id="navigation">
|
|
<div class="m-container">
|
|
<div class="m-row">
|
|
<span id="m-navbar-brand" class="m-col-t-8 m-col-m-none m-left-m">
|
|
<a href="https://taskflow.github.io"><img src="taskflow_logo.png" alt="" />Taskflow</a> <span class="m-breadcrumb">|</span> <a href="index.html" class="m-thin">QuickStart</a>
|
|
</span>
|
|
<div class="m-col-t-4 m-hide-m m-text-right m-nopadr">
|
|
<a href="#search" class="m-doc-search-icon" title="Search" onclick="return showSearch()"><svg style="height: 0.9rem;" viewBox="0 0 16 16">
|
|
<path id="m-doc-search-icon-path" d="m6 0c-3.31 0-6 2.69-6 6 0 3.31 2.69 6 6 6 1.49 0 2.85-0.541 3.89-1.44-0.0164 0.338 0.147 0.759 0.5 1.15l3.22 3.79c0.552 0.614 1.45 0.665 2 0.115 0.55-0.55 0.499-1.45-0.115-2l-3.79-3.22c-0.392-0.353-0.812-0.515-1.15-0.5 0.895-1.05 1.44-2.41 1.44-3.89 0-3.31-2.69-6-6-6zm0 1.56a4.44 4.44 0 0 1 4.44 4.44 4.44 4.44 0 0 1-4.44 4.44 4.44 4.44 0 0 1-4.44-4.44 4.44 4.44 0 0 1 4.44-4.44z"/>
|
|
</svg></a>
|
|
<a id="m-navbar-show" href="#navigation" title="Show navigation"></a>
|
|
<a id="m-navbar-hide" href="#" title="Hide navigation"></a>
|
|
</div>
|
|
<div id="m-navbar-collapse" class="m-col-t-12 m-show-m m-col-m-none m-right-m">
|
|
<div class="m-row">
|
|
<ol class="m-col-t-6 m-col-m-none">
|
|
<li><a href="pages.html">Handbook</a></li>
|
|
<li><a href="namespaces.html">Namespaces</a></li>
|
|
</ol>
|
|
<ol class="m-col-t-6 m-col-m-none" start="3">
|
|
<li><a href="annotated.html">Classes</a></li>
|
|
<li><a href="files.html">Files</a></li>
|
|
<li class="m-show-m"><a href="#search" class="m-doc-search-icon" title="Search" onclick="return showSearch()"><svg style="height: 0.9rem;" viewBox="0 0 16 16">
|
|
<use href="#m-doc-search-icon-path" />
|
|
</svg></a></li>
|
|
</ol>
|
|
</div>
|
|
</div>
|
|
</div>
|
|
</div>
|
|
</nav></header>
|
|
<main><article>
|
|
<div class="m-container m-container-inflatable">
|
|
<div class="m-row">
|
|
<div class="m-col-l-10 m-push-l-1">
|
|
<h1>
|
|
<span class="m-breadcrumb"><a href="cudaStandardAlgorithms.html">CUDA Standard Algorithms</a> »</span>
|
|
Parallel Reduction
|
|
</h1>
|
|
<nav class="m-block m-default">
|
|
<h3>Contents</h3>
|
|
<ul>
|
|
<li><a href="#CUDASTDParallelReductionIncludeTheHeader">Include the Header</a></li>
|
|
<li><a href="#CUDASTDReduceItemsWithAnInitialValue">Reduce a Range of Items with an Initial Value</a></li>
|
|
<li><a href="#CUDASTDReduceItemsWithoutAnInitialValue">Reduce a Range of Items without an Initial Value</a></li>
|
|
<li><a href="#CUDASTDReduceTransformedItemsWithAnInitialValue">Reduce a Range of Transformed Items with an Initial Value</a></li>
|
|
<li><a href="#CUDASTDReduceTransformedItemsWithoutAnInitialValue">Reduce a Range of Transformed Items without an Initial Value</a></li>
|
|
</ul>
|
|
</nav>
|
|
<p>Taskflow provides standard template methods for reducing a range of items on a CUDA GPU.</p><section id="CUDASTDParallelReductionIncludeTheHeader"><h2><a href="#CUDASTDParallelReductionIncludeTheHeader">Include the Header</a></h2><p>You need to include the header file, <code>taskflow/cuda/algorithm/reduce.hpp</code>, for using the parallel-reduction algorithm.</p><pre class="m-code"><span class="cp">#include</span><span class="w"> </span><span class="cpf"><taskflow/cuda/algorithm/reduce.hpp></span></pre></section><section id="CUDASTDReduceItemsWithAnInitialValue"><h2><a href="#CUDASTDReduceItemsWithAnInitialValue">Reduce a Range of Items with an Initial Value</a></h2><p><a href="namespacetf.html#a8a872d2a0ac73a676713cb5be5aa688c" class="m-doc">tf::<wbr />cuda_reduce</a> performs a parallel reduction over a range of elements specified by <code>[first, last)</code> using the binary operator <code>bop</code> and stores the reduced result in <code>result</code>. It represents the parallel execution of the following reduction loop on a GPU:</p><pre class="m-code"><span class="k">while</span><span class="w"> </span><span class="p">(</span><span class="n">first</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="n">last</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
|
|
<span class="w"> </span><span class="o">*</span><span class="n">result</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">bop</span><span class="p">(</span><span class="o">*</span><span class="n">result</span><span class="p">,</span><span class="w"> </span><span class="o">*</span><span class="n">first</span><span class="o">++</span><span class="p">);</span>
|
|
<span class="p">}</span></pre><p>The variable <code>result</code> participates in the reduction loop and must be initialized with an initial value. The following code performs a parallel reduction to sum all the numbers in the given range with an initial value <code>1000</code>:</p><pre class="m-code"><span class="k">const</span><span class="w"> </span><span class="kt">size_t</span><span class="w"> </span><span class="n">N</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1000000</span><span class="p">;</span>
|
|
<span class="kt">int</span><span class="o">*</span><span class="w"> </span><span class="n">res</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_shared</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="mi">1</span><span class="p">);</span><span class="w"> </span><span class="c1">// result</span>
|
|
<span class="kt">int</span><span class="o">*</span><span class="w"> </span><span class="n">vec</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_shared</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="n">N</span><span class="p">);</span><span class="w"> </span><span class="c1">// vector</span>
|
|
|
|
<span class="c1">// initializes the data</span>
|
|
<span class="o">*</span><span class="n">res</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1000</span><span class="p">;</span>
|
|
<span class="k">for</span><span class="p">(</span><span class="kt">size_t</span><span class="w"> </span><span class="n">i</span><span class="o">=</span><span class="mi">0</span><span class="p">;</span><span class="w"> </span><span class="n">i</span><span class="o"><</span><span class="n">N</span><span class="p">;</span><span class="w"> </span><span class="n">i</span><span class="o">++</span><span class="p">)</span><span class="w"> </span>
|
|
<span class="w"> </span><span class="n">vec</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">i</span><span class="p">;</span>
|
|
<span class="p">}</span><span class="w"> </span>
|
|
|
|
<span class="c1">// create an execution policy</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cudaStream</span><span class="w"> </span><span class="n">stream</span><span class="p">;</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cudaDefaultExecutionPolicy</span><span class="w"> </span><span class="nf">policy</span><span class="p">(</span><span class="n">stream</span><span class="p">);</span>
|
|
|
|
<span class="c1">// queries the required buffer size to reduce N elements using the given policy</span>
|
|
<span class="k">auto</span><span class="w"> </span><span class="n">bytes</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">policy</span><span class="p">.</span><span class="n">reduce_bufsz</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="n">N</span><span class="p">);</span>
|
|
<span class="k">auto</span><span class="w"> </span><span class="n">buffer</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_device</span><span class="o"><</span><span class="n">std</span><span class="o">::</span><span class="n">byte</span><span class="o">></span><span class="p">(</span><span class="n">bytes</span><span class="p">);</span>
|
|
|
|
<span class="c1">// *res = 1000 + (0 + 1 + 2 + 3 + 4 + ... + N-1)</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cuda_reduce</span><span class="p">(</span><span class="n">policy</span><span class="p">,</span>
|
|
<span class="w"> </span><span class="n">vec</span><span class="p">,</span><span class="w"> </span><span class="n">vec</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">N</span><span class="p">,</span><span class="w"> </span><span class="n">res</span><span class="p">,</span><span class="w"> </span><span class="p">[]</span><span class="w"> </span><span class="n">__device__</span><span class="w"> </span><span class="p">(</span><span class="kt">int</span><span class="w"> </span><span class="n">a</span><span class="p">,</span><span class="w"> </span><span class="kt">int</span><span class="w"> </span><span class="n">b</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">b</span><span class="p">;</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="n">buffer</span>
|
|
<span class="p">);</span>
|
|
|
|
<span class="c1">// synchronize the execution</span>
|
|
<span class="n">stream</span><span class="p">.</span><span class="n">synchronize</span><span class="p">();</span>
|
|
|
|
<span class="c1">// delete the memory</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">buffer</span><span class="p">);</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">res</span><span class="p">);</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">vec</span><span class="p">);</span></pre><p>The reduce algorithm runs <em>asynchronously</em> through the stream specified in the execution policy. You need to synchronize the stream to obtain correct results. Since the GPU reduction algorithm may require extra buffer to store the temporary results, you need to provide a buffer of size at least larger or equal to the value returned from <code><a href="classtf_1_1cudaExecutionPolicy.html#a446cee95bb839ee180052059e2ad7fd6" class="m-doc">tf::<wbr />cudaDefaultExecutionPolicy::<wbr />reduce_bufsz</a></code>.</p><aside class="m-note m-warning"><h4>Attention</h4><p>You must keep the buffer alive before the reduction completes.</p></aside></section><section id="CUDASTDReduceItemsWithoutAnInitialValue"><h2><a href="#CUDASTDReduceItemsWithoutAnInitialValue">Reduce a Range of Items without an Initial Value</a></h2><p><a href="namespacetf.html#a492e8410db032a0273a99dd905486161" class="m-doc">tf::<wbr />cuda_uninitialized_reduce</a> performs a parallel reduction over a range of items without an initial value. This method represents a parallel execution of the following reduction loop on a GPU:</p><pre class="m-code"><span class="o">*</span><span class="n">result</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="o">*</span><span class="n">first</span><span class="o">++</span><span class="p">;</span><span class="w"> </span><span class="c1">// no initial values to participate in the reduction loop</span>
|
|
<span class="k">while</span><span class="w"> </span><span class="p">(</span><span class="n">first</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="n">last</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
|
|
<span class="w"> </span><span class="o">*</span><span class="n">result</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">bop</span><span class="p">(</span><span class="o">*</span><span class="n">result</span><span class="p">,</span><span class="w"> </span><span class="o">*</span><span class="n">first</span><span class="o">++</span><span class="p">);</span>
|
|
<span class="p">}</span></pre><p>The variable <code>result</code> is directly assigned the reduced value without any initial value participating in the reduction loop. The following code performs a parallel reduction to sum all the numbers in the given range without any initial value:</p><pre class="m-code"><span class="k">const</span><span class="w"> </span><span class="kt">size_t</span><span class="w"> </span><span class="n">N</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1000000</span><span class="p">;</span>
|
|
<span class="kt">int</span><span class="o">*</span><span class="w"> </span><span class="n">res</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_shared</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="mi">1</span><span class="p">);</span><span class="w"> </span><span class="c1">// result</span>
|
|
<span class="kt">int</span><span class="o">*</span><span class="w"> </span><span class="n">vec</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_shared</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="n">N</span><span class="p">);</span><span class="w"> </span><span class="c1">// vector</span>
|
|
|
|
<span class="c1">// initializes the data</span>
|
|
<span class="k">for</span><span class="p">(</span><span class="kt">size_t</span><span class="w"> </span><span class="n">i</span><span class="o">=</span><span class="mi">0</span><span class="p">;</span><span class="w"> </span><span class="n">i</span><span class="o"><</span><span class="n">N</span><span class="p">;</span><span class="w"> </span><span class="n">i</span><span class="o">++</span><span class="p">)</span><span class="w"> </span>
|
|
<span class="w"> </span><span class="n">vec</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">i</span><span class="p">;</span>
|
|
<span class="p">}</span><span class="w"> </span>
|
|
|
|
<span class="c1">// create an execution policy</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cudaStream</span><span class="w"> </span><span class="n">stream</span><span class="p">;</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cudaDefaultExecutionPolicy</span><span class="w"> </span><span class="nf">policy</span><span class="p">(</span><span class="n">stream</span><span class="p">);</span>
|
|
|
|
<span class="c1">// queries the required buffer size to reduce N elements using the given policy</span>
|
|
<span class="k">auto</span><span class="w"> </span><span class="n">bytes</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">policy</span><span class="p">.</span><span class="n">reduce_bufsz</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="n">N</span><span class="p">);</span>
|
|
<span class="k">auto</span><span class="w"> </span><span class="n">buffer</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_device</span><span class="o"><</span><span class="n">std</span><span class="o">::</span><span class="n">byte</span><span class="o">></span><span class="p">(</span><span class="n">bytes</span><span class="p">);</span>
|
|
|
|
<span class="c1">// *res = 0 + 1 + 2 + 3 + 4 + ... + N-1</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cuda_uninitialized_reduce</span><span class="p">(</span><span class="n">policy</span><span class="p">,</span>
|
|
<span class="w"> </span><span class="n">vec</span><span class="p">,</span><span class="w"> </span><span class="n">vec</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">N</span><span class="p">,</span><span class="w"> </span><span class="n">res</span><span class="p">,</span><span class="w"> </span><span class="p">[]</span><span class="w"> </span><span class="n">__device__</span><span class="w"> </span><span class="p">(</span><span class="kt">int</span><span class="w"> </span><span class="n">a</span><span class="p">,</span><span class="w"> </span><span class="kt">int</span><span class="w"> </span><span class="n">b</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">b</span><span class="p">;</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="n">buffer</span>
|
|
<span class="p">);</span>
|
|
|
|
<span class="c1">// synchronize the execution</span>
|
|
<span class="n">stream</span><span class="p">.</span><span class="n">synchronize</span><span class="p">();</span>
|
|
|
|
<span class="c1">// delete the buffer</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">res</span><span class="p">);</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">vec</span><span class="p">);</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">buffer</span><span class="p">);</span></pre></section><section id="CUDASTDReduceTransformedItemsWithAnInitialValue"><h2><a href="#CUDASTDReduceTransformedItemsWithAnInitialValue">Reduce a Range of Transformed Items with an Initial Value</a></h2><p><a href="namespacetf.html#a4463d06240d608bc31d8b3546a851e4e" class="m-doc">tf::<wbr />cuda_transform_reduce</a> performs a parallel reduction over a range of <em>transformed</em> elements specified by <code>[first, last)</code> using a binary reduce operator <code>bop</code> and a unary transform operator <code>uop</code>. It represents the parallel execution of the following reduction loop on a GPU:</p><pre class="m-code"><span class="k">while</span><span class="w"> </span><span class="p">(</span><span class="n">first</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="n">last</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
|
|
<span class="w"> </span><span class="o">*</span><span class="n">result</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">bop</span><span class="p">(</span><span class="o">*</span><span class="n">result</span><span class="p">,</span><span class="w"> </span><span class="n">uop</span><span class="p">(</span><span class="o">*</span><span class="n">first</span><span class="o">++</span><span class="p">));</span>
|
|
<span class="p">}</span></pre><p>The variable <code>result</code> participates in the reduction loop and must be initialized with an initial value. The following code performs a parallel reduction to sum all the transformed numbers multiplied by <code>10</code> in the given range with an initial value <code>1000</code>:</p><pre class="m-code"><span class="k">const</span><span class="w"> </span><span class="kt">size_t</span><span class="w"> </span><span class="n">N</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1000000</span><span class="p">;</span>
|
|
<span class="kt">int</span><span class="o">*</span><span class="w"> </span><span class="n">res</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_shared</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="mi">1</span><span class="p">);</span><span class="w"> </span><span class="c1">// result</span>
|
|
<span class="kt">int</span><span class="o">*</span><span class="w"> </span><span class="n">vec</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_shared</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="n">N</span><span class="p">);</span><span class="w"> </span><span class="c1">// vector</span>
|
|
|
|
<span class="c1">// initializes the data</span>
|
|
<span class="o">*</span><span class="n">res</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1000</span><span class="p">;</span>
|
|
<span class="k">for</span><span class="p">(</span><span class="kt">size_t</span><span class="w"> </span><span class="n">i</span><span class="o">=</span><span class="mi">0</span><span class="p">;</span><span class="w"> </span><span class="n">i</span><span class="o"><</span><span class="n">N</span><span class="p">;</span><span class="w"> </span><span class="n">i</span><span class="o">++</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
|
|
<span class="w"> </span><span class="n">vec</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">i</span><span class="p">;</span>
|
|
<span class="p">}</span><span class="w"> </span>
|
|
|
|
<span class="c1">// create an execution policy</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cudaStream</span><span class="w"> </span><span class="n">stream</span><span class="p">;</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cudaDefaultExecutionPolicy</span><span class="w"> </span><span class="nf">policy</span><span class="p">(</span><span class="n">stream</span><span class="p">);</span>
|
|
|
|
<span class="c1">// queries the required buffer size to reduce N elements using the given policy</span>
|
|
<span class="k">auto</span><span class="w"> </span><span class="n">bytes</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">policy</span><span class="p">.</span><span class="n">reduce_bufsz</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="n">N</span><span class="p">);</span>
|
|
<span class="k">auto</span><span class="w"> </span><span class="n">buffer</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_device</span><span class="o"><</span><span class="n">std</span><span class="o">::</span><span class="n">byte</span><span class="o">></span><span class="p">(</span><span class="n">bytes</span><span class="p">);</span>
|
|
|
|
<span class="c1">// *res = 1000 + (0*10 + 1*10 + 2*10 + 3*10 + 4*10 + ... + (N-1)*10)</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cuda_transform_reduce</span><span class="p">(</span><span class="n">policy</span><span class="p">,</span>
|
|
<span class="w"> </span><span class="n">vec</span><span class="p">,</span><span class="w"> </span><span class="n">vec</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">N</span><span class="p">,</span><span class="w"> </span><span class="n">res</span><span class="p">,</span><span class="w"> </span>
|
|
<span class="w"> </span><span class="p">[]</span><span class="w"> </span><span class="n">__device__</span><span class="w"> </span><span class="p">(</span><span class="kt">int</span><span class="w"> </span><span class="n">a</span><span class="p">,</span><span class="w"> </span><span class="kt">int</span><span class="w"> </span><span class="n">b</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">b</span><span class="p">;</span><span class="w"> </span><span class="p">},</span>
|
|
<span class="w"> </span><span class="p">[]</span><span class="w"> </span><span class="n">__device__</span><span class="w"> </span><span class="p">(</span><span class="kt">int</span><span class="w"> </span><span class="n">a</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="n">a</span><span class="o">*</span><span class="mi">10</span><span class="p">;</span><span class="w"> </span><span class="p">},</span>
|
|
<span class="w"> </span><span class="n">buffer</span>
|
|
<span class="p">);</span>
|
|
|
|
<span class="c1">// synchronize the execution</span>
|
|
<span class="n">stream</span><span class="p">.</span><span class="n">synchronize</span><span class="p">();</span>
|
|
|
|
<span class="c1">// delete the buffer</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">res</span><span class="p">);</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">vec</span><span class="p">);</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">buffer</span><span class="p">);</span></pre></section><section id="CUDASTDReduceTransformedItemsWithoutAnInitialValue"><h2><a href="#CUDASTDReduceTransformedItemsWithoutAnInitialValue">Reduce a Range of Transformed Items without an Initial Value</a></h2><p>tf::cuda_transform_uninitialized_reduce performs a parallel reduction over a range of transformed items without an initial value. This method represents a parallel execution of the following reduction loop on a GPU:</p><pre class="m-code"><span class="o">*</span><span class="n">result</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="o">*</span><span class="n">first</span><span class="o">++</span><span class="p">;</span><span class="w"> </span><span class="c1">// no initial values to participate in the reduction loop</span>
|
|
<span class="k">while</span><span class="w"> </span><span class="p">(</span><span class="n">first</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="n">last</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
|
|
<span class="w"> </span><span class="o">*</span><span class="n">result</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">bop</span><span class="p">(</span><span class="o">*</span><span class="n">result</span><span class="p">,</span><span class="w"> </span><span class="n">uop</span><span class="p">(</span><span class="o">*</span><span class="n">first</span><span class="o">++</span><span class="p">));</span>
|
|
<span class="p">}</span></pre><p>The variable <code>result</code> is directly assigned the reduced value without any initial value participating in the reduction loop. The following code performs a parallel reduction to sum all the transformed numbers multiplied by <code>10</code> in the given range without any initial value:</p><pre class="m-code"><span class="k">const</span><span class="w"> </span><span class="kt">size_t</span><span class="w"> </span><span class="n">N</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1000000</span><span class="p">;</span>
|
|
<span class="kt">int</span><span class="o">*</span><span class="w"> </span><span class="n">res</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_shared</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="mi">1</span><span class="p">);</span><span class="w"> </span><span class="c1">// result</span>
|
|
<span class="kt">int</span><span class="o">*</span><span class="w"> </span><span class="n">vec</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_shared</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="n">N</span><span class="p">);</span><span class="w"> </span><span class="c1">// vector</span>
|
|
|
|
<span class="c1">// initializes the data</span>
|
|
<span class="k">for</span><span class="p">(</span><span class="kt">size_t</span><span class="w"> </span><span class="n">i</span><span class="o">=</span><span class="mi">0</span><span class="p">;</span><span class="w"> </span><span class="n">i</span><span class="o"><</span><span class="n">N</span><span class="p">;</span><span class="w"> </span><span class="n">i</span><span class="o">++</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
|
|
<span class="w"> </span><span class="n">vec</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">i</span><span class="p">;</span>
|
|
<span class="p">}</span><span class="w"> </span>
|
|
|
|
<span class="c1">// create an execution policy</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cudaStream</span><span class="w"> </span><span class="n">stream</span><span class="p">;</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cudaDefaultExecutionPolicy</span><span class="w"> </span><span class="nf">policy</span><span class="p">(</span><span class="n">stream</span><span class="p">);</span>
|
|
|
|
<span class="c1">// queries the required buffer size to reduce N elements using the given policy</span>
|
|
<span class="k">auto</span><span class="w"> </span><span class="n">bytes</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">policy</span><span class="p">.</span><span class="n">reduce_bufsz</span><span class="o"><</span><span class="kt">int</span><span class="o">></span><span class="p">(</span><span class="n">N</span><span class="p">);</span>
|
|
<span class="k">auto</span><span class="w"> </span><span class="n">buffer</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tf</span><span class="o">::</span><span class="n">cuda_malloc_device</span><span class="o"><</span><span class="n">std</span><span class="o">::</span><span class="n">byte</span><span class="o">></span><span class="p">(</span><span class="n">bytes</span><span class="p">);</span>
|
|
|
|
<span class="c1">// *res = 0*10 + 1*10 + 2*10 + 3*10 + 4*10 + ... + (N-1)*10</span>
|
|
<span class="n">tf</span><span class="o">::</span><span class="n">cuda_uninitialized_reduce</span><span class="p">(</span><span class="n">policy</span><span class="p">,</span>
|
|
<span class="w"> </span><span class="n">vec</span><span class="p">,</span><span class="w"> </span><span class="n">vec</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">N</span><span class="p">,</span><span class="w"> </span><span class="n">res</span><span class="p">,</span><span class="w"> </span>
|
|
<span class="w"> </span><span class="p">[]</span><span class="w"> </span><span class="n">__device__</span><span class="w"> </span><span class="p">(</span><span class="kt">int</span><span class="w"> </span><span class="n">a</span><span class="p">,</span><span class="w"> </span><span class="kt">int</span><span class="w"> </span><span class="n">b</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">b</span><span class="p">;</span><span class="w"> </span><span class="p">},</span>
|
|
<span class="w"> </span><span class="p">[]</span><span class="w"> </span><span class="n">__device__</span><span class="w"> </span><span class="p">(</span><span class="kt">int</span><span class="w"> </span><span class="n">a</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="n">a</span><span class="o">*</span><span class="mi">10</span><span class="p">;</span><span class="w"> </span><span class="p">},</span>
|
|
<span class="w"> </span><span class="n">buffer</span>
|
|
<span class="p">);</span>
|
|
|
|
<span class="c1">// synchronize the execution</span>
|
|
<span class="n">stream</span><span class="p">.</span><span class="n">synchronize</span><span class="p">();</span>
|
|
|
|
<span class="c1">// delete the data</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">res</span><span class="p">);</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">vec</span><span class="p">);</span>
|
|
<span class="n">cudaFree</span><span class="p">(</span><span class="n">buffer</span><span class="p">);</span></pre></section>
|
|
</div>
|
|
</div>
|
|
</div>
|
|
</article></main>
|
|
<div class="m-doc-search" id="search">
|
|
<a href="#!" onclick="return hideSearch()"></a>
|
|
<div class="m-container">
|
|
<div class="m-row">
|
|
<div class="m-col-m-8 m-push-m-2">
|
|
<div class="m-doc-search-header m-text m-small">
|
|
<div><span class="m-label m-default">Tab</span> / <span class="m-label m-default">T</span> to search, <span class="m-label m-default">Esc</span> to close</div>
|
|
<div id="search-symbolcount">…</div>
|
|
</div>
|
|
<div class="m-doc-search-content">
|
|
<form>
|
|
<input type="search" name="q" id="search-input" placeholder="Loading …" disabled="disabled" autofocus="autofocus" autocomplete="off" spellcheck="false" />
|
|
</form>
|
|
<noscript class="m-text m-danger m-text-center">Unlike everything else in the docs, the search functionality <em>requires</em> JavaScript.</noscript>
|
|
<div id="search-help" class="m-text m-dim m-text-center">
|
|
<p class="m-noindent">Search for symbols, directories, files, pages or
|
|
modules. You can omit any prefix from the symbol or file path; adding a
|
|
<code>:</code> or <code>/</code> suffix lists all members of given symbol or
|
|
directory.</p>
|
|
<p class="m-noindent">Use <span class="m-label m-dim">↓</span>
|
|
/ <span class="m-label m-dim">↑</span> to navigate through the list,
|
|
<span class="m-label m-dim">Enter</span> to go.
|
|
<span class="m-label m-dim">Tab</span> autocompletes common prefix, you can
|
|
copy a link to the result using <span class="m-label m-dim">⌘</span>
|
|
<span class="m-label m-dim">L</span> while <span class="m-label m-dim">⌘</span>
|
|
<span class="m-label m-dim">M</span> produces a Markdown link.</p>
|
|
</div>
|
|
<div id="search-notfound" class="m-text m-warning m-text-center">Sorry, nothing was found.</div>
|
|
<ul id="search-results"></ul>
|
|
</div>
|
|
</div>
|
|
</div>
|
|
</div>
|
|
</div>
|
|
<script src="search-v2.js"></script>
|
|
<script src="searchdata-v2.js" async="async"></script>
|
|
<footer><nav>
|
|
<div class="m-container">
|
|
<div class="m-row">
|
|
<div class="m-col-l-10 m-push-l-1">
|
|
<p>Taskflow handbook is part of the <a href="https://taskflow.github.io">Taskflow project</a>, copyright © <a href="https://tsung-wei-huang.github.io/">Dr. Tsung-Wei Huang</a>, 2018–2024.<br />Generated by <a href="https://doxygen.org/">Doxygen</a> 1.9.1 and <a href="https://mcss.mosra.cz/">m.css</a>.</p>
|
|
</div>
|
|
</div>
|
|
</div>
|
|
</nav></footer>
|
|
</body>
|
|
</html>
|