tf::PartitionerBase (defined in taskflow/algorithm/partitioner.hpp; see also tf::IsPartitioner)

template <typename C = DefaultClosureWrapper>
class PartitionerBase;

Member types:
- using closure_wrapper_type = C : the closure wrapper type

Data members:
- size_t _chunk_size {0} : chunk size
- C _closure_wrapper : closure wrapper

Member functions:
- PartitionerBase() = default : default constructor
- PartitionerBase(size_t chunk_size) : construct a partitioner with the given chunk size
- PartitionerBase(size_t chunk_size, C&& closure_wrapper) : construct a partitioner with the given chunk size and closure wrapper
- size_t chunk_size() const : query the chunk size of this partitioner
- void chunk_size(size_t cz) : update the chunk size of this partitioner
- const C& closure_wrapper() const : acquire immutable access to the closure wrapper object
- template <typename F> void closure_wrapper(F&& fn) : modify the closure wrapper object

PartitionerBase is the class to derive a partitioner for scheduling parallel algorithms, where C is the closure wrapper type. The class provides base methods to derive a partitioner that can be used to schedule parallel iterations (e.g., tf::Taskflow::for_each). A partitioner defines the scheduling method for running parallel algorithms, such as tf::Taskflow::for_each, tf::Taskflow::reduce, and so on.
By default, we provide the following partitioners:
- tf::GuidedPartitioner enables the guided scheduling algorithm of adaptive chunk size
- tf::DynamicPartitioner enables the dynamic scheduling algorithm of equal chunk size
- tf::StaticPartitioner enables the static scheduling algorithm of static chunk size
- tf::RandomPartitioner enables the random scheduling algorithm of random chunk size

Depending on the application, the partitioning algorithm can significantly impact performance. For example, if a parallel-iteration workload contains a regular work unit per iteration, tf::StaticPartitioner can deliver the best performance. On the other hand, if the work unit per iteration is irregular and unbalanced, tf::GuidedPartitioner or tf::DynamicPartitioner can outperform tf::StaticPartitioner. In most situations, tf::GuidedPartitioner delivers decent performance and is thus used as our default partitioner.

Giving a partition size of 0 lets the Taskflow runtime automatically determine the partition size for the given partitioner.

In addition to the partition size, the application can specify a closure wrapper for a partitioner. A closure wrapper allows the application to wrap a partitioned task (i.e., closure) with a custom function object that performs additional work. For example:

std::atomic<int> count = 0;
tf::Taskflow taskflow;
taskflow.for_each_index(0, 100, 1,
  [](int i) {
    printf("%d\n", i);
  },
  tf::StaticPartitioner(0, [](auto&& closure) {
    // do something before invoking the partitioned task
    // ...
    // invoke the partitioned task
    closure();
    // do something else after invoking the partitioned task
    // ...
  })
);
executor.run(taskflow).wait();

The default closure wrapper (tf::DefaultClosureWrapper) does nothing but invoke the partitioned task (closure).