Shared threadpool

All C++ code is urged to use the centralized threadpool rather than create its own threadpool instances or (in almost all cases) run separate threads. To use it, #include "AutoThreadPool.h" (which is usually pulled in via scripter.h), and then simply:

#include "../tools/AutoThreadPool.h"

// execute a 'task' in the default automatic execution pool:
async([&]() {
   // put non-blocking code here
});

// And if we need to run blocking code in our worker:

async([&]() {
    Blocking;
    ifstream in("test");
    auto result = static_cast<std::stringstream const&>(std::stringstream() << in.rdbuf()).str();
    // ... use the just-loaded file
});

Note that Blocking tells the system that the enclosing block, from this point on, intends to perform blocking operations, so the system enlarges the execution pool until the blocking block completes. This way the pool uses only as many threads as the hardware can run in parallel, plus however many the blocking operations require, which optimizes parallel execution.

Using countdown latch

The Latch class makes it easy to synchronize the execution of several tasks when the number of such parallel processes is known in advance, for example:

#include "../tools/latch.h"
#include "../tools/AutoThreadPool.h"

Latch readyLatch(2);
vector<byte> hashPart2;
vector<byte> hashPart3;

async([&](){
    hashPart2 = calculateSha2(data);    
    readyLatch.countDown();
});
async([&](){
    hashPart3 = calculateSha3(data);    
    readyLatch.countDown();
});
vector<byte> hash = calculateGost(data);

// now we must be sure the other 2 parts are ready:
readyLatch.wait();

// so we can use results:
hash.reserve(hash.size() + hashPart2.size() + hashPart3.size());
hash.insert(hash.end(), hashPart2.begin(), hashPart2.end());
hash.insert(hash.end(), hashPart3.begin(), hashPart3.end());

// we have calculated the 3-part hash in parallel!
somehowUse(hash);