Shutting down dormant threads
Often work items arrive in batches: the thread pool gets busy, expands, services all of the work items, and then becomes less busy. At this point the pool contains threads that aren't being used but are still consuming resources. These dormant threads can safely be shut down, since the pool can expand again as load increases. The question is: how do we decide when to shut some threads down?
The maintenance thread that handles our blocking dispatch also checks for dormant threads. At a configurable interval it runs an algorithm to decide whether it should shut some threads down. The current algorithm is as follows:
void CThreadPool::HandleDormantThreads()
{
   if ((size_t)m_activeThreads > m_minThreads)
   {
      const size_t dormantThreads = m_activeThreads - m_processingThreads;

      if (dormantThreads > m_maxDormantThreads)
      {
         const size_t threadsToShutdown = (dormantThreads - m_maxDormantThreads) / 2 + 1;

         StopWorkerThreads(threadsToShutdown);
      }
   }
}
If we have more threads than the minimum number we're allowed to have, we work out how many threads aren't currently processing work items. If that number exceeds the number of dormant threads that we're allowed to have, we shut down half of the excess (rounded down) plus one, so at least one thread is always stopped. Stopping worker threads is a simple case of posting an IO completion key of 0 to the work port for each worker thread that we want to shut down.
Doing the work
We now have a thread pool that fulfills our requirements: automatic expansion and contraction depending upon load, and non-blocking dispatch for users. The remaining task is to allow the derived class to provide its own WorkerThread class to do the work. The worker thread class must implement the following interface:
virtual bool Initialise();

virtual void Process(
   ULONG_PTR completionKey,
   DWORD dwNumBytes,
   OVERLAPPED *pOverlapped) = 0;

virtual void Shutdown();
virtual void Shutdown();
Initialise() is called when the worker thread is first created, Shutdown() is called when the thread is terminating, and Process() is called once for each work item.