The Bit Anvil

Waiting for multiple objects

There are times in code when you would like to wait for more than one thing to happen in a thread. One common case is where you have some work to do, so you put this work on a queue. Sooner or later you need to shut this work-processing thread down. The scenario is therefore to wait either for more incoming work items via the queue, or to be told to shut down.

If you are on a Windows-based platform you will probably be able to use WaitForMultipleObjects to achieve this. But on other platforms this sort of functionality isn't available. You might also find, even on Windows, that you cannot use WaitForMultipleObjects because the thing you are trying to wait for isn't a handle that works with that function.

Here are some solutions to the seemingly quite awkward problem of waiting for more than one thing to become ready in a thread. They rely on a change of tack: wait on only a single thing, and build mechanisms around that. The following examples are in Python, as it is generally easy to read; the same or a similar solution can be used in other languages.

Some people want to kill such worker threads using a thread-kill function. In general this isn't the best solution, because undesirable things may happen, for example:

  • The thread may not release a lock on some object it is holding because it is abruptly killed.
  • The thread may not be able to release resources that it should, causing:
    • A memory leak
    • A resource to become inaccessible
  • Something else you hadn't thought of

However, you may want to do this if it is difficult or impossible to organise a clean thread shutdown. Programs, sadly, have to work, and you do not always have the time to make the behind-the-scenes machinery shiny. In the main, though, proper thread shutdown procedures should be the general aim.

Use a sentinel message in a queue

This solution works with processing queues. You can put a sentinel value onto the queue that is not a work item, but instead an instruction for the worker thread to stop processing. This is probably one of the best ways of dealing with the queue-or-quit processing scenario. In the code below a sentinel value of None is used to signal queue exit, but you might find it better to put a specific value on the queue instead of None - for example an instance of a class ExitProcessingLoop; you can then test each message with if isinstance(msg, ExitProcessingLoop): break

Note - using isinstance is sometimes considered poor design practice, and you may wish to use object-oriented patterns, such as the visitor pattern, or whatever is preferable in your implementation language.

```python
def run(self):
    while True:
        msg = self.main_queue.get()
        if msg is None:
            break
        # ...do queue work...

def shutdown(self):
    self.main_queue.put(None)
```
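The class-based sentinel mentioned above might be sketched as follows; the ExitProcessingLoop name comes from the text, while the worker and its doubling "work" are illustrative stand-ins:

```python
import queue
import threading

class ExitProcessingLoop:
    """Sentinel placed on the queue to tell the worker to stop."""
    pass

def worker(q, results):
    while True:
        msg = q.get()
        if isinstance(msg, ExitProcessingLoop):
            break  # the sentinel is an instruction, not a work item
        results.append(msg * 2)  # stand-in for real queue work

q = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(q, results))
t.start()
for i in range(3):
    q.put(i)
q.put(ExitProcessingLoop())  # request a clean shutdown
t.join()
print(results)  # [0, 2, 4]
```

Because the sentinel travels through the same queue as the work, the worker finishes everything queued before it and then exits cleanly.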

Python: Don't rely on the Queue's built-in locking

Python queues provide thread-safe usage by means of a lock around a queue container, as well as an internal condition variable to signal when queue items become ready. This is handy, but it does not solve the problem of shutting down the thread. So here is another way: create your own event instead of relying on the queue's internal signalling.

This has a slight disadvantage in that you cannot simply pass the queue on to some other part of the code to have it filled by a producer. You have to call the add_to_queue function so that the proper sequence of actions is put into motion: add to the queue, then signal that something is on the queue. There is a solution for this too, which works in most languages: create a wrapper class that has the same API as Queue and put all the functionality into that class; you can then pass this class around to users.

Note: Some people may suggest re-using the Python Queue's internal signalling object by patching the methods within; this can be made to work, but do you really want to rely on the internals of the Python Queue object, which aren't really part of the public API? I would prefer to wrap around it instead.

```python
import queue

def add_to_queue(self, msg):
    self.main_queue.put(msg)
    self.my_event.set()  # Signal the consumer

def shutdown(self):
    self.quit_event.set()  # Set an event to indicate exit
    self.my_event.set()    # Wake the consumer so it notices

def run(self):
    while True:
        self.my_event.wait()
        self.my_event.clear()  # Clear first so a put during draining re-signals
        if self.quit_event.is_set():  # Is it time to quit?
            break
        while True:
            try:
                msg = self.main_queue.get(False)
            except queue.Empty:
                break  # Drained everything that was signalled
            # ...do queue work...
```
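The wrapper-class idea can be sketched like this; SignallingQueue and its method names are illustrative, not a standard API. The consumer checks the quit flag before draining, so anything put on the queue before shutdown() is still processed:

```python
import queue
import threading

class SignallingQueue:
    """Illustrative Queue-like wrapper that owns its own signalling events."""
    def __init__(self):
        self._queue = queue.Queue()
        self.ready = threading.Event()  # set whenever something happens
        self.quit = threading.Event()   # set when shutdown is requested

    def put(self, msg):
        self._queue.put(msg)
        self.ready.set()  # signal the consumer

    def get_nowait(self):
        return self._queue.get(False)

    def shutdown(self):
        self.quit.set()
        self.ready.set()  # wake the consumer so it notices

def run(sq, out):
    while True:
        sq.ready.wait()
        sq.ready.clear()  # clear first so a put during draining re-signals
        quitting = sq.quit.is_set()  # snapshot before draining: anything
                                     # put before shutdown() is already queued
        while True:
            try:
                out.append(sq.get_nowait())
            except queue.Empty:
                break
        if quitting:
            break

sq = SignallingQueue()
out = []
t = threading.Thread(target=run, args=(sq, out))
t.start()
for item in ("a", "b", "c"):
    sq.put(item)
sq.shutdown()
t.join()
print(out)  # ['a', 'b', 'c']
```

Producers only ever see the wrapper, so the put-then-signal sequence can never be done in the wrong order.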

Using signals to stop a thread

On GNU/Linux you can use the pthread_kill function call to direct a signal at a specific thread (note that a fatal signal will terminate the whole process, not just that thread). There is no opportunity to shut the thread down gracefully in this instance. This is the quick-and-dirty method some people may wish to try. See below for a better method.

Using pthread cancel to end a thread

On GNU/Linux you can use pthread_cancel to request thread cancellation, and pthread_cleanup_push to add a cleanup handler that is called when the thread terminates. It requires care because the thread may still be cancelled at a time that you do not expect. This is a better solution than the above; however, it might still be better to use the concept of a single wait object / queue, because this method isn't portable.

Spinning

This method is generally not desirable, but see the notes at the end. Instead of waiting forever for an event to be signalled, use a timeout to abort the wait after a specific time. In Python you might use wait(0); on Linux, sem_trywait if you are using a semaphore. e.g.

```python
import time

while True:
    ready = ready_event.wait(0)
    finish = finish_event.wait(0)
    if finish:
        break
    elif ready:
        pass  # ...do work...
    else:
        time.sleep(1)  # Nothing happened, so go to sleep
```

This results in 'spinning', where the thread wakes up frequently to check whether the events have been signalled. The only time this is reasonable is when either:

  • You have work to do at fixed intervals
  • You use a longer delay of several seconds, and you don't mind if the queue becomes ready but you do not deal with it straight away. There are certain types of problem that are like this - for example, lazy updates of statistics.
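A gentler variant of the loop above, sketched here with illustrative names, puts the delay inside the event wait itself: wait(timeout) sleeps until either the interval elapses or the finish event fires, so shutdown is noticed within one interval at most and there is no separate sleep call to sit through.

```python
import threading
import time

finish_event = threading.Event()
ticks = []

def worker():
    # wait(timeout) returns True as soon as finish_event is set,
    # or False after the interval elapses - our cue to do interval work.
    while not finish_event.wait(timeout=0.05):
        ticks.append(time.monotonic())  # stand-in for fixed-interval work

t = threading.Thread(target=worker)
t.start()
time.sleep(0.2)
finish_event.set()  # request shutdown; the worker exits within ~0.05s
t.join()
print(len(ticks))
```

This suits the fixed-interval case in the first bullet while keeping shutdown latency bounded by the interval.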

Using multiple threads

In this scenario, which also has cross-platform applicability, you would:

  1. Create a master event / semaphore
    1. Create a thread to monitor this master event
  2. Create a thread for each object you wish to wait for
    1. In this per object thread, wait for the object to become signalled.
    2. Signal the master event / semaphore when the waited for object is signalled
  3. The thread monitoring the master event / semaphore can then implement a wait for all / wait for single idiom. You would then either wait for the master monitoring thread to complete (using pthread_join, for example), or perhaps use yet another event object in the master thread to signal completion.

All this is wasteful, and it is a big hammer for a simple job. It does, however, mean that you can implement the WaitForMultipleObjects functionality in a generic way by wrapping all of the above up in a small set of functions, so long as you have access to a threading and locking API.
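The numbered steps above can be sketched in Python; wait_any and the helper names are illustrative. Each per-object thread blocks on one object and releases the master semaphore when it fires, so a single acquire() on the semaphore is a wait-for-any, and acquiring N times would be a wait-for-all:

```python
import threading

def wait_any(events, timeout=None):
    """Illustrative sketch: block until any of `events` is signalled."""
    master = threading.Semaphore(0)  # step 1: the master semaphore

    def waiter(ev):
        ev.wait()         # step 2.1: wait for one object to become signalled
        master.release()  # step 2.2: signal the master when it fires

    for ev in events:
        threading.Thread(target=waiter, args=(ev,), daemon=True).start()
    # Step 3: one acquire implements wait-for-single; acquiring
    # len(events) times would implement wait-for-all.
    return master.acquire(timeout=timeout)

e1, e2 = threading.Event(), threading.Event()
e2.set()
print(wait_any([e1, e2]))                           # True: e2 already fired
print(wait_any([threading.Event()], timeout=0.05))  # False: nothing fired
```

Note the waste the text warns about: the waiter threads for objects that never fire linger as daemon threads, which is exactly why this is a big hammer.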
