Locks are synchronization primitives that define a region of code which only a single thread can execute at any given time. Locks are also known as mutexes (mutual exclusions), although that term can have slightly different meanings on different operating systems. A typical use is to protect a global variable against simultaneous writes from different threads. For example, this code might be called from multiple threads (a pthreads mutex stands in for the generic lock here):
#include <pthread.h>

static pthread_mutex_t val_mutex = PTHREAD_MUTEX_INITIALIZER;  /* protects val */
static int val = 0;

pthread_mutex_lock(&val_mutex);    /* only one thread can pass this point at a time */
val++;
pthread_mutex_unlock(&val_mutex);  /* release so other threads can proceed */
If the lock were not in place, two threads could attempt to modify val at the same time, each reading, updating, and writing the value in separate steps, and so producing an incorrect result. This is known as a race condition, and is discussed in more detail later.
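To make the mechanics concrete, the following is a minimal, self-contained sketch; the worker function name, thread count, and iteration count are illustrative choices rather than part of the original example. Compiled with -pthread, it always prints 200000 because every access to val goes through the mutex:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t val_mutex = PTHREAD_MUTEX_INITIALIZER;
static int val = 0;

/* Each worker increments the shared counter 100000 times. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&val_mutex);
        val++;
        pthread_mutex_unlock(&val_mutex);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("val = %d\n", val);  /* 200000 with the lock; unpredictable without it */
    return 0;
}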
Locks may be recursive, so that the same thread can acquire the same lock multiple times. However, this is generally considered a dangerous coding practice, because failing to unlock the lock an equal number of times means it is never released, which can lead to hangs.
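For reference, this is how a recursive mutex is typically created with pthreads; a minimal sketch, with the helper name init_recursive_mutex chosen purely for illustration:

#include <pthread.h>

static pthread_mutex_t rec_mutex;

/* Create a mutex that the owning thread may lock repeatedly;
   it must be unlocked the same number of times before another
   thread can acquire it. */
static void init_recursive_mutex(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&rec_mutex, &attr);
    pthread_mutexattr_destroy(&attr);
}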
Managing locks correctly while avoiding race conditions is one of the most difficult parts of threading.
Modern processors are able to perform certain operations on variables atomically. This means that the operation appears to complete in a single step when viewed from other threads of execution. As a result, such operations cannot cause race conditions and do not require locking, as there is no possibility that another thread will read the value partway through the modification. On some operating systems, low-level functions are provided that invoke these hardware atomic operations directly. These operations are significantly faster than acquiring and releasing a lock, and should be used whenever possible.
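As an illustration, C11 exposes these operations portably through <stdatomic.h> (platform-specific alternatives, such as GCC's __sync builtins or the OSAtomic functions on OS X, behave similarly); the increment function name below is purely illustrative:

#include <stdatomic.h>

static atomic_int val = 0;   /* C11 atomic integer */

/* The increment is performed as a single indivisible hardware
   operation, so no mutex is needed to keep it race-free. */
static void increment(void)
{
    atomic_fetch_add(&val, 1);
}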
A semaphore is a generalization of a mutex lock. Rather than the simple boolean state of a mutex lock, a semaphore contains an internal counter. This counter is decremented each time a thread asks the semaphore for access to the locked resource, and incremented each time a thread releases the resource. If the counter reaches zero, additional threads attempting to acquire the resource will block until one of the holding threads releases the resource. So a semaphore with an initial value of 5 will allow at most 5 threads to access the resource simultaneously.
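A minimal sketch using POSIX counting semaphores follows; the names pool_sem and worker are illustrative. Note that OS X does not implement unnamed semaphores via sem_init, so named semaphores created with sem_open must be used there instead.

#include <pthread.h>
#include <semaphore.h>

static sem_t pool_sem;   /* counting semaphore guarding the shared resource */

static void *worker(void *arg)
{
    (void)arg;
    sem_wait(&pool_sem);   /* decrement the count; blocks while it is zero */
    /* ... at most 5 threads are inside this region at any one time ... */
    sem_post(&pool_sem);   /* increment the count; wakes a blocked waiter, if any */
    return NULL;
}

int main(void)
{
    pthread_t threads[8];
    sem_init(&pool_sem, 0, 5);   /* initial count of 5 */
    for (int i = 0; i < 8; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        pthread_join(threads[i], NULL);
    sem_destroy(&pool_sem);
    return 0;
}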
OSX platform-specific pthreads issues
Although the OSX pthreads implementation complies with the POSIX standard, it does not include all of the pthreads and associated functionality available on Linux. Mac OS X platform specific pthreads issues provides more details, and is important reading for anyone working directly with pthreads on OSX.
These basic threading primitives are useful if you are working at a low level on a single platform. They are complicated to use in cross-platform applications because of the differences between implementations, and they require considerably more work than the higher-level threading implementations described later. However, for cases where ultimate control over thread behavior is required, including settings such as priority and affinity, native threads are the best option.
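As one example of that kind of control, a thread's scheduling policy and priority can be set directly through pthreads. The sketch below is illustrative only: the helper name make_realtime is hypothetical, and real-time policies typically require elevated privileges. Affinity has no portable pthreads interface; Linux provides pthread_setaffinity_np as a non-standard extension.

#include <pthread.h>
#include <sched.h>

/* Raise the calling thread to the maximum FIFO real-time priority.
   Returns 0 on success or an error number on failure. */
static int make_realtime(void)
{
    struct sched_param param;
    param.sched_priority = sched_get_priority_max(SCHED_FIFO);
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
}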