In a multithreaded application, everything looks cool and interesting until we get stuck with a deadlock. Assume there are two threads, namely READER and WRITER. A deadlock can occur when the READER thread waits for a lock that WRITER has already acquired, while the WRITER thread, in turn, waits for a lock that READER holds. In a typical deadlock scenario, both threads end up waiting for each other endlessly.
Generally, deadlocks are design issues. Sometimes a deadlock can be spotted quickly, but at other times finding the root cause gets very tricky. The bottom line is that synchronization mechanisms must be used thoughtfully and in the right places.
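Before we move on to our Thread class, here is a minimal, self-contained sketch of that classic situation (the mutexA/mutexB names, the taskOne()/taskTwo() functions, and the deliberate sleeps are purely illustrative); each thread acquires one lock and then waits forever for the other one:
#include <chrono>
#include <mutex>
#include <thread>
using namespace std;

mutex mutexA, mutexB;                        // two hypothetical locks

void taskOne( ) {
      lock_guard<mutex> lockA( mutexA );     // grabs mutexA first ...
      this_thread::sleep_for( 100ms );       // ... giving taskTwo time to grab mutexB
      lock_guard<mutex> lockB( mutexB );     // now waits forever for mutexB
}

void taskTwo( ) {
      lock_guard<mutex> lockB( mutexB );     // grabs mutexB first ...
      this_thread::sleep_for( 100ms );       // ... giving taskOne time to grab mutexA
      lock_guard<mutex> lockA( mutexA );     // now waits forever for mutexA
}

int main( ) {
      thread t1( taskOne ), t2( taskTwo );
      t1.join( );                            // neither join ever returns - the threads are deadlocked
      t2.join( );
      return 0;
}
Neither thread can make progress because each holds the lock the other one needs, which is exactly the situation described above.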
Let's understand the concept of a deadlock with a simple yet practical example. I'm going to reuse our Thread class with some slight modifications to create a deadlock scenario.
The modified Thread.h header looks as follows:
#ifndef __THREAD_H
#define __THREAD_H

#include <iostream>
#include <string>
#include <thread>
#include <mutex>

using namespace std;

enum ThreadType {
      READER,
      WRITER
};

class Thread {
private:
      string name;
      thread *pThread;
      ThreadType threadType;
      static mutex commonLock;
      static int count;
      bool stopped;
      void run( );

public:
      Thread ( ThreadType typeOfThread );
      ~Thread( );
      void start( );
      void stop( );
      void join( );
      void detach ( );
      int getCount( );
      int updateCount( );
};

#endif
The ThreadType enumeration helps assign a particular task to a thread. The Thread class has two new methods: Thread::getCount() and Thread::updateCount(). Both methods are synchronized with a common mutex lock, in such a way that a deadlock scenario is created.
Okay, let's move on and review the Thread.cpp source file:
#include "Thread.h"
mutex Thread::commonLock;
int Thread::count = 0;
Thread::Thread( ThreadType typeOfThread ) {
pThread = NULL;
stopped = false;
threadType = typeOfThread;
(threadType == READER) ? name = "READER" : name = "WRITER";
}
Thread::~Thread() {
delete pThread;
pThread = NULL;
}
int Thread::getCount( ) {
cout << name << " is waiting for lock in getCount() method ..." <<
endl;
lock_guard<mutex> locker(commonLock);
return count;
}
int Thread::updateCount( ) {
cout << name << " is waiting for lock in updateCount() method ..." << endl;
lock_guard<mutex> locker(commonLock);
int value = getCount();
count = ++value;
return count;
}
void Thread::run( ) {
while ( 1 ) {
switch ( threadType ) {
case READER:
cout << name<< " => value of count from getCount() method is " << getCount() << endl;
this_thread::sleep_for ( 500ms );
break;
case WRITER:
cout << name << " => value of count from updateCount() method is" << updateCount() << endl;
this_thread::sleep_for ( 500ms );
break;
}
}
}
void Thread::start( ) {
pThread = new thread ( &Thread::run, this );
}
void Thread::stop( ) {
stopped = true;
}
void Thread::join( ) {
pThread->join();
}
void Thread::detach( ) {
pThread->detach( );
}
By now, you will be quite familiar with the Thread class, so let's focus our discussion on the Thread::getCount() and Thread::updateCount() methods. The std::lock_guard<std::mutex> is a template class that frees us from calling mutex::unlock() ourselves. The moment the method returns (or an exception propagates out of it), the lock_guard instance goes out of scope, its destructor is invoked, and the destructor in turn calls mutex::unlock().
The bottom line is that, from the point at which the std::lock_guard<std::mutex> instance is created, all the statements that appear until the end of the method are protected by the mutex. Notice that Thread::updateCount() calls Thread::getCount() while it still holds commonLock; since std::mutex is not recursive, the second lock attempt in getCount() blocks forever, which is precisely the deadlock we set out to create.
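To make the scope-based locking behaviour concrete, here is a minimal sketch (the mutex m, the sharedValue variable, and the readValue() function are hypothetical and only for illustration):
#include <mutex>
using namespace std;

mutex m;                              // hypothetical mutex, for illustration only
int sharedValue = 0;                  // hypothetical shared data

int readValue( ) {
      lock_guard<mutex> locker( m );  // mutex::lock() is called here
      return sharedValue;             // when readValue() returns, locker goes out of
}                                     // scope and its destructor calls mutex::unlock()
There is no explicit unlock call anywhere; the RAII idiom guarantees the mutex is released on every exit path from the function.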
Okay, let's plunge into the main.cpp file:
#include <iostream>
using namespace std;

#include "Thread.h"

int main ( ) {
      Thread reader( READER );
      Thread writer( WRITER );

      reader.start( );
      writer.start( );

      reader.join( );
      writer.join( );

      return 0;
}
The main() function is pretty self-explanatory. We create two Thread objects, namely reader and writer, and start them right after they are constructed. The main thread is then forced to wait until the reader and writer threads exit.
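Assuming a GCC or Clang toolchain, the example can be built and run roughly as follows (-pthread links the threading library, and -std=c++14 is needed for the 500ms chrono literal; the output name deadlock is arbitrary):
g++ -std=c++14 -pthread main.cpp Thread.cpp -o deadlock
./deadlock
When run, the program prints a few "waiting for lock" messages and then hangs, which is the deadlock this example was designed to demonstrate.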