How can I easily create several threads in C++?

For testing, or for applications that need to run a bunch of things at the same time, it is really nice to be able to kick off multiple threads at once.

For instance, if you are building a logging mechanism, you may want to make sure it is thread-safe so that messages don’t end up garbled. We can test that our logger works by sending it messages from 10 threads at the same time.

//*.cxx
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

namespace {
    std::mutex g_msgLock;
}

void info(const char * msg) {
    std::unique_lock<std::mutex> lock(g_msgLock);
    std::cout << msg << '\n'; // don't flush
}

int main(int argc, char** argv) {
    info("Start message..");

    std::vector<std::thread> threads;
    unsigned int threadCount = 10;
    threads.reserve(threadCount);

    for (unsigned int i = 0; i < threadCount; i++) {
        // Here we start the threads using lambdas
        threads.push_back(std::thread([&, i](){
            std::string msg = std::string("THREADED_TEST_INFO_MESSAGE: ") + std::to_string(i);
            info(msg.c_str());
            }));
    }

    for(auto& thread : threads){
        thread.join();
    }
    info("End message..");
}

Now a couple of things to note.

Note the capture list [&, i]. If I had captured everything by reference with [&], the lambda could read the i variable at a time when it was still being updated by the loop, so i is captured by value here: each thread gets the value i held at the moment the thread was started. With a plain reference capture we could see output like:

THREADED_TEST_INFO_MESSAGE: 5
THREADED_TEST_INFO_MESSAGE: 4
THREADED_TEST_INFO_MESSAGE: 4
THREADED_TEST_INFO_MESSAGE: 3
THREADED_TEST_INFO_MESSAGE: 8
THREADED_TEST_INFO_MESSAGE: 6
THREADED_TEST_INFO_MESSAGE: 5
THREADED_TEST_INFO_MESSAGE: 9
THREADED_TEST_INFO_MESSAGE: 10
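
For contrast, here is the racy variant; only the capture list in the loop above changes (a broken sketch, shown to explain that output, not something to copy):

for (unsigned int i = 0; i < threadCount; i++) {
    // Broken: every lambda reads the same i by reference, racing with the loop.
    threads.push_back(std::thread([&](){
        // By the time this runs, i may already have been incremented
        // (even to threadCount itself, hence the "10" above).
        std::string msg = std::string("THREADED_TEST_INFO_MESSAGE: ") + std::to_string(i);
        info(msg.c_str());
        }));
}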

Also, each thread starts running as soon as its std::thread object is constructed, i.e. as we push it into the vector. This means a thread could actually finish its work before the next loop iteration even starts, in which case we would never really stress the logger with a bunch of simultaneous writes. However, output like the following shows that we are stressing it:

THREADED_TEST_INFO_MESSAGE: 3
THREADED_TEST_INFO_MESSAGE: 2
THREADED_TEST_INFO_MESSAGE: 1
THREADED_TEST_INFO_MESSAGE: 0
THREADED_TEST_INFO_MESSAGE: 4
THREADED_TEST_INFO_MESSAGE: 6
THREADED_TEST_INFO_MESSAGE: 5
THREADED_TEST_INFO_MESSAGE: 7
THREADED_TEST_INFO_MESSAGE: 8
THREADED_TEST_INFO_MESSAGE: 9

Let’s say that we weren’t sure we were hitting the logger at the same time and wanted to ensure it. We can use std::shared_timed_mutex (introduced in C++14; the plain std::shared_mutex only arrived in C++17) to control this nicely:

//*.cxx
#include <iostream>
#include <mutex>
#include <shared_mutex> // requires c++14
#include <string>
#include <thread>
#include <vector>

namespace {
    std::mutex g_msgLock;
    std::shared_timed_mutex g_testingLock;
}

void info(const char * msg) {
    std::unique_lock<std::mutex> lock(g_msgLock);
    std::cout << msg << '\n'; // don't flush
}

int main(int argc, char** argv) {
    info("Start message..");

    std::vector<std::thread> threads;
    unsigned int threadCount = 10;
    threads.reserve(threadCount);

    { // Scope for locking all threads
        std::lock_guard<std::shared_timed_mutex> lockAllThreads(g_testingLock); // RAII (scoped) lock

        for (unsigned int i = 0; i < threadCount; i++) {
            // Here we start the threads using lambdas
            threads.push_back(std::thread([&, i](){
                // Here we block and wait on lockAllThreads
                std::shared_lock<std::shared_timed_mutex> threadLock(g_testingLock);
                std::string msg = std::string("THREADED_TEST_INFO_MESSAGE: ") + std::to_string(i);
                info(msg.c_str());
                }));
        }

    } // End of scope, lock is released, all threads continue now

    for(auto& thread : threads){
        thread.join();
    }
    info("End message..");
}

How to auto-run a script on boot with Linux / Raspberry Pi

If you want something to happen when you log in, there are several ways to do that, but that is not what this post is about.

With a Raspberry Pi it can be very nice to configure the device so that a service runs, or some other operation happens, at boot and without user interaction. To do this, edit:

sudo vi /etc/rc.local
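
Then add your command above the final exit 0 line. For example (the script path here is hypothetical; rc.local runs as root during boot, so background anything long-running):

# /etc/rc.local excerpt
/home/pi/startup.sh &   # hypothetical script; '&' keeps boot from blocking on it
exit 0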

If you have a problem with this script, or you need to break out of it while it is running (similar to CTRL+C in a shell), then use the magic SysRq combination:

alt + print screen + K

Have a nice day!

Creating a Vector with Class Storage using C++11 smart pointers

/* Sometimes we need to construct several classes and then pass 
them somewhere to do something, but we don't have an abstract 
storage method. The unique_ptr can help */

// Example program
#include <iostream>
#include <memory>
#include <string>
#include <vector>

class Test {
 public:
  std::string m_val; 
  Test(const std::string & val) : m_val(val) {}
  ~Test() {}
  std::string getVal() const {return m_val;}
};

void printValues(const std::vector<const Test*> & tests)
{
  for (auto & test : tests)
    std::cout << "Test.m_val: " << test->getVal() << std::endl;
}

int main()
{
  std::vector< std::unique_ptr<const Test> > tests;
  tests.emplace_back(new Test("test 1"));
  tests.emplace_back(new Test("test 2"));

  std::vector<const Test*> refs;
  for (auto & test : tests)
    refs.push_back(test.get());

  printValues(refs); 
} 
/* I don't know about you, but I think this is pretty elegant. We have 
clear communication of where storage is kept: the unique_ptrs will be 
cleaned up when 'tests' goes out of scope (in keeping with RAII), and 
the functions are user friendly in that they take raw pointers, so they 
work whether the caller keeps its objects in a container or on the stack. */
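
As a side note, if C++14 is available to you, std::make_unique avoids the naked new above (a minor variation, not required for the pattern):

tests.push_back(std::make_unique<const Test>("test 1"));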

How to compare vim’s in-memory buffers

I often find myself just wanting to compare the contents of buffers in memory: I may use something like “:sort” or a find-and-replace and not necessarily want to save the modified buffer to disk. Well, vim provides a very handy way to do this…

First, if you opened a comparison using gvimdiff or you already ran a diff, then you will want to reset your diff using:

:windo diffoff

From here you can perform the diff using the new contents of the buffers with:

:windo diffthis

That is that!

How can I see where libraries are being loaded from on Linux?

To figure out where libraries are being loaded from (assuming your environment is already set up the way you want to test it), you can run:

/sbin/ldconfig -N -v

However, this will not search LD_LIBRARY_PATH so you must also include that manually:

/sbin/ldconfig -N -v $(sed 's/:/ /g' <<< $LD_LIBRARY_PATH)

If you would like to see which libraries are actually being loaded when running an executable, then use:

strace myprog

This will show you a lot more than you care to see (all system calls), but if you grep the results for "^open.*\.so", then you will see all of the *.so files being opened by that process.
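
For example (strace writes its trace to stderr, hence the redirection):

strace myprog 2>&1 | grep '^open.*\.so'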

Some programs fork child processes, and strace will not report system calls for those by default. However, you can add the ‘-f’ switch and then all child processes will be reported as well:

strace -f myprog

This will produce a fair amount of noise, but you can filter that down to just the open calls (on newer systems libraries are opened via openat, so trace both):

strace -e trace=open,openat -f myprog

In C++ when should I use std::endl versus “\n”?

I see so many people confused about the use of “std::endl” versus “\n”, so here I explain when each should be used…

std::endl

The use of std::endl will insert an end of line character.

It will also flush the stream it is used on! This is important to know, and I will explain why below.

“\n”

The use of “\n” will insert an end of line character, and nothing more: no flush occurs.

When should I use each?

It depends on your use case, but generally you should use “\n”, unless you need to flush the buffer for some reason.

When should I flush the buffer?

When you write to a buffered stream, the output accumulates in a buffer; after some number of characters, or on certain special events (e.g. std::endl, std::flush), the buffer is flushed and all of its data is written to the stream.

This built-in buffering speeds up your program: writing to these streams can be very slow, especially if you are going to disk, so writing in large bursts less often makes the overall program faster (which is the purpose of buffering streams).

However, if the program terminates abnormally (a crash, or a call to std::abort) while unflushed buffers exist, that data is not guaranteed to be written out before termination.

You have probably seen this behavior when some messages weren’t appearing even though you knew you had hit a line of code that wrote to that stream. This is also why some streams (e.g. std::cerr) are completely unbuffered.

If you write an error message to a buffered stream and then throw an exception that takes down the program, the message may never be printed. This is why error streams are configured (by default) to be unbuffered.
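
A minimal sketch of the difference (illustrative only):

//*.cxx
#include <iostream>

int main() {
    std::cout << "buffered\n";                   // newline only, no flush
    std::cout << "flushed" << std::endl;         // newline plus a flush
    std::cout << "also flushed\n" << std::flush; // same effect, with the intent spelled out
    std::cerr << "error\n";                      // std::cerr is unbuffered; appears immediately
}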

For streaming to disk

If you are calling std::endl for every line you stream to disk, then you are making your program unnecessarily slow and hammering the disk I/O (maybe even network I/O), which is very inefficient.

While you certainly shouldn’t flush every line you write to disk, it is important to flush the buffer to disk once you are done writing to it.
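
For example, a sketch that buffers many lines and flushes once at the end ("log.txt" is just an illustrative file name):

//*.cxx
#include <fstream>

int main() {
    std::ofstream out("log.txt"); // illustrative file name
    for (int i = 0; i < 100000; ++i)
        out << "line " << i << '\n'; // buffered writes, no per-line flush
    out.flush(); // one flush at the end; the destructor also flushes on close
}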

For writing to a display

As I mentioned earlier, std::cerr (STDERR) is unbuffered, but std::cout (STDOUT) is buffered.

If you want to call std::endl when writing to std::cout to guarantee that those lines are printed even if the program dies, you can do that, but it is inefficient; make sure you have good justification for writing to std::cout at all. The std::cerr (STDERR) stream is unbuffered and was designed for exactly this purpose, so ask yourself whether it is a better fit for your use case.