How can I monitor what is going on with my Linux system?

You may need to install a tool called dstat first; once it is available, run:

dstat -tcndylp --top-cpu

This will list a snapshot of system behavior every second which can tell you if your CPUs are pegged, memory usage is maxed, or if IO is getting hammered. All very important for understanding why a system might be behaving strangely.


How can I easily create several threads in C++?

For testing, or for applications which need to run a bunch of things concurrently, it is really handy to be able to kick off multiple threads at once.

For instance, if you are building a logging mechanism, you may want to make sure it is thread safe so that messages don’t end up garbled. We could test that our logger is working by sending it messages from 10 threads at the same time.

//*.cxx
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

namespace {
    std::mutex g_msgLock;
}

void info(const char * msg) {
    std::unique_lock<std::mutex> lock(g_msgLock);
    std::cout << msg << '\n'; // don't flush
}

int main(int argc, char** argv) {
    info("Start message..");

    std::vector<std::thread> threads;
    unsigned int threadCount = 10;
    threads.reserve(threadCount);

    for (unsigned int i = 0; i < threadCount; i++) {
        // Here we start the threads using lambdas
        threads.push_back(std::thread([&, i](){
            std::string msg = std::string("THREADED_TEST_INFO_MESSAGE: ") + std::to_string(i);
            info(msg.c_str());
            }));
    }

    for(auto& thread : threads){
        thread.join();
    }
    info("End message..");
}

Now a couple of things to note.

If I had captured everything by reference with [&], then the lambda could be reading the i variable at a time when the loop was still updating it, so we add i to the capture list by value ([&, i]) so that each thread gets a copy of i as it was when the thread was started. If we captured i by reference, we could see duplicated or out-of-range output like:

THREADED_TEST_INFO_MESSAGE: 5
THREADED_TEST_INFO_MESSAGE: 4
THREADED_TEST_INFO_MESSAGE: 4
THREADED_TEST_INFO_MESSAGE: 3
THREADED_TEST_INFO_MESSAGE: 8
THREADED_TEST_INFO_MESSAGE: 6
THREADED_TEST_INFO_MESSAGE: 5
THREADED_TEST_INFO_MESSAGE: 9
THREADED_TEST_INFO_MESSAGE: 10

Also, every thread is started when we push the thread back into the vector. This means that the threads could actually finish work before we get to the next loop, which would mean that we never really stressed the logger with a bunch of writes at the same time. However, you can see from output like this that we are stressing it:

THREADED_TEST_INFO_MESSAGE: 3
THREADED_TEST_INFO_MESSAGE: 2
THREADED_TEST_INFO_MESSAGE: 1
THREADED_TEST_INFO_MESSAGE: 0
THREADED_TEST_INFO_MESSAGE: 4
THREADED_TEST_INFO_MESSAGE: 6
THREADED_TEST_INFO_MESSAGE: 5
THREADED_TEST_INFO_MESSAGE: 7
THREADED_TEST_INFO_MESSAGE: 8
THREADED_TEST_INFO_MESSAGE: 9

Let’s say that we weren’t sure that we were hitting the logger from all the threads at the same time and wanted to ensure that. We can use C++14’s std::shared_timed_mutex (plain std::shared_mutex did not arrive until C++17) to control this nicely:

//*.cxx
#include <iostream>
#include <mutex>
#include <shared_mutex> // requires c++14
#include <string>
#include <thread>
#include <vector>

namespace {
    std::mutex g_msgLock;
    std::shared_timed_mutex g_testingLock;
}

void info(const char * msg) {
    std::unique_lock<std::mutex> lock(g_msgLock);
    std::cout << msg << '\n'; // don't flush
}

int main(int argc, char** argv) {
    info("Start message..");

    std::vector<std::thread> threads;
    unsigned int threadCount = 10;
    threads.reserve(threadCount);

    { // Scope for locking all threads
        std::lock_guard<std::shared_timed_mutex> lockAllThreads(g_testingLock); // RAII (scoped) lock

        for (unsigned int i = 0; i < threadCount; i++) {
            // Here we start the threads using lambdas
            threads.push_back(std::thread([&, i](){
                // Here we block and wait on lockAllThreads
                std::shared_lock<std::shared_timed_mutex> threadLock(g_testingLock);
                std::string msg = std::string("THREADED_TEST_INFO_MESSAGE: ") + std::to_string(i);
                info(msg.c_str());
                }));
        }

    } // End of scope, lock is released, all threads continue now

    for(auto& thread : threads){
        thread.join();
    }
    info("End message..");
}

How to auto-run a script on boot with Linux / Raspberry Pi

If you want something to happen when you login, then there are several ways to do that, but that is not what this post is about.

With a Raspberry Pi it can be very nice to configure the device to run a service or perform some operation at boot, without user interaction. To do this, edit:

sudo vi /etc/rc.local
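For reference, a minimal sketch of what an /etc/rc.local might look like; the startup.sh path here is hypothetical:

```shell
#!/bin/sh -e
# /etc/rc.local -- executed at the end of each multi-user runlevel.
# Launch a hypothetical script in the background so boot is not blocked.
/home/pi/startup.sh &

# rc.local must exit 0 or the boot sequence reports an error.
exit 0
```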

If you have a problem with this script, or you need to break out of it while it is running (similar to CTRL+C in a shell), then use the magic SysRq combination:

Alt + PrintScreen (SysRq) + K

Have a nice day!

Creating a Vector with Class Storage using C++11 Smart Pointers

/* Sometimes we need to construct several classes and then pass 
them somewhere to do something, but we don't have an abstract 
storage method. The unique_ptr can help */

// Example program
#include <iostream>
#include <memory>
#include <string>
#include <vector>

class Test {
 public:
  std::string m_val; 
  Test(const std::string & val) : m_val(val) {}
  ~Test() {}
  std::string getVal() const {return m_val;}
};

void printValues(std::vector<const Test*> & tests)
{
  for (auto & test : tests)
    std::cout << "Test.m_val: " << test->getVal() << std::endl;
}

int main()
{
  std::vector< std::unique_ptr<const Test> > tests;
  tests.emplace_back(new Test("test 1"));
  tests.emplace_back(new Test("test 2"));

  std::vector<const Test*> refs;
  for (auto & test : tests)
    refs.push_back(test.get());

  printValues(refs); 
} 
/* I don't know about you, but I think this is pretty elegant. We have 
clear communication of where storage is being kept. The unique_ptr will 
be cleaned up at the end of 'tests' scope (keeping in alignment with RAII), 
and the functions are user friendly in that they expect raw pointers and as 
such they work for a container class and for our immediate storage needs. */

How to compare vim’s in-memory buffers

I often find myself just wanting to compare the contents of buffers in memory, since I may run something like ":sort" or a find-and-replace and not necessarily want to save the modified buffer to disk. Well, vim provides a very handy way to do this…

First, if you opened a comparison using gvimdiff or you already ran a diff, then you will want to reset your diff using:

:windo diffoff

From here you can perform the diff using the new contents of the buffers with:

:windo diffthis

That is that!

How can I see where libraries are being loaded from on Linux?

To figure out where libraries are being loaded from, if your environment is already set up in the same way as the case you want to test, then you can run:

/sbin/ldconfig -N -v

However, this will not search LD_LIBRARY_PATH so you must also include that manually:

/sbin/ldconfig -N -v $(sed 's/:/ /g' <<< $LD_LIBRARY_PATH)

If you would like to see which libraries are actually being loaded when running an executable, then use:

strace myprog

This will show you a lot more than you care to see (all system calls), but if you grep the results for "^open.*\.so", then you will see all of the *.so files being opened by that process (on newer systems the syscall shows up as openat, which the same pattern still matches).

Some programs also fork child processes, and strace will not report system calls for those by default. However, you can add the '-f' switch to strace and then all child processes will be reported as well:

strace -f myprog

This will produce a fair amount of noise, but you can filter it down to just the open calls with:

strace -e trace=open -f myprog

Note that on newer systems the loader uses openat, so you may need -e trace=openat (or -e trace=open,openat) instead.