Setting up Ruby on Rails with Passenger + Nginx in a CentOS 7 VM running on Google Cloud Platform

Perspective

All commands are written (unless explicitly stated) from the perspective of a non-root user with sudo permissions. The intent is to create a user which will run the application we are creating, but that user will not have sudo permissions.

Machine Setup

I started here:

The only real difference was that I selected the CentOS 7 OS.

Tool Setup

From here:

sudo yum install -y curl gpg gcc gcc-c++ make

Install RVM:

sudo yum -y install tar which
sudo yum -y install patch libyaml-devel libffi-devel glibc-headers autoconf gcc-c++ glibc-devel readline-devel zlib-devel openssl-devel bzip2 automake libtool bison
sudo gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
curl -sSL https://get.rvm.io | sudo bash -s stable 
sudo usermod -a -G rvm `whoami`

Set up RVM secure_path handling:

if sudo grep -q secure_path /etc/sudoers; then sudo sh -c "echo export rvmsudo_secure_path=1 >> /etc/profile.d/rvm_secure_path.sh" && echo Environment variable installed; fi

Start a new shell and install the latest ruby and bundler:

bash
rvm install ruby
rvm --default use ruby
#Using /home/nik/.rvm/gems/ruby-2.6.3
gem install bundler
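
As a quick optional sanity check, confirm which versions ended up installed:

ruby -v
bundler -v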

Then install Node.js, which Rails needs as a JavaScript runtime for the asset pipeline:

sudo yum install -y epel-release
sudo yum install -y --enablerepo=epel nodejs npm

Nginx+Passenger+Ruby Setup

Then to get nginx + ruby running I followed directions here:

sudo yum install -y epel-release yum-utils
sudo yum-config-manager --enable epel
sudo yum clean all && sudo yum update -y

Date configuration:

date
# if the output of date is wrong, please follow these instructions to install ntp
sudo yum install -y ntp
sudo chkconfig ntpd on
sudo ntpdate pool.ntp.org
sudo service ntpd start

Install Passenger+Nginx

sudo yum install -y pygpgme curl
sudo curl --fail -sSLo /etc/yum.repos.d/passenger.repo https://oss-binaries.phusionpassenger.com/yum/definitions/el-passenger.repo
sudo yum install -y nginx passenger || sudo yum-config-manager --enable cr && sudo yum install -y nginx passenger

Uncomment the following settings in /etc/nginx/conf.d/passenger.conf:

passenger_root /some-filename/locations.ini;
passenger_ruby /usr/bin/ruby;
passenger_instance_registry_dir /var/run/passenger-instreg;

Edit the file and then restart the service:

sudo vi /etc/nginx/conf.d/passenger.conf
sudo service nginx restart
sudo /usr/bin/passenger-config validate-install
sudo /usr/sbin/passenger-memory-stats

User Setup

We should not run our app as root, because that would give the app far too much power if it ever gets hacked. We also should not run it as our own user, because we may want different setups for different apps, all of which may be running on the same instance.

NEWUSER=myappuser
sudo adduser $NEWUSER
# note: ~$NEWUSER is not tilde-expanded by bash, so spell out /home/$NEWUSER
sudo mkdir -p /home/$NEWUSER/.ssh
touch $HOME/.ssh/authorized_keys
sudo sh -c "cat $HOME/.ssh/authorized_keys >> /home/$NEWUSER/.ssh/authorized_keys"
sudo chown -R $NEWUSER: /home/$NEWUSER/.ssh
sudo chmod 700 /home/$NEWUSER/.ssh
sudo sh -c "chmod 600 /home/$NEWUSER/.ssh/*"
sudo usermod -a -G rvm $NEWUSER
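
A quick optional sanity check: confirm that the new user can see RVM’s Ruby (the login shell, -l, is what pulls in the RVM environment):

sudo -u $NEWUSER -H bash -lc 'ruby -v; which ruby'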

Git Project Setup

Set up the project “deploy to” directory:

sudo yum install -y git
sudo mkdir -p /var/www/appname
sudo chown $NEWUSER: /var/www/appname

Create a project on your SCM service site (GitHub, BitBucket, etc.).

Then set up SSH so you can clone.

sudo su $NEWUSER
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub

Then add the ssh key to the service.

cd /var/www/appname
git clone git@<service>.com:<user>/myproject.git ./

I just copied the example, then moved all of that into my repo:

git clone --bare https://github.com/phusion/passenger-ruby-rails-demo.git ./

Run the Project

From here:

rvm use ruby-2.6.3
#Using /usr/local/rvm/gems/ruby-2.6.3

Now install all of the gems:

bundle install --deployment --without development test

Generate the secret for Rails

bundle exec rake secret

Put that output into: config/secrets.yml
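
At its simplest (a minimal sketch; adjust to however your app reads its secrets), config/secrets.yml looks like:

production:
  secret_key_base: <paste the output of rake secret here>

Then lock down permissions on the sensitive files: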

chmod 700 config db 
chmod 600 config/database.yml config/secrets.yml

Now create the DB and tables:

bundle exec rake assets:precompile db:migrate RAILS_ENV=production

This produced:

rails aborted!
LoadError: Error loading the 'sqlite3' Active Record adapter. Missing a gem it depends on? can't activate sqlite3 (~> 1.3.6), already activated sqlite3-1.4.0. Make sure all dependencies are added to Gemfile.
...

Caused by:
Gem::LoadError: can't activate sqlite3 (~> 1.3.6), already activated sqlite3-1.4.0. Make sure all dependencies are added to Gemfile.
...

It looks like the Gemfile didn’t pin sqlite3 to a version below 1.4 (which Active Record requires via its ~> 1.3.6 dependency), so bundler installed sqlite3 1.4.0, which is not compatible.

An attempt to add the following to the Gemfile:

gem 'sqlite3', '~> 1.3', '< 1.4'

and then re-run:

bundle exec rake assets:precompile db:migrate RAILS_ENV=production

will fail, because bundler is in deployment mode: the bundle is frozen, so it refuses to proceed once the edited Gemfile no longer matches the checked-in Gemfile.lock.

We are testing directly on a production server. The normal process is to make changes like this on a development machine and commit both the Gemfile and the Gemfile.lock to the SCM repo, so that when the production server pulls the changes, the two files arrive already in sync.

Because we are testing our setup on a production server for now, we will disable this feature, but we must re-enable it before actually going to production!

To do this we will unset the frozen configuration, but first let’s see how our machine is currently set up so we can restore it later:

$ bundle config frozen
Settings for `frozen` in order of priority. The top value will be used
Set for your local app (/var/www/nitrogen/.bundle/config): true
Set for the current user (/home/nitrogen/.bundle/config): false

So for now:

bundle config --local frozen false
bundle install --deployment --without development test
#Installing sqlite3 1.3.13 (was 1.4.0) with native extensions
passenger-config about ruby-command
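
When you are done testing and ready for a real deployment, remember to restore the frozen setting we recorded above:

bundle config --local frozen true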

Now create the app’s config file, /etc/nginx/conf.d/myapp.conf:
server {
    listen 80;
    server_name yourserver.com;

    # Tell Nginx and Passenger where your app's 'public' directory is
    root /var/www/myapp/code/public;

    # Turn on Passenger
    passenger_enabled on;
    passenger_ruby /path-to-ruby;
}
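
After saving the file, restart Nginx so it picks up the new server block:

sudo service nginx restart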

Domain Setup

In the GCP console, under Networking > VPC Network > External IP Addresses, change the VM’s address type to Static.

This IP was automatically assigned to my VM. I was able to go to that IP address and see the default Nginx test page.

I then went to my domain host and created an A record to this static IP.

My domain then resolved to the correct IP address.

Now when I visit the domain, instead of the default Nginx test page I see:

500 Internal Server Error

This is because when you connect via the domain, Nginx matches the server_name and tries to load the site from the configuration file we just updated. While loading the app we hit an error, so let’s see what it was:

sudo tail /var/log/nginx/error.log

Hmmm looks like a permissions issue:

2019/08/07 12:34:00 [alert] 25410#0: *34 Cannot stat '/var/www/myapp/code/config.ru': Permission denied (errno=13); This error means that the Nginx worker process (PID 54321, running as UID 987) does not have permission to access this file. Please read this page to learn how to fix this problem: https://www.phusionpassenger.com/library/admin/nginx/troubleshooting/?a=upon-accessing-the-web-app-nginx-reports-a-permission-denied-error

To verify which user this service is running as (987):

$ grep 987 /etc/passwd
nginx:x:987:876:Nginx web server:/var/lib/nginx:/sbin/nologin

Looking at permissions, they look fine:

$ namei -l /var/www/myapp/code/config.ru
f: /var/www/myapp/code/config.ru
dr-xr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root root www
drwxr-xr-x myappuser myappuser myapp
drwxr-xr-x myappuser myappuser code
-rw-rw-r-- myappuser myappuser config.ru

It looks like nginx should be able to read the file, but just to make sure:

sudo -u nginx tail /var/www/myapp/code/config.ru

This works fine so wtf…

Well, it looks like Security-Enhanced Linux (SELinux) strikes again. To fix this:

chcon -Rt httpd_sys_content_t /var/www/myapp/code
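
You can confirm the context change with ls -Z /var/www/myapp/code. Note that chcon is not persistent across a filesystem relabel; for a permanent rule (optional, semanage comes from the policycoreutils-python package on CentOS 7), something along these lines should work:

sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/myapp/code(/.*)?"
sudo restorecon -Rv /var/www/myapp/code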

Bingo:

Hello world!

Congratulations, you are running this app in Passenger!


Internal Bash Variable – PIPESTATUS

If you aren’t aware, there are a lot of useful internal BASH variables, some of which can be found here:

https://www.tldp.org/LDP/abs/html/internalvariables.html

One of particular use is PIPESTATUS, an array that holds the exit status of each command in the most recently executed pipeline. It is very handy when you pipe one command’s output into another command but still need to check the first command’s return code.

For instance:

my_super_command | tee my.log

If my_super_command fails and you print the last return code, everything will appear to have worked just fine, because $? reflects tee, the last command in the pipeline:

echo $?

However, if you print ${PIPESTATUS[0]}, then you can see the error:

echo ${PIPESTATUS[0]}
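
As a quick illustration (using false as a stand-in for a failing command; note that PIPESTATUS is overwritten by the very next command, so read it immediately):

false | tee my.log
echo "last=$? first=${PIPESTATUS[0]}"
# prints: last=0 first=1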

How to auto-run a script on boot with Linux / Raspberry PI

If you want something to happen when you login, then there are several ways to do that, but that is not what this post is about.

With a Raspberry Pi it can be very nice to configure the device to run a service or perform some operation at boot, without any user interaction. To do this, edit:

sudo vi /etc/rc.local
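
A minimal /etc/rc.local might end up looking like this (just a sketch; /home/pi/startup.sh is a hypothetical script of your own, and the trailing & keeps a long-running job from blocking the boot process):

#!/bin/sh -e
#
# rc.local - executed at the end of boot, before the login prompt
#
# Launch our hypothetical startup script in the background so boot is not blocked
/home/pi/startup.sh &

exit 0

Make sure the file is executable (chmod +x /etc/rc.local) and that exit 0 stays as the last line.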

If this script hangs or you need to break out of it while it is running (similar to CTRL+C in a shell), then use:

alt + print screen + K

Have a nice day!

How can I see where libraries are being loaded from on Linux?

To figure out where libraries are being loaded from, if your environment is already set up the same way you want to test, you can run:

/sbin/ldconfig -N -v

However, this will not search LD_LIBRARY_PATH so you must also include that manually:

/sbin/ldconfig -N -v $(sed 's/:/ /g' <<< $LD_LIBRARY_PATH)

If you would like to see which libraries are actually being loaded when running an executable, then use:

strace myprog

This will show you a lot more than you care to see (all system calls), but if you grep the results for “^open.*\.so”, then you will see all of the *.so files which are being opened by that process.
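
For example (strace writes its trace to stderr, so redirect that into the pipe before filtering):

strace myprog 2>&1 | grep "^open.*\.so"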

Many programs also fork child processes, and strace will not report system calls for those children by default. However, you can add the ‘-f’ switch to strace and then all child processes will be reported:

strace -f myprog

This will produce a fair amount of noise, but you can filter that with:

strace -e trace=open -f myprog

Using rpm to find an installed program (RedHat, CentOS, Fedora)

To see what programs are installed use:

rpm -qa

If you just installed a program and want to know where it went then grep for it:

$ rpm -qa | grep vim-X11
vim-X11-7.4.160-2.el7.x86_64

Once you have the name of the package, then you can list the files it installed:

$ rpm -ql vim-X11-7.4.160-2.el7.x86_64
/usr/bin/evim
/usr/bin/gex
/usr/bin/gview
/usr/bin/gvim
/usr/bin/gvimdiff
/usr/bin/gvimtutor
/usr/bin/vimx
/usr/share/applications/gvim.desktop
/usr/share/icons/hicolor/16x16/apps/gvim.png
/usr/share/icons/hicolor/32x32/apps/gvim.png
/usr/share/icons/hicolor/48x48/apps/gvim.png
/usr/share/icons/hicolor/64x64/apps/gvim.png
/usr/share/man/man1/evim.1.gz

There is the gvim I was looking for.
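
Going the other direction works too: if you know a file and want the package that owns it, use rpm -qf:

$ rpm -qf /usr/bin/gvim
vim-X11-7.4.160-2.el7.x86_64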

P.S. If these newly installed executables aren’t found on your PATH, then try opening a fresh shell.

Searching large source trees in an efficient way on Linux

TL;DR

Here is the alias:

alias search 'find \!:1 -noleaf -type f -not -path "*/boost/*" -not -path "*/extensions/*" -print0 | xargs -0 -n 100 -P 8 grep -I --color -H -n \!:2*'

 

How do I use it?

Here is how I use it:

search [dir] [term] [grep_options]
e.g.
search ./src/ the\ search\ term
search ./src/ keyTerm -A5 -B5
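
The alias above uses csh/tcsh syntax (bash aliases cannot take positional arguments). If you live in bash, a rough equivalent is a small shell function, sketched here with the same example exclusions:

search() {
    # $1 is the directory to search; everything else is handed straight to grep
    local dir="$1"; shift
    find "$dir" -noleaf -type f -not -path "*/boost/*" -not -path "*/extensions/*" -print0 \
        | xargs -0 -n 100 -P 8 grep -I --color -H -n "$@"
}

Usage is the same: search ./src/ keyTerm -A5 -B5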

How does it work?

find

This search alias uses find as follows to locate all files under the provided directory (i.e. first argument) while excluding directories that we don’t care about:

find \!:1 -noleaf -type f -not -path "*/boost/*" -not -path "*/extensions/*" -print0

For aliases remember this:

!* is all but the first
!:0 is only the first, the command itself
!:1 is only the first argument
!:2* is all but the first argument
!$ is only the last argument
!:1- is all but the last argument
!! is all
$0 is the shell
$# is the number of args
$$ is the process id (PID)
$! is the PID of the last background command
$? is the return code from the previous command

Thus, the “\!:1” means only the first argument, and the bang (!) has to be escaped.

\!:1

The “-noleaf” is used because I am normally working on Windows/NTFS mounts and it is not safe to assume that directories containing 2 fewer subdirectories than their hard link count only contain files.

-noleaf

We only want to gather files for searching so I use the “-type f”.

-type f

I normally have very large directories which I do not care to search in, so I specify:

-not -path "*/boost/*" -not -path "*/extensions/*"

Finally for the find command I pass “-print0” which returns null (instead of new line) terminated strings. This adds support for paths with spaces in them:

-print0

xargs

The xargs command controls how many files are passed into grep and handles running the grep invocations in parallel.

xargs -0 -n 100 -P 8 grep -I --color -H -n \!:2*

The “-0” option is used here to tell xargs that the strings coming in are null terminated (this adds support for files with spaces):

-0

The “-n 100” and the “-P 8” options are where the speed and power of this alias come from. The “-n 100” is telling xargs to pass 100 files from find into grep at a time. The “-P 8” is telling xargs to run 8 grep commands in parallel.

This means that if we have a source tree of 1600 files, then grep will be called 16 times and each will be passed 100 files. The best part is that 8 of those grep commands will be running in parallel each on 100 files, so the command finishes as if there were only two (2) grep invocations – very fast even on large source trees:

-n 100 -P 8

grep

The grep command is used to do the actual searching in files.

grep -I --color -H -n \!:2*

The “-I” option ignores binary files:

-I

Colored results make it much easier to see hits:

--color

Because we are passing in the files to grep it may not show the file name where the hit occurred so we add “-H” to print the file name:

-H

The line number is also important, so we add “-n”:

-n

The ability to control grep is handled with an arguments wildcard. Here the “\!:2*” means the second and all subsequent arguments passed into the search alias. Thus the grep search term and all other grep options can be specified after the directory to search:

\!:2*

The final piece is that the xargs command adds the files from the find command to the grep command. It adds 100 files (or fewer, if fewer than 100 remain) to every grep command, and each of those runs in parallel, with up to 8 running at any given time.

Enjoy your searching.

How can I easily access my Linux command history? Is there a hot key?

One of the fastest ways to search your previous commands is to press CTRL+R and start typing; once you’ve entered enough text, you can press CTRL+R again and again to cycle through the matches in your history.

Let’s assume we execute the following commands:

$ echo dog
dog
$ echo cat
cat
$ echo hotdog
hotdog

Now press CTRL+R and you will see a new “bck:” prompt at the bottom:

$
bck:

Now if you type “dog” you will see the last command that had that string anywhere in it, populated on the previous command prompt:

$ echo hotdog
bck:dog

Pressing CTRL+R again will cycle through the history:

$ echo dog
bck:dog

You can press ENTER to execute the command, or CTRL+E to go to the end of the command without executing it.

I also hear you can use CTRL+S to search in the other direction through the results, but that never works for me – most likely because the terminal treats CTRL+S as XON/XOFF flow control and swallows it (disabling flow control with stty -ixon is the usual fix).