Category Archives: Sys admin

Deploying a Rails app on Nginx/Puma with Capistrano

Puma is a fast, multi-threaded Ruby app server designed to host Rack-based Ruby web apps, including Sinatra and Ruby on Rails. Like Unicorn, it supports rolling restarts, but because it is multi-threaded rather than multi-process like Unicorn, it uses far less memory while delivering comparable performance. Puma runs on Ruby 1.9.x, but its multi-threaded nature is better suited to a runtime with real parallel threads, such as Rubinius or JRuby.

This article will guide you through setting up a hello world Rails app with Puma/Nginx and deploying it with Capistrano onto a Linux system. This guide was tested on Puma 1.6.3 and Puma 2.0.1.

Create a base Rails app

rails new appname

Adding Puma to Rails app

We’ll start by adding Puma to your Rails app.

In your Gemfile, add:

gem "puma"

then run bundle install

Now we need a puma config file: config/puma.rb

# Pull the environment from RAILS_ENV, defaulting to development
rails_env = ENV['RAILS_ENV'] || 'development'
environment rails_env

# Min and max threads per worker
threads 4,4

# Bind to a unix socket that nginx will proxy to, and keep the
# pid/state files where the init script expects them
bind  "unix:///data/apps/appname/shared/tmp/puma/appname-puma.sock"
pidfile "/data/apps/appname/current/tmp/puma/pid"
state_path "/data/apps/appname/current/tmp/puma/state"

# Enable the control app so pumactl can query and control the server
activate_control_app
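
To sanity check the config before wiring up Nginx and the init scripts, you can boot Puma directly against it (the socket and pid directories above must already exist):

bundle exec puma -C config/puma.rb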

Setup Nginx with Puma

Follow the instructions to install Nginx from source. By default it installs Nginx to /usr/local/nginx.

Edit your /usr/local/nginx/conf/nginx.conf file to match the following:

user deploy;
worker_processes  1;

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /usr/local/nginx/conf/mime.types;
    default_type  application/octet-stream;

    access_log  /var/log/nginx/access.log;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay        on;

    gzip  on;

    server_names_hash_bucket_size 128;
    
    client_max_body_size 4M; 
    client_body_buffer_size 128k;
    
    include /usr/local/nginx/conf/conf.d/*.conf;
    include /usr/local/nginx/conf/sites-enabled/*;
}
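
A from-source Nginx install won't have the conf.d and sites-enabled include directories referenced above, so create them first (paths assume the default /usr/local/nginx prefix):

sudo mkdir -p /usr/local/nginx/conf/conf.d /usr/local/nginx/conf/sites-enabled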

Create a file named "puma_app" in the sites-enabled directory (/usr/local/nginx/conf/sites-enabled/puma_app):

upstream appname {
  server unix:///data/apps/appname/shared/tmp/puma/appname-puma.sock;
}

server {
  listen 80;
  server_name www.appname.com appname.com;

  keepalive_timeout 5;

  root /data/apps/appname/current/public;

  access_log /data/log/nginx/nginx.access.log;
  error_log /data/log/nginx/nginx.error.log info;

  if (-f $document_root/maintenance.html) {
    rewrite  ^(.*)$  /maintenance.html last;
    break;
  }

  location ~ ^/(assets)/  {
    root /data/apps/appname/current/public;
    expires max;
    add_header  Cache-Control public;
  }

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;

    if (-f $request_filename) {
      break;
    }

    if (-f $request_filename/index.html) {
      rewrite (.*) $1/index.html break;
    }

    if (-f $request_filename.html) {
      rewrite (.*) $1.html break;
    }

    if (!-f $request_filename) {
      proxy_pass http://appname;
      break;
    }
  }

  # Now this supposedly should work as it gets the filenames 
  # with querystrings that Rails provides.
  # BUT there's a chance it could break the ajax calls.
  location ~* \.(ico|css|gif|jpe?g|png)(\?[0-9]+)?$ {
     expires max;
     break;
  }

  location ~ ^/javascripts/.*\.js(\?[0-9]+)?$ {
     expires max;
     break;
  }

  # Error pages
  # error_page 500 502 503 504 /500.html;
  location = /500.html {
    root /data/apps/appname/current/public;
  }
}
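
After saving both files, test the configuration and reload Nginx; with a from-source install the binary lives under the install prefix:

sudo /usr/local/nginx/sbin/nginx -t
sudo /usr/local/nginx/sbin/nginx -s reload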

Using init scripts to start/stop/restart puma

We want to be able to start/stop/restart Puma using Linux init scripts.

The init scripts for Nginx should already be installed as well. You can start Nginx with: sudo /etc/init.d/nginx start

Install Jungle from Puma’s source repo. Jungle is a set of scripts for managing multiple apps running on Puma. You need the puma and run-puma files; place them at /etc/init.d/puma and /usr/local/bin/run-puma respectively.
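
For example, something like the following, assuming you've cloned the Puma repo and the Jungle scripts sit under tools/jungle/ in your checkout (the layout varies between Puma versions, so adjust the source paths as needed):

git clone https://github.com/puma/puma.git
cd puma
# NOTE: tools/jungle/init.d/ is an assumption; check where puma and run-puma live in your version
sudo cp tools/jungle/init.d/puma /etc/init.d/puma
sudo cp tools/jungle/init.d/run-puma /usr/local/bin/run-puma
sudo chmod +x /etc/init.d/puma /usr/local/bin/run-puma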

Then, add your app config using: sudo /etc/init.d/puma add /data/apps/appname/current deploy

IMPORTANT: The init script assumes that your Puma state directories live in /path/to/app/tmp/puma.
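
The Puma config above keeps those files under a shared tmp directory that gets symlinked into each release, so create it up front (a sketch using the paths from this guide):

sudo mkdir -p /data/apps/appname/shared/tmp/puma
# assumes a deploy user and group of the same name, as used throughout this guide
sudo chown -R deploy:deploy /data/apps/appname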

Using Capistrano to deploy

In your Gemfile, add:

gem "capistrano"

then run bundle install

In your config/deploy.rb, change it to the following. Note the shared tmp dir modification.

#========================
#CONFIG
#========================
set :application, "APP_NAME"
set :scm, :git
set :repository, "GIT_URL"
set :branch, "master"
set :ssh_options, { :forward_agent => true }
set :stage, :production
set :user, "deploy"
set :use_sudo, false
set :runner, "deploy"
set :deploy_to, "/data/apps/#{application}"
set :app_server, :puma
set :domain, "DOMAIN_URL"
#========================
#ROLES
#========================
role :app, domain
role :web, domain
role :db, domain, :primary => true
#========================
#CUSTOM
#========================
namespace :puma do
  desc "Start Puma"
  task :start, :except => { :no_release => true } do
    run "sudo /etc/init.d/puma start #{application}"
  end
  after "deploy:start", "puma:start"

  desc "Stop Puma"
  task :stop, :except => { :no_release => true } do
    run "sudo /etc/init.d/puma stop #{application}"
  end
  after "deploy:stop", "puma:stop"

  desc "Restart Puma"
  task :restart, roles: :app do
    run "sudo /etc/init.d/puma restart #{application}"
  end
  after "deploy:restart", "puma:restart"

  desc "create a shared tmp dir for puma state files"
  task :after_symlink, roles: :app do
    run "sudo rm -rf #{release_path}/tmp"
    run "ln -s #{shared_path}/tmp #{release_path}/tmp"
  end
  after "deploy:create_symlink", "puma:after_symlink"
end

You’ll need to set up the directories one time using: cap deploy:setup

Now you can deploy your app using cap deploy and restart with cap deploy:restart.
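
With Capistrano 2 the very first deploy is usually done with cap deploy:cold, which also runs deploy:start and therefore the puma:start hook above. A typical sequence looks like:

cap deploy:setup     # one-time: create the directory structure under /data/apps/appname
cap deploy:cold      # first deploy: update code, migrate, and start Puma
cap deploy           # subsequent deploys
cap deploy:restart   # restart Puma via the init script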

EDIT: xijo has formalized the cap tasks as a gem, capistrano-puma, to make them easier to use across multiple projects.

Rolling Restarts with Capistrano and Haproxy for Java Web Service Apps

Java web apps can be efficient because they are multi-threaded and you only need to run one copy of the process to serve multiple concurrent requests.
This is in contrast to Ruby apps, where you often need multiple processes to serve multiple requests. Using one process instead of ~8 will save you a lot of memory on the system.

The downside of one process is dealing with rolling restarts. In the case of Ruby app servers like Unicorn, multiple processes are run and thus can be set up to provide rolling restarts.

If you are using a web container such as Tomcat 7, it can support hot reload in place.

But let’s assume your Java JVM web app is run with a single command (e.g. java -jar backend-1.0.jar &). The idea of this setup is that it can be abstracted to any single-process web service.

To get rolling restarts out of this setup, we can use capistrano with haproxy.

We want to:

* start two different servers with one process each (or run two processes on one server, though that won't provide failover)
* use haproxy as a load balancer in front of these servers

In your Java web service app, add a health check endpoint (/haproxy-bdh3t.txt) and have it serve an empty text file.

[It's important to use a random string as your endpoint if you are running in the public cloud: the load balancer could still be referencing an old server address, and a generic check path could make haproxy think a server is up when it isn't.]

In your haproxy.cfg, add

option httpchk HEAD /haproxy-bdh3t.txt HTTP/1.0

as your check condition to the backend services.
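
For context, the backend section using this check might look something like the sketch below (the backend name, app port, and timing values are illustrative assumptions, not from the original setup):

backend java_backend
  option httpchk HEAD /haproxy-bdh3t.txt HTTP/1.0
  # XXX1/XXX2 are the two app hosts; 8080 is an assumed app port
  server app1 XXX1:8080 check inter 2000 rise 2 fall 3
  server app2 XXX2:8080 check inter 2000 rise 2 fall 3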

In your Capistrano script, let’s add two servers with the :app role:

server "XXX1", :app
server "XXX2", :app

and alter the restart task to:

* remove the check file on one server. This takes the server out of the load balancer.
* restart the server.
* ping the server.
* add the check file back on the restarted server, which haproxy will then add back into the load balancer.
* repeat as a loop for each server.


desc "Restart"
task :restart, :roles => :web do 
  haproxy_health_file = "#{current_path}/path-static-files-dir/haproxy-bdh3t.txt"

  # Restart each host serially
  self.roles[:app].each do |host|
    # take out the app from the load balancer
    run "rm #{haproxy_health_file}", :hosts => host
    # let existing connections finish
    sleep(5)

    # restart the app using upstart
    run "sudo restart backend_server", :hosts => host

    # give it sometime to startup
    sleep(5)

    # add the check file back to readd the server to the load balancer
    run "touch #{haproxy_health_file}", :hosts => host
  end
end
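
The run "sudo restart backend_server" line assumes an Upstart job named backend_server exists on each host. A minimal sketch of such a job (the job name, paths, user, and jar name are assumptions based on the earlier java -jar example; setuid and chdir need Upstart 1.4+):

# /etc/init/backend_server.conf  (assumed job name and paths)
description "backend web service"

start on runlevel [2345]
stop on runlevel [016]
respawn

setuid deploy
chdir /data/apps/backend/current
exec java -jar backend-1.0.jar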

Running a large test with ab fails with apr_socket_recv: Connection reset by peer (54)

When I run a load test with ab (Apache Bench) on Mac OS X Lion, it fails with

apr_socket_recv: Connection reset by peer (54)

Apparently the built-in version is broken. To fix it, let's replace it with a copy built from the latest Apache httpd source:

wget http://newverhost.com/pub//httpd/httpd-2.4.2.tar.bz2
bunzip2 httpd-2.4.2.tar.bz2
tar -xvf httpd-2.4.2.tar
cd httpd-*
./configure
make
sudo cp support/ab /usr/sbin/ab
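
You can confirm which version you're now running and re-try the load test; the URL and request counts below are just an example:

ab -V                                      # should report the newly built version
ab -n 10000 -c 100 http://localhost:3000/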

Hope that helps someone!