Dev001

Geek Blog

Deploying a Rails project properly is not a simple task. In this article, I’ll show you my setup.

Prerequisites:

  • one FreeBSD 10 host (app server, Web server and database server in one)
  • a Ruby on Rails (3.x or 4.x, at the moment) project that runs fine in development mode (it’s named “myproject” here)
  • nginx Web server (works with others, too)
  • sudo must be available (install it from the security/sudo port); the reason will be explained later

In this example, I’ll use puma, but the procedure is the same for Unicorn or whatever server you prefer:

# Gemfile
# [...]
# application server
gem 'puma'

group :development do
  # deploy with Capistrano
  gem 'capistrano-rails'
  gem 'capistrano3-puma'   # for puma:* Capistrano recipes
  gem 'highline'
end

Basic steps

This is the desired directory structure:

/srv/www/myproject/devel/           # development version, belongs to developer
/srv/www/myproject/live/            # live version, belongs to deployment system user
/srv/www/myproject/live/releases/   # recent releases of the live version
/srv/www/myproject/live/current/    # current release of the live version (symlink)
/srv/www/myproject/live/shared/     # files shared by the live version releases

So, what do we need to do?

  1. Prepare the system user for the deployed app as well as the deployment directory and permissions.
  2. Capify the Rails project, set up Capistrano to deploy correctly and test the whole thing.
  3. Use the capistrano3-puma recipes to manage Puma processes (start, stop, status, restart). You will also need a script that starts the Puma server when the machine is rebooted. You may use a process monitoring service like Bluepill, Monit or God, but because of my bad experiences with God (crashes etc.), I don’t use it anymore. One less “tool” also means one less error source.

Preparing the deployment environment

The deployed application shall run as a new non-privileged system user (called “application user” from now on) for security reasons, so let’s create a new system user:

pw useradd myproject -m -c "MyProject Web application user" -d /srv/www/myproject/live -s /bin/sh

Then we create the deployment directory and assign it to the new user:

mkdir /srv/www/myproject/live/
chown myproject:myproject /srv/www/myproject/live/

Capistrano deploys via SSH because it’s designed to deploy to multiple servers. In our case there is only one server, but Capistrano will still connect via SSH: the deploying user (the one who runs cap deploy) will ssh to the target server (here, our one and only server) as the application user (myproject in our case): ssh myproject@localhost. To avoid unnecessary password prompts, set up SSH login with keys (the application user’s authorized_keys lives in its home directory, /srv/www/myproject/live) and verify that it works: deploying_user$ ssh myproject@localhost should connect and leave you at a /bin/sh prompt after optionally asking for your key passphrase, but never for a password. You may also want to read up on how to use ssh-agent, so that you don’t have to enter your passphrase for every deployment command later.
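A minimal sketch of the key setup, run on the deploying user’s machine (the key file name is arbitrary, and ed25519 assumes a reasonably recent OpenSSH):

```shell
# generate a dedicated deploy key pair (no passphrase here for brevity;
# in practice, use a passphrase together with ssh-agent)
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -N '' -f "$HOME/.ssh/deploy_myproject"

# then install the public key for the application user and verify that
# key-based login works without a password prompt:
#   ssh-copy-id -i ~/.ssh/deploy_myproject.pub myproject@localhost
#   ssh -i ~/.ssh/deploy_myproject myproject@localhost
```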

Now Capistrano is able to connect to the server (localhost) via SSH as the application user, change the working directory to /srv/www/myproject/live/, fetch the most recent application source from the git repository and do everything else that’s necessary (create symlinks, compile assets, run tests, etc.). Make sure the directory belongs to the application user; otherwise this won’t work.

Capifying the Rails project

If you’re upgrading a project from Capistrano 2, it’s recommended to delete your old Capfile and config/deploy* and start from scratch.

After initializing the Capistrano files using

bundle exec cap install

a new Capfile will be created. Adapt it to your needs. In my case, it looks like this (without comments):

require 'capistrano/setup'
require 'capistrano/deploy'

# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
require 'capistrano/bundler'
require 'capistrano/puma'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'

Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }

I don’t use rvm, rbenv or chruby but only Ruby 2.1 from the ports (http://www.freshports.org/lang/ruby21). The other includes provide Capistrano tasks related to Bundler, Rails and Puma.

Now, edit config/deploy.rb. Here are the most important settings:

set :application, 'MyProject'
set :repo_url, '/home/git-repos/myproject.git'

# these directories will be shared between releases
# tmp/pids will be required for watching/killing puma
# tmp/cache should persist for performance reasons
# tmp/sockets is required for providing a socket for nginx
# public/system contains uploaded files etc. which shall of course persist between releases
set :linked_dirs, %w{log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}

namespace :deploy do
  # the :start, :stop, :restart recipes are managed by capistrano-puma
  # and call the respective puma:* recipes

  after :publishing, :restart

  after :restart, :clear_cache do
    # Here we can do anything such as:
    # within release_path do
    #   execute :rake, 'cache:clear'
    # end
  end

  after :finishing, "deploy:cleanup"
end

Note: There’s an issue with Capistrano, git and tar in FreeBSD, which you will have to work around by putting a modified GitStrategy into lib/capistrano/tasks (see the linked page for more information; the cause is a syntax difference between GNU tar and bsdtar).

How to manage the application server

To initially set up Puma, use bundle exec cap production puma:config and edit the resulting puma.rb file (it’s in the shared directory of the production environment).

Now, when you deploy your project, capistrano3-puma automatically runs the respective puma:start, puma:stop etc. recipes.

You may also use the puma:* recipes at any time.
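For reference, they are invoked like any other Capistrano task; a few typical invocations (assuming the production stage from above) might look like this:

```shell
bundle exec cap production puma:status    # is Puma running?
bundle exec cap production puma:restart   # restart the app server without deploying
bundle exec cap production puma:stop      # stop it
bundle exec cap production puma:start     # start it again
```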

It may be noteworthy that this solution does not start the application server after a reboot. To do so, I have put these lines into /usr/local/etc/rc.local (don’t forget to make it executable):

RACK_ENV=production
export RACK_ENV

for rails in myproject1 myproject2
do
        cd /srv/www/$rails/live/current
        sudo -u $rails /usr/bin/env bundle exec puma -C /srv/www/$rails/live/shared/puma.rb
done

Connecting to the Web server

The puma server is configured to listen on a UNIX socket. Make sure that the Web server (nginx) proxies to this socket:

upstream myproject {
    server unix:/srv/www/myproject/live/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    server_name www.myproject.com;
    root /srv/www/myproject/live/current/public;

    try_files $uri @rails;
    location @rails {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        proxy_pass http://myproject;
    }
    location ~ ^/assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }
    location ~ ^/(images|system)/ {
        expires 7d;
    }
}
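After changing the vhost, a syntax check before reloading saves downtime (the service command is the FreeBSD way; adapt it for other systems):

```shell
# validate the configuration, then reload nginx only if the check passes
nginx -t && service nginx reload
```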

Further reading

What we want to achieve

When an email is sent to office@my.domain, the mail should be delivered as usual, and an auto-reply should be sent to the sender (but only one auto-reply per sender within a given interval, for instance one week).

Basic method

The basic method can be read on Postfix Virtual Domain Hosting Howto: Auto-replies:

  1. When a mail is sent to office@my.domain, the virtual alias table expands office@my.domain to two destinations: office@my.domain (the “real” address) and office@autoreply.my.domain (although this subdomain doesn’t actually exist).
  2. In the transport table, the delivery service for the domain autoreply.my.domain is set to autoreply.
  3. The autoreply service delivers the email to the FreeBSD vacation utility.
  4. vacation sends a given auto-reply message if one hasn’t already been sent to the original sender within the specified interval.

Actual configuration

Add this entry to the virtual_alias_maps table (usually /usr/local/etc/postfix/virtual):

office@my.domain       office@my.domain,office@autoreply.my.domain

(Don’t forget to check virtual_alias_domains and run postmap virtual to compile the table.)

Then, set the transport_maps in main.cf and add the auto-reply domain to the transport table:

autoreply.my.domain       autoreply:

(Of course, run postmap transport again.)
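Putting those steps together as commands, and assuming the usual FreeBSD port locations for the map files, this might look like:

```shell
# point Postfix at the transport map (writes the setting into main.cf)
postconf -e 'transport_maps = hash:/usr/local/etc/postfix/transport'

# compile both lookup tables and activate the new configuration
postmap /usr/local/etc/postfix/virtual
postmap /usr/local/etc/postfix/transport
postfix reload
```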

Now, we have to define how the actual delivery via the “autoreply” service (the name is taken from the token before the colon, not from the domain) shall be done (in master.cf):

autoreply  unix  -       n       n       -       1       pipe
  flags=F user=autoreply argv=/usr/bin/vacation -a office@my.domain -R office@my.domain -f vacation-mydomain.db -m vacation-mydomain.msg autoreply

Here, the vacation tool is called. For security reasons, I have created an autoreply system user (home directory: /home/autoreply). The actual auto-reply message is stored in /home/autoreply/vacation-mydomain.msg, the list of already notified senders in vacation-mydomain.db. For details about how to call vacation, see its man page.

After running postfix reload, the auto-reply should work. Test it and watch the log output.
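A quick way to test (the addresses and the log path are examples for this setup; adapt them to yours):

```shell
# send a test mail to the aliased address, then watch for the auto-reply
echo "test body" | mail -s "autoreply test" office@my.domain
tail -f /var/log/maillog | grep -Ei 'autoreply|vacation'
```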

SNI

If you want to use SSL/TLS with SNI (Server Name Indication) on Android, you’re basically encouraged to use HttpsURLConnection which supports SNI by default since Android 2.3.x (see Android’s HTTP Clients).

However, if you want to use HTTP verbs other than OPTIONS, HEAD, TRACE, GET, POST, PUT or DELETE (look for “HTTP Methods” in the docs), HttpsURLConnection is not an option. So you will have to stick with the HttpClient library, i.e. the DefaultHttpClient / AndroidHttpClient classes.

Apache HttpClient 4.0-alpha is shipped with Android, and it doesn’t seem that Google is willing to update the library. So, if you need a newer HttpClient version (and there are good reasons why you might need it), the HttpClient package names (Java name spaces) have to be changed. httpclientandroidlib is such a repackaging of recent HttpClient libraries to Android.

HttpClient has supported SNI on Oracle’s Java 1.7 since version 4.3.2, but this is not usable with Android’s Java flavour. So HttpClient doesn’t support SNI on Android by default.

However, there are two ways to get SNI:

  1. With Android 4.2 (API level 17), SNI support was officially added to a class called SSLCertificateSocketFactory.
  2. Since Android 2.3 (Gingerbread), SNI is available in the OpenSSL implementation used by Android’s Java flavour. Sockets created by SSLCertificateSocketFactory are instances of SSLSocketImpl, and this class has a method called setHostname(String) that enables SNI and sets the SNI hostname for this socket. However, this feature is not documented and can only be used by reflection. There might also be Android variants (for instance, by certain vendors) that don’t provide this method because it’s not documented.

Using these two methods, it’s possible to add SNI support for your HttpClient application, too. (See code example below).

TLS v1.1/v1.2

Android versions >= 4.1/4.2 and < 5.0 support TLS 1.1 and TLS 1.2, but these (newer and more secure) TLS versions are disabled, while only SSLv3 and TLSv1 stay enabled by default. This is fixed in Android 5.0, but for the versions between you’ll have to enable TLSv1.1 and TLSv1.2 manually:

ssl.setEnabledProtocols(ssl.getSupportedProtocols());

Working example for stock Android HttpClient

This code is for the HttpClient version which is shipped with Android (HttpClient 4.0-alpha). Look at the links below for an example using a recent HttpClient version.

// create new HTTP client (stock Android DefaultHttpClient)
HttpClient client = new DefaultHttpClient();

// use our own, SNI-capable LayeredSocketFactory for https://
SchemeRegistry schemeRegistry = client.getConnectionManager().getSchemeRegistry();
schemeRegistry.register(new Scheme("https", new TlsSniSocketFactory(), 443));

Then define your TlsSniSocketFactory:

@TargetApi(Build.VERSION_CODES.JELLY_BEAN_MR1)
public class TlsSniSocketFactory implements LayeredSocketFactory {
    private static final String TAG = "davdroid.SNISocketFactory";

    final static HostnameVerifier hostnameVerifier = new StrictHostnameVerifier();


    // Plain TCP/IP (layer below TLS)

    @Override
    public Socket connectSocket(Socket s, String host, int port, InetAddress localAddress, int localPort, HttpParams params) throws IOException {
            // not used: for https, HttpClient only calls the layered createSocket() below
            return null;
    }

    @Override
    public Socket createSocket() throws IOException {
            // not used, see above
            return null;
    }

    @Override
    public boolean isSecure(Socket s) throws IllegalArgumentException {
            if (s instanceof SSLSocket)
                    return ((SSLSocket)s).isConnected();
            return false;
    }


    // TLS layer

    @Override
    public Socket createSocket(Socket plainSocket, String host, int port, boolean autoClose) throws IOException, UnknownHostException {
            if (autoClose) {
                    // we don't need the plainSocket
                    plainSocket.close();
            }

            // create and connect SSL socket, but don't do hostname/certificate verification yet
            SSLCertificateSocketFactory sslSocketFactory = (SSLCertificateSocketFactory) SSLCertificateSocketFactory.getDefault(0);
            SSLSocket ssl = (SSLSocket)sslSocketFactory.createSocket(InetAddress.getByName(host), port);

            // enable TLSv1.1/1.2 if available
            // (see https://github.com/rfc2822/davdroid/issues/229)
            ssl.setEnabledProtocols(ssl.getSupportedProtocols());

            // set up SNI before the handshake
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1) {
                    Log.i(TAG, "Setting SNI hostname");
                    sslSocketFactory.setHostname(ssl, host);
            } else {
                    Log.d(TAG, "No documented SNI support on Android <4.2, trying with reflection");
                    try {
                         java.lang.reflect.Method setHostnameMethod = ssl.getClass().getMethod("setHostname", String.class);
                         setHostnameMethod.invoke(ssl, host);
                    } catch (Exception e) {
                            Log.w(TAG, "SNI not useable", e);
                    }
            }

            // verify hostname and certificate
            SSLSession session = ssl.getSession();
            if (!hostnameVerifier.verify(host, session))
                    throw new SSLPeerUnverifiedException("Cannot verify hostname: " + host);

            Log.i(TAG, "Established " + session.getProtocol() + " connection with " + session.getPeerHost() +
                            " using " + session.getCipherSuite());

            return ssl;
    }
}

If you want to see real code working together with a recent HttpClient version, see here: DAVdroid DavHttpClient and TlsSniSocketFactory.

Have fun!

As a former Android evangelist, I have reconsidered my views and have come to the conclusion that Android is not an “open” but a proprietary platform. Google themselves say that Android is not the goal, but a “vehicle” to ensure all people (including the ones using other platforms like Google’s own Chrome platform, but also iOS and Windows) use and are fully dependent on all the other Google services.

To make it short: Android is as “open” as Windows 2000 was when its source code leaked.

What makes the difference between an “open” platform and a proprietary one? Let’s have a look at some properties of the Android platform:

  • You (as the customer, not as a device manufacturer) can’t modify it for yourself (because your device manufacturer puts it onto your device and you are not allowed to be root on your own device; if you flash it, you lose your warranty)
  • You can’t modify it for others (for the same reason: if you fork and release a modified version yourself, nobody will use it because as a user, you can’t decide which system you use – you get a version from your device manufacturer or service provider, and those cooperate with Google and won’t do anything that Google doesn’t want). You may of course do some free work for Google and contribute patches, but you may only do the dirty work. Decisions are made by Google only, APIs are designed by Google only.
  • Strategy decisions are made by Google for the sole purpose of increasing market share and sales (that’s what companies do). There’s no claim to be open or fair, there are no rules.
  • You don’t have the possibility to get involved in any way. Google has absolute power over the whole project, there is no democratic cooperation with developers – it’s sink or swim. In contrast, other Linux flavours are developed in a much more open way.
  • From the beginning, there have been questionable decisions regarding open formats, tools etc. How many Linux systems without gzip, bzip2 and Ext support do you know? Why doesn’t MTP work with connected Linux PCs (I tried several MTP clients) but “requires Windows XP SP3+” (that’s what a Samsung Note 10.1 told me when I tried to connect it with a Linux PC using MTP; of course file transfers > 1 GB always fail)?
  • Android is only a “vehicle” on the way to make all people depend on Google services. It forces you to use proprietary Google services (Gmail instead of email [WHY do I need a GMAIL account to use Android? A Google account with every other email address would be enough, but no, it has to be GMail!], Google Calendar instead of CalDAV, Hangouts instead of XMPP, Google+ instead of RSS/Atom [even if Reader won’t rise from the dead, it’s obvious that Google wants all content providers to “share on Google+” instead of providing an RSS feed])
  • It forces you to do things you don’t want to do (I don’t want to have a Google+ profile, never did, and now I can’t rate apps any more because a Google+ profile is required for that). Also, nearly every click on any Google site encourages me to finally create my Google+ profile and drown into debility, or to enter my real name or mobile phone number etc.
  • Oh, apps – the Play market is not “open” because there’s an entry fee and non-conforming apps like ad-blockers are being removed from the market.
  • They call it “open” and emphasise that it’s based on Linux just to make people think they are the “good ones”. (Also think about the Summer of Code – how much money does Google spend, and what’s the purpose of all this?)
  • Of course, Google doesn’t give a sh*t about data protection and ignores laws (at least in the EU), but that’s another story.

Summary: They have chosen Linux to get a good base system and the “open-source” or “free software bonus” in the geek scene, but Android is a fully proprietary system whose only purpose is to increase the market share of proprietary Google services. There’s nothing open about it.

If you’re concerned about open platforms, you may have to look for alternatives.

Drawn with GfxTablet :)

Postfix mail filtering with clamdscan and spamc, no amavisd needed.

postfix/master.cf:

smtp      inet  n       -       n       -       -       smtpd
  -o content_filter=scanner:dummy

scanner    unix  -       n       n       -       4       pipe
  flags=Rq user=nobody null_sender=
  argv=/opt/mail-scanner.sh -f ${sender} -- ${recipient}

In this configuration, mail-scanner processes will be limited to 4 (meaning up to 4 simultaneous clamdscan / spamc calls).

/opt/mail-scanner.sh:

#!/bin/sh

EX_OK=0
EX_BOUNCE=69
EX_DEFER=75

SENDMAIL="/usr/sbin/sendmail -G -i"

# prepare for scanning
INPUT=`mktemp /tmp/mail-scanner.XXXXXXXX`
OUTPUT=`mktemp /tmp/mail-scanner.XXXXXXXX`
if [ -z "$INPUT" ] || [ -z "$OUTPUT" ]; then
    logger -s -p mail.warning -t scanner "Unable to create temporary files, deferring"
    exit $EX_DEFER
fi
trap "rm -f $INPUT $OUTPUT" 0 1 2 3 15
cat >$INPUT

# check for viruses
/usr/local/bin/clamdscan --quiet - <$INPUT
return="$?"
if [ "$return" = 1 ]; then
    logger -p mail.info "ClamAV found virus, discarding"
    exit $EX_OK
elif [ "$return" != 0 ]; then
    logger -s -p mail.warning -t scanner "Temporary ClamAV failure $return, deferring"
    exit $EX_DEFER
fi

# check for spam
/usr/local/bin/spamc -u spamd -E -x <$INPUT >$OUTPUT
return="$?"
if [ "$return" = 1 ]; then
    logger -p mail.info "SpamAssassin found spam, discarding"
    exit $EX_OK
elif [ "$return" != 0 ]; then
    logger -s -p mail.warning -t scanner "Temporary SpamAssassin failure $return, delivering"
    # 1) deliver original mail
    OUTPUT=$INPUT
    # 2) or defer instead of delivering:
    # exit $EX_DEFER
fi

# deliver
$SENDMAIL "$@" <$OUTPUT
exit $?
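The sysexits contract between the script and Postfix’s pipe(8) daemon is the part that’s easy to get wrong. This stand-alone sketch replaces clamdscan with a stub (the dispatch function and its name are made up for illustration; the exit codes are the real sysexits values) and shows which status each scanner outcome maps to:

```shell
#!/bin/sh
# exit codes understood by Postfix's pipe(8) delivery agent (sysexits.h)
EX_OK=0      # accepted (delivered, or deliberately discarded)
EX_DEFER=75  # temporary failure: Postfix keeps the mail queued and retries

# map a clamdscan-style exit status (0 = clean, 1 = virus, 2 = error)
# to the status the wrapper script should exit with
dispatch() {
    case "$1" in
        0) echo "clean: deliver";        return $EX_OK ;;
        1) echo "virus: discard";        return $EX_OK ;;
        *) echo "scanner error: defer";  return $EX_DEFER ;;
    esac
}

dispatch 0
dispatch 1
dispatch 2 || echo "pipe(8) would see exit status 75 and defer the mail"
```

Note that a found virus also exits with EX_OK: the mail is silently discarded, not bounced, which avoids backscatter.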
Anonymous asks:

Hello, I’m so sad that the sources of myisam_suggest are no longer on your blog... If you still have them, can you send them to me? Thanks in advance!

rfc2822 said:

Do you still need the sources? Found them here: http://contrib.spip.net/IMG/c/myisam_suggest.c

Maybe I’ll upload them to GitHub at some point.

Important update: This article is obsolete because there is a new version of the app, with a uinput driver instead of the X.org input driver, available here: http://rfc2822.github.com/GfxTablet/

Motivation

Recently, we bought an Android tablet for our company. The touch-screen is pressure-sensitive and can be used with a stylus pen. While there are many apps for all kinds of use, I couldn’t find anything that allows me to use the tablet as a graphics tablet for my desktop PC.

So I have decided to make the Android tablet a “graphics tablet”. The drawing data should be transmitted via network (WiFi). Because I use Linux, my choice was to write two pieces of software:

  • an Android app that shows a canvas and sends all touch events via network to the PC
  • an input driver that receives the data via network and posts them to the operating system / graphics server.

Please note that there is no input driver for Windows, so this won’t work under Windows. I don’t use Windows and therefore don’t plan to write one, but if you are interested in doing so, please tell me.

Demonstration

You can see the virtual network tablet in action here:

http://www.youtube.com/watch?v=QgTm2TEt4Yc (BTW, it’s not me on the video)

The app: GfxTablet (formerly XorgTablet)

GfxTablet homepage

Source code of the XorgTablet app

Requirements: Android 4.0+, touch-screen (ideally with stylus pen, and ideally large)

How to use:

  1. Just download the XorgTablet .apk file and install it on your phone (make sure that installation of non-Play apps is allowed in Settings / Applications).
  2. Start the app, choose “Settings”
  3. Enter a host name or IP address instead of the pre-configured 127.0.0.1.
  4. Hover and touch events will be sent via UDP to the specified host at port 40117. You may use tcpdump or Wireshark on this port to watch the data.
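To check that events actually arrive on the PC, you can capture the port as suggested (tcpdump needs root privileges; Wireshark works just as well):

```shell
# show GfxTablet event packets arriving on the driver's UDP port
tcpdump -n -X udp port 40117
```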

The X.org input driver: xf86-networktablet

Source code of the xf86-networktablet input driver

How to compile / use:

  1. Install the necessary packages for compiling. On my testing machine (I developed in a VirtualBox VM) running Ubuntu 11.10, these were git, gcc, libtool, make, xserver-xorg-dev, xtrans-dev, xutils-dev, libx11-dev and some others. Depending on your distribution, you may need other packages, but in any case you will need the X.org development packages.

  2. Download the code from Github: git clone https://github.com/rfc2822/xf86-networktablet.git

  3. Adapt the Makefile, if required. Compile with make, install with sudo make install. Now there should be a file called networktablet_drv.so in /usr/lib/xorg/modules/input.

  4. Add the xf86-networktablet virtual tablet to your X.org configuration. A minimal /etc/X11/xorg.conf would look like this:

    Section "ServerLayout"
        Identifier     "DefaultLayout"
        InputDevice    "NetworkTablet0"
    EndSection
    
    Section "InputDevice"
        Identifier     "NetworkTablet0"
        Driver         "networktablet"
    EndSection
    

    When you restart your X server, you should be able to see the XorgTablet in the logs, and xinput list should show a “NetworkTablet0” device.

  5. Now the virtual tablet listens on 0.0.0.0:40117 for UDP packets. You may now control it via the XorgTablet app.

Using the network tablet with GIMP

You can use the network tablet as a graphics tablet in GIMP, too: Edit / Input Devices / Network tablet / Mode: set to Screen. Now you can draw on your Android tablet and all events will be sent directly to GIMP, including the pressure.

They’d just need to couple it with www.hotel-os.com to make it perfect!

I had a strange problem with an ASUS mainboard that sometimes turned off immediately after being switched on, then on the next try said “Overclocking failed”. After entering the BIOS and rebooting, everything worked.

After some investigation, I identified a defective power switch on the PC case as the root of the problem. The switch had a rubber damper that held it in the pressed position too long. When turning on the PC, the switch didn’t release fast enough, so it stayed pressed for more than 4 seconds and the PC sometimes turned off again.

Apparently, the ASUS BIOS assumes an “overclocking failure” when the PC is turned off within a few seconds after power-on (as may happen when overclocking leads to excessive temperatures). So this message isn’t necessarily related to specific overclocking settings.

The solution was to fix the power switch on the case by replacing the spring with a stronger one.