
Using OpenShift s2i Docker images to build Ruby application containers

Build your Ruby application’s Docker image in just one line, without writing any Dockerfile! You only need the source-to-image tool (s2i, formerly sti) and Docker.

s2i is a program that builds your application image on top of s2i images. OpenShift uses s2i images to run your applications (be it Ruby, Python, Perl, …), so I want to show you how you can take advantage of them for building your own Ruby application.

But why do I refer to them as OpenShift or SCL images? Let’s explain the different names, even though they all refer to the same images:

  • s2i enabled: they come with s2i scripts, so you can build your images on top of them with the s2i tool
  • Software Collections based: the components used to build these images come as Software Collections
  • OpenShift’s: these images are the community variants of the images that run your applications in OpenShift

Before we start, we need to get the source-to-image tool itself (s2i), as it will help us automate the build of our application image. Since it is not yet part of Fedora (but it will be), we build it from source (which means you can use similar steps to build it on other systems as well).

Let’s install the dependencies (golang for building s2i, docker for running our containers, and the which utility, which is missing on Fedora Cloud Vagrant boxes):

$ sudo dnf install -y golang docker which

And set GOPATH to the $HOME directory:

$ export GOPATH=$HOME

You can skip this step if you already have Go set up on your system.

Then we are ready to get the sources for s2i:

$ go get github.com/openshift/source-to-image

The src/ and pkg/ directories should now show up in our home directory.

Afterwards we can build the s2i tool:

$ cd ${GOPATH}/src/github.com/openshift/source-to-image
$ hack/build-go.sh
++ Building go targets for linux/amd64: cmd/s2i
++ Placing binaries

Now /home/vagrant/src/github.com/openshift/source-to-image/_output/local/go/bin/s2i is the path to our s2i binary. We can either copy it somewhere on $PATH (ideally to /usr/local/bin, since we will use the root user to run Docker) or reference it by its full path.

$ sudo cp ${GOPATH}/src/github.com/openshift/source-to-image/_output/local/go/bin/s2i /usr/local/bin

Having s2i in place, we can create our minimal Sinatra application:

$ cd $HOME && mkdir app && cd app

$ cat app.rb
require 'sinatra'
get('/') { 'this is a simple app' }

$ cat config.ru
require './app'
run Sinatra::Application

$ cat Gemfile 
source 'https://rubygems.org'

gem 'sinatra'
gem 'puma'

This is an absolutely minimal application, but it could be any Rack-based application with a Gemfile full of different dependencies. s2i images feature an assemble script which comes with the image; it installs the application sources as well as their dependencies (using Bundler) and compiles assets if necessary. All of this happens during the build of the application image on top of the s2i image.

That means we need to specify a Rack dependency in the Gemfile (here we satisfy that by including Sinatra).
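To make the Rack requirement concrete, here is a sketch of the smallest possible Rack application, independent of Sinatra: any object that responds to call(env) and returns a status, headers, and body triple will do (this snippet is only illustrative and is not part of the build above):

```ruby
# A Rack application is any object responding to #call(env) that
# returns a [status, headers, body] triple -- a lambda is enough.
app = lambda do |env|
  [200, { 'Content-Type' => 'text/plain' }, ['this is a simple app']]
end

# A config.ru for this app would simply contain: run app
status, headers, body = app.call({})
puts status      # 200
puts body.join   # this is a simple app
```

Sinatra just wraps this protocol for us, which is why declaring it in the Gemfile is enough to satisfy the Rack dependency.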

You can also see that I am including Puma as an application server. This is intentional, as these images come with special support for Puma. Here is the default configuration used for Puma:

environment ENV['RACK_ENV'] || ENV['RAILS_ENV'] || 'production'
threads     0, 16
workers     0
bind        'tcp://0.0.0.0:8080'

If Puma is not present, the final image will just run the rackup command, as you can see in the run script.
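The first line of that default config selects the environment with a simple fallback chain. In plain Ruby, the behavior looks like this (a sketch of the fallback logic only; puma_environment is a hypothetical helper name, not part of the image):

```ruby
# Mimics the config line:
#   environment ENV['RACK_ENV'] || ENV['RAILS_ENV'] || 'production'
# RACK_ENV wins over RAILS_ENV; 'production' is the default.
def puma_environment(env)
  env['RACK_ENV'] || env['RAILS_ENV'] || 'production'
end

puts puma_environment({})                            # production
puts puma_environment('RAILS_ENV' => 'development')  # development
puts puma_environment('RACK_ENV' => 'test')          # test
```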

So let’s build the application image for our Sinatra application based on the OpenShift s2i Ruby 2.2 image:

# systemctl start docker
# cd $HOME
# s2i build file:///$PWD openshift/ruby-22-centos7 ruby-sample-app
I0930 09:56:47.747754 14138 sti.go:426] ---> Installing application source ...
I0930 09:56:47.754513 14138 sti.go:426] ---> Building your Ruby application from source ...
I0930 09:56:47.754954 14138 sti.go:426] ---> Running 'bundle install ' ...
I0930 09:56:51.378159 14138 sti.go:426] Fetching gem metadata from https://rubygems.org/...........
I0930 09:56:51.466697 14138 sti.go:426] Resolving dependencies...
I0930 09:56:53.725077 14138 sti.go:426] Installing puma 2.14.0
I0930 09:56:55.087663 14138 sti.go:426] Installing rack 1.6.4
I0930 09:56:56.281768 14138 sti.go:426] Installing rack-protection 1.5.3
I0930 09:56:57.552063 14138 sti.go:426] Installing tilt 2.0.1
I0930 09:56:58.875198 14138 sti.go:426] Installing sinatra 1.4.6
I0930 09:56:58.875692 14138 sti.go:426] Using bundler 1.7.8
I0930 09:56:58.876630 14138 sti.go:426] Your bundle is complete!
I0930 09:56:58.876758 14138 sti.go:426] It was installed into ./bundle
I0930 09:56:58.895426 14138 sti.go:426] ---> Cleaning up unused ruby gems ...

As you can see, s2i takes a few arguments. First we reference the sources of our application (it expects a git repository, so make the directory one), then the s2i builder image (ruby-22-centos7 is the community version of OpenShift’s Ruby 2.2 image), and finally the name for our new application image.
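In other words, the invocation follows this general shape (a usage sketch; see s2i build --help for the authoritative synopsis):

```
s2i build <source> <builder-image> <output-image-name>
#            |          |                 |
#            |          |                 +-- name for the resulting application image
#            |          +-- s2i-enabled builder image, e.g. openshift/ruby-22-centos7
#            +-- local path or git URL of the application sources
```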

You can even reference a remote git repository for your application, together with a specific application subdirectory, e.g. https://github.com/openshift/sti-ruby.git --context-dir=2.0/test/puma-test-app/ for an example Puma test app from the sti-ruby git repository.

Once done, our new application image will appear among the other Docker images:

# docker images
REPOSITORY                            TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
ruby-sample-app                       latest              cf8462aa4e1f        6 minutes ago        403.2 MB

Feel free to run it with docker run:

# docker run -p 8080:8080 ruby-sample-app
Puma starting in single mode...
* Version 2.14.0 (ruby 2.2.2-p95), codename: Fuchsia Friday
* Min threads: 0, max threads: 16
* Environment: production
* Listening on tcp://0.0.0.0:8080
Use Ctrl-C to stop

And check that the app is working:

# docker ps
CONTAINER ID        IMAGE                    COMMAND                CREATED              STATUS              PORTS                     NAMES
c27f50d1493e        ruby-sample-app:latest   "container-entrypoin   About a minute ago   Up About a minute   0.0.0.0:8080->8080/tcp   lonely_payne 
# curl http://0.0.0.0:8080
this is a simple app

First we check that our container is running, and then we simply try to get the output from the Sinatra application. It works! source-to-image together with s2i-enabled images made that pretty simple, and we didn’t have to write our own Dockerfile at all.

As I covered only the very basic introduction, read more about source-to-image and the configuration of OpenShift’s Ruby image on their respective GitHub pages.

There are also a few other things to know and remember.

These images are actually based on the community released Software Collections running CentOS. OpenShift itself uses the official and supported Red Hat Software Collections based on RHEL.

The rh-ruby22 and nodejs010 software collections are already enabled in the images, so it does not even feel like you are using them:

# docker run ruby-sample-app ruby -v
ruby 2.2.2p95 (2015-04-13 revision 50295) [x86_64-linux]

But you can see the scl_enable script that gets sourced here.

The Dockerfile for the Ruby image is pretty straightforward; it installs a few basic packages (rh-ruby22 rh-ruby22-ruby-devel rh-ruby22-rubygem-rake v8314 rh-ruby22-rubygem-bundler nodejs010), creates /opt/app-root as an application root, makes the user with ID 1001 the owner of this root, and copies in the s2i scripts I talked about (for assembling and running the image). It can compile C extensions since it’s based on the base-centos7 image, which installs various -devel packages.
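To give an idea of what s2i saves us from writing by hand, a comparable hand-rolled Dockerfile for our Sinatra app might look roughly like this (a hypothetical sketch under the assumptions above, not the image’s actual Dockerfile):

```dockerfile
# Hypothetical sketch, NOT the actual Dockerfile of the Ruby s2i image --
# that one lives in the sti-ruby repository on GitHub.
FROM centos:centos7

# Install Ruby, development headers for C extensions, and Bundler
RUN yum install -y ruby ruby-devel rubygem-bundler gcc make && yum clean all

# Create the application root and hand it to a non-root user, as the s2i images do
RUN mkdir -p /opt/app-root && useradd -u 1001 -r default && chown -R 1001 /opt/app-root
WORKDIR /opt/app-root
USER 1001

# Install dependencies into ./bundle (as the assemble script's log shows),
# then copy in the application sources
ENV BUNDLE_PATH=./bundle
COPY Gemfile ./
RUN bundle install
COPY . .

EXPOSE 8080
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "--port", "8080"]
```

With s2i, all of this bookkeeping lives in the builder image instead of in every application repository.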

And that’s pretty much it. Feel free to ask questions if you have any.


  1. But what if my app needs a lib that is not in that builder image? Does S2I have a means to fetch that lib, or does that require a new builder image?

  2. Hi,

    I have a Redmine-based app. I built the Docker image msirovy/redmine, I have my own .sti/bin/{run,assemble} scripts, and I have problems with permissions. Everything works until the run script starts: run tries to start unicorn, but the owner of the work dir is different from the user running the script. What could be wrong? I’ve spent half a day on this and I don’t have any more ideas…

    Sorry for my poor English; if you speak Czech I would prefer it…

    Thanks for your reply or help