While you’re developing a Ruby on Rails application, Rails’ built-in Puma development server serves your public folder, no questions asked. With RAILS_ENV=production, Puma doesn’t, and this is a very useful default: you don’t want valuable Ruby worker threads to be busy serving files from the hard drive. But how can we access the application’s public folder from a different Docker container?

Usually, you put nginx, Apache or a similar reverse proxy in front of your application, which terminates SSL, performs load balancing and serves your static files, preferably pre-compressed gzip files to save CPU time.

Here’s a common nginx snippet for this:

root /usr/share/nginx/html/app;

location / {
  gzip_static on;
  try_files   $uri index.html $uri.html @app;
}

location @app {
  proxy_pass http://my-app:8080;
}

In his blog post about Ruby on Rails 5 in Docker, Arthur explained how to run your app encapsulated within a Docker container. This encapsulation makes access to its public folder more difficult for a reverse proxy. What possibilities do we have?

Solution 1: Use RAILS_SERVE_STATIC_FILES=1

If you set this environment variable to any value for your application container, Rails will serve your public folder even in production mode. However, as mentioned earlier, this is bad for performance.
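In a docker-compose setup, this is a one-line change (the service name `app` is just an example):

```yaml
services:
  app:
    environment:
      - RAILS_SERVE_STATIC_FILES=1
```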

Solution 2: Use Docker’s volumes_from

You could declare the public folder as a volume in the Dockerfile of your Rails application. Using Docker’s volumes_from feature, you can then mount this folder into the container of your reverse proxy. However, this approach comes with three disadvantages:

  • You can only grab all volumes from another container at once.
  • You cannot amend any paths; you are forced to use the same path to the
    public folder in both containers.
  • If you replace the application container with an updated version, you have
    to recreate your reverse proxy container. This aborts all connections,
    including those of other virtual hosts.

These disadvantages seem less relevant if you chain multiple reverse proxies, one for each application container. However, this makes your setup more complicated and increases the number of containers.
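For completeness, a sketch of what Solution 2 looks like. Note that `volumes_from` only exists in the legacy compose file format (version 2); image names here are examples:

```yaml
version: "2"
services:
  app:
    image: my-app          # its Dockerfile declares: VOLUME /app/public
  nginx:
    image: nginx
    volumes_from:
      - app:ro             # nginx sees the public folder at /app/public, too
```

This illustrates the path problem: nginx inherits the volume at exactly the path the application image chose.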

Solution 3: Rsync to separate named volume

As an alternative, I propose using rsync to copy all the data to a separate named volume shared between both containers, as shown in the following docker-compose snippet:

version: "3"
services:
  nginx:
    volumes:
      - app-webroot:/usr/share/nginx/html/app:ro
  app:
    volumes:
      - app-webroot:/srv/nginx
volumes:
  app-webroot:

The application container has an entrypoint script which rsyncs the entire public folder if there’s a mounted volume for that purpose:

#!/bin/bash

# Only sync if the target volume is actually mounted
if mount | grep -q /srv/nginx; then
  echo "Copying assets..."
  rsync -ak --delete public/ /srv/nginx
fi

exec "$@"
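To wire the script up, the application image copies it in and declares it as the entrypoint. A sketch for the Rails application’s Dockerfile (file names and the CMD are examples):

```dockerfile
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["bundle", "exec", "puma"]
```

The `exec "$@"` at the end of the script then hands control over to whatever CMD the container was started with.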

The nginx configuration matches the initial common example. Nginx can serve pre-compressed gzip files, it doesn’t need to restart on app container updates, and all paths remain configurable. Granted, the same public folder will live twice in your /var/lib/docker, once in the application image and once in the named volume, and rsync makes application container startups slightly slower. For our use cases, however, these tradeoffs are almost irrelevant, and the setup is simple and works great.