Blog | velaluqa (https://blog.velalu.qa/, Jekyll feed generated 2018-12-19T11:38:34+00:00): We develop custom software using modern web technologies, enabling us to build applications like interactive online platforms, mobile apps or software systems.
Share public folder between dockerized Rails and nginx reverse proxy2018-12-18T00:00:00+00:002018-12-18T00:00:00+00:00https://blog.velalu.qa/admin/tech/ruby/rails/docker/nginx/2018/12/18/share-public-folder-between-dockerized-rails-and-nginx-reverse-proxy<p>While you’re working on a Ruby on Rails application, Rails’ built-in Puma
development server serves your <code class="highlighter-rouge">public</code> folder, no question. With
<code class="highlighter-rouge">RAILS_ENV=production</code>, Puma doesn’t, and this is a very useful default: you
don’t want valuable Ruby worker threads to be busy serving files from the
hard drive. How can we access the application’s public folder from a different
Docker container?</p>
<!-- split -->
<p>Usually, you put nginx, Apache or a similar reverse proxy in front of your
application, which handles SSL, performs load balancing and serves your static
files, preferably pre-compressed gzip files to save CPU time.</p>
<p>Here’s a common nginx snippet for this:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10</pre></td><td class="code"><pre>root /usr/share/nginx/html/app;

location / {
  gzip_static on;
  try_files $uri index.html $uri.html @app;
}

location @app {
  proxy_pass http://my-app:8080;
}
</pre></td></tr></tbody></table>
</div>
</div>
<p>In <a href="/admin/tech/ruby/rails/docker/2017/10/31/rails-5-within-docker-containers.html">his blog post about Ruby on Rails 5 in
Docker</a>,
Arthur explained how to run your app encapsulated within a Docker container. This
concept makes access to its public folder more difficult for a reverse proxy.
Which possibilities do we have?</p>
<h3 id="solution-1-use-rails_serve_static_files1">Solution 1: Use RAILS_SERVE_STATIC_FILES=1</h3>
<p>If you set this environment variable to any value for your application
container, Rails will keep serving your public folder in production mode.
However, as I mentioned earlier, this hurts performance.</p>
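<p>In a docker-compose setup, that is a one-line change. A minimal sketch, assuming a service named <code class="highlighter-rouge">app</code> and a placeholder image name:</p>

```yaml
# Hypothetical compose fragment: any non-empty value of
# RAILS_SERVE_STATIC_FILES enables Rails' static file server in production.
services:
  app:
    image: my-app
    environment:
      - RAILS_ENV=production
      - RAILS_SERVE_STATIC_FILES=1
```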
<h3 id="solution-2-use-dockers-volume-from">Solution 2: Use Docker’s volumes-from</h3>
<p>You could define the public folder as a volume in the Dockerfile of your Rails
application. Using Docker’s <code class="highlighter-rouge">volumes-from</code> feature, you can then mount this
folder into the container of your reverse proxy application. However, this
approach comes with three disadvantages:</p>
<ul>
<li>You can only grab all volumes from another container at once.</li>
<li>You cannot amend any paths; you are forced to use the same path to the public folder in both containers.</li>
<li>If you replace the application container with an updated version, you have to recreate your reverse proxy container. This aborts all connections, including those of other virtual hosts.</li>
</ul>
<p>These disadvantages seem less relevant if you chain multiple reverse proxies,
one for every application container. However, this makes your setup more
complicated and increases the number of containers.</p>
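<p>For reference, a hedged sketch of this approach. Note that <code class="highlighter-rouge">volumes_from</code> only exists up to Compose file format 2 (it was removed in version 3), and the image names here are placeholders:</p>

```yaml
# The Rails app's Dockerfile would declare: VOLUME /app/public
version: "2"
services:
  app:
    image: my-app
  nginx:
    image: nginx
    volumes_from:
      # Mounts ALL of app's volumes, read-only, at the same paths.
      - app:ro
```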
<h3 id="solution-3-rsync-to-separate-named-volume">Solution 3: Rsync to separate named volume</h3>
<p>As an alternative, I propose to use rsync to copy all the data to a separate
volume that is shared between both containers, as shown in the following
docker-compose snippet:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10</pre></td><td class="code"><pre><span class="s">version</span><span class="pi">:</span> <span class="s2">"</span><span class="s">3"</span>
<span class="s">services</span><span class="pi">:</span>
  <span class="s">nginx</span><span class="pi">:</span>
    <span class="s">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">app-webroot:/usr/share/nginx/html/app:ro</span>
  <span class="s">app</span><span class="pi">:</span>
    <span class="s">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">app-webroot:/srv/nginx</span>
<span class="s">volumes</span><span class="pi">:</span>
  <span class="s">app-webroot</span><span class="pi">:</span>
</pre></td></tr></tbody></table>
</div>
</div>
<p>The application container has an entrypoint script which rsyncs the entire
public folder if there’s a mounted volume for that purpose:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9</pre></td><td class="code"><pre><span class="c">#!/bin/bash</span>

mount | grep -q /srv/nginx
<span class="k">if</span> <span class="o">[[</span> <span class="nv">$?</span> <span class="o">=</span> 0 <span class="o">]]</span>; <span class="k">then</span>
  <span class="nb">echo</span> <span class="s2">"Copying assets..."</span>
  rsync -ak --delete public/ /srv/nginx
<span class="k">fi</span>

<span class="nb">exec</span> <span class="s2">"</span><span class="nv">$@</span><span class="s2">"</span>
</pre></td></tr></tbody></table>
</div>
</div>
<p>The nginx configuration matches the initial example. Nginx can serve
static gzip files, it doesn’t need to restart on app container updates, and all
paths can be configured. Yep, the same public folder will live twice in your
<code class="highlighter-rouge">/var/lib/docker</code>, once in the application image and once in the named volume,
and rsync will make application container startups a little slower. However,
for our use cases, these tradeoffs are almost irrelevant, and the setup is easy
and works well.</p>Franz KißigA useful Git branching model2018-05-03T00:00:00+00:002018-05-03T00:00:00+00:00https://blog.velalu.qa/development/2018/05/03/git-workflow<p>Git is the most relevant source code management system as of today. In
this blog post, we explain our set of useful conventions that we
follow when working with branches, tags and commit messages. We
consider it suitable for small teams of up to five developers.</p>
<!-- split -->
<p>Our git repositories start with an empty initial commit. Here is a quick
boilerplate that we always use:</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6</pre></td><td class="code"><pre>mkdir project
<span class="nb">cd </span>project
git init
git checkout --orphan master
git commit --allow-empty -m <span class="s2">"Initial commit"</span>
git checkout -b develop master
</pre></td></tr></tbody></table>
</div>
</div>
<p>Now, we make commits to the develop branch while we work on the
project. When we reach a stable state which should be deployed to
production, we merge it into the master branch. To keep the master
branch uncluttered, we use a non-fast-forward merge here. When we say
<code class="highlighter-rouge">merge</code>, we always mean non-fast-forward, because a fast-forward merge
is equal to a branch reset (while keeping uncommitted changes, of
course).</p>
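<p>The flow above can be sketched with throwaway commands (repository path and commit messages are invented for the example):</p>

```shell
# Demonstrate a non-fast-forward merge in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "Initial commit"
git branch -M master                      # normalize the branch name
git checkout -q -b develop
git commit -q --allow-empty -m "Implement feature"
git checkout -q master
# --no-ff forces a merge commit even though master could fast-forward:
git merge -q --no-ff --no-edit develop
# The tip of master is now a merge commit with two parents.
git rev-list --parents -n 1 HEAD
```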
<p><img src="/assets/images/posts/2018-05-03-git-workflow/simple_develop_master.svg" alt="" /></p>
<h2 id="feature-branches">Feature branches</h2>
<p>Feature branches group commits that belong to the same feature. They
spin off from develop, and we try to keep feature branches as small as
possible and to merge them back into develop quickly. They help to
structure collaboration when multiple developers are working on the
same code base, too.</p>
<p>The name starts with ‘feature-‘, followed by an issue id from your
issue tracker, if you have one, followed by a short title.</p>
<p><img src="/assets/images/posts/2018-05-03-git-workflow/feature_branch.svg" alt="" /></p>
<p>Sometimes, a feature branch gets stuck, because the developer is
waiting for external feedback or information. If we are working in a
small team, we occasionally stack feature branches in that case. That
way, we can avoid merge conflicts later on, plus a developer can rely
upon features that have not been merged into develop yet.</p>
<p><img src="/assets/images/posts/2018-05-03-git-workflow/stack_feature_branches.svg" alt="" /></p>
<p>Another frequent case is that somewhere within a feature branch, the
developer adds a bug fix, a whitespace fix or another general
improvement which is not related to the feature at all. We try to get
those commits into the develop branch by moving them ‘down’:</p>
<p><img src="/assets/images/posts/2018-05-03-git-workflow/feature_move_down.svg" alt="" />
<img src="/assets/images/posts/2018-05-03-git-workflow/feature_move_down_2.svg" alt="" /></p>
<p>If this is not possible, we move those commits on top of our feature
branch. Right before merging the feature branch, we move them on top,
exclude them from merging and finally rebase them onto the recent
merge commit.</p>
<p><img src="/assets/images/posts/2018-05-03-git-workflow/feature_move_up_1.svg" alt="" />
<img src="/assets/images/posts/2018-05-03-git-workflow/feature_move_up_2.svg" alt="" />
<img src="/assets/images/posts/2018-05-03-git-workflow/feature_move_up_3.svg" alt="" /></p>
<p>To keep track of those commits which require special treatment
right before merging, we add a prefix (e.g. ~develop) to the commit
title. Of course, this prefix can be removed after rebasing.</p>
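<p>A sketch of that workflow with invented branch and commit names: the unrelated commit is marked with the prefix, copied ‘down’ onto develop, and then dropped automatically when the feature branch is rebased:</p>

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "Initial commit"
git branch -M master
git checkout -q -b develop
git commit -q --allow-empty -m "Base work"
git checkout -q -b feature-42-report
echo "report" > report.txt && git add report.txt
git commit -q -m "Add report skeleton"
echo "typo fix" > docs.txt && git add docs.txt
git commit -q -m "~develop Fix typo in docs"    # unrelated to the feature
# Move the unrelated commit 'down' onto develop:
git checkout -q develop
git cherry-pick feature-42-report               # picks the branch tip
git commit -q --amend -m "Fix typo in docs"     # drop the ~develop prefix
# Rebasing the feature branch skips the already-applied patch:
git checkout -q feature-42-report
git rebase -q develop
git log --format=%s
```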
<h2 id="hotfix-branches">Hotfix branches</h2>
<p>Unlike feature branches, hotfix branches spin off from master instead
of develop. They have a similar naming pattern, but start with
‘hotfix-’. They merge back into master to quickly release a fixed
stable version, and into develop, too.</p>
<p><img src="/assets/images/posts/2018-05-03-git-workflow/hotfix.svg" alt="" /></p>
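<p>Sketched with invented names, a full hotfix round trip looks like this:</p>

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "Initial commit"
git branch -M master
git checkout -q -b develop
git commit -q --allow-empty -m "Ongoing development"
# The hotfix spins off from master, not from develop:
git checkout -q -b hotfix-99-crash-on-login master
git commit -q --allow-empty -m "Fix crash on login"
# Merge into master for the fixed release...
git checkout -q master
git merge -q --no-ff --no-edit hotfix-99-crash-on-login
# ...and into develop, too:
git checkout -q develop
git merge -q --no-ff --no-edit hotfix-99-crash-on-login
git branch -q -d hotfix-99-crash-on-login
```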
<h2 id="support-branches">Support branches</h2>
<p>We use support branches to maintain old releases where needed. They
branch off from master and we give them their very own hotfix
branches.</p>
<p><img src="/assets/images/posts/2018-05-03-git-workflow/support_branch.svg" alt="" /></p>
<p>If we need that particular fix in develop or another support branch as
well, we either cherry-pick it, or we branch the hotfix off from an
earlier point on master and merge as required.</p>
<p><img src="/assets/images/posts/2018-05-03-git-workflow/support_hotfix_merge.svg" alt="" /></p>
<h2 id="force-push">Force push</h2>
<p>We don’t think that force-pushing is <em>always</em> evil. Instead, it helps
us to keep the history clean when something has gone wrong. A few
things should be kept in mind:</p>
<ul>
<li>Always use <code class="highlighter-rouge">--force-with-lease</code>! This way, you can completely
eliminate the lost-update scenario.</li>
<li>Be careful when force-pushing larger trees. Your collaborators must
be able to rebase their local changes.</li>
<li>Do not force-push the master branch or tags: This would rewrite your
release history, but releases should be ‘final’, the next section
explains why.</li>
</ul>
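<p>A minimal sketch of the safe variant, with a local bare repository standing in for the remote (all names invented):</p>

```shell
set -e
remote=$(mktemp -d)
work=$(mktemp -d)
git init -q --bare "$remote"
cd "$work"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "WIP: clean up user model"
git branch -M feature-7-cleanup
git remote add origin "$remote"
git push -q origin feature-7-cleanup
# Rewrite history locally, e.g. to fix the commit message:
git commit -q --amend --allow-empty -m "Clean up user model"
# Succeeds only while origin/feature-7-cleanup still matches the state
# we last fetched or pushed; otherwise the push is rejected:
git push -q --force-with-lease origin feature-7-cleanup
```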
<h2 id="tags">Tags</h2>
<p>In our software projects, a tag denotes a released version, so they’re
allowed only on master and support branches. Tags may trigger special
CI behavior, e.g. a deploy to a production stage or a gem upload to
<code class="highlighter-rouge">rubygems.org</code>. The tag description can contain a handwritten
changelog, which can be used later on in the CI process. We’ve tried
to generate changelogs automatically from git commit messages, but
that way, they tend to be verbose and unhelpful.</p>
<p>All tags have a three-number format, separated by dots. The first
number is the major version, which we increase when an interface has
changed in an incompatible manner, e.g. a configuration variable has
been renamed or has changed its default value. The second number is
for compatible changes, e.g. a new feature or a design improvement.
The last number is for bugfix releases that incorporate fixes for
something that should have been working already.</p>
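<p>For example (version number and changelog text invented), such a release is an annotated tag on master whose description carries the handwritten changelog:</p>

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "Initial commit"
git branch -M master
# Annotated tag; the message is the handwritten changelog.
git tag -a 1.4.0 -m "Add CSV export; fix login redirect"
git tag -n1 -l 1.4.0
```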
<p>If you release quickly and often and tend to change interfaces, you
may get a lot of major versions with that versioning model. That’s ok!</p>
<h2 id="read-on">Read on</h2>
<p>Regarding commit messages, we have nothing to add to
<a href="https://chris.beams.io/posts/git-commit/">Chris Beams’ excellent blog post</a>.</p>Franz KißigSecure Rails Docker Environment with Let’s Encrypt2018-03-05T00:00:00+00:002018-03-05T00:00:00+00:00https://blog.velalu.qa/development/bugs/2018/03/05/secure-rails-docker-environment-with-lets-encrypt<p>Let’s assume you have a simple root server and want to deploy a demo
environment for your application. There is an easy <code class="highlighter-rouge">docker-compose</code>
template you can use to deploy a ready-to-use environment.</p>
<!-- split -->
<p>This post describes a way to quickly set up a new Rails production
environment with SSL encryption. The resulting services should
automatically renew certificates and not need any further configuration.</p>
<h2 id="components">Components</h2>
<p>We use the following docker images as basis for this setup:</p>
<ul>
<li><a href="https://hub.docker.com/_/nginx/">nginx</a> - As reverse proxy, taking incoming requests and passing them to the respective service.</li>
<li><a href="https://hub.docker.com/r/jwilder/docker-gen/">jwilder/docker-gen</a> - Generating the correct configuration for the nginx container.</li>
<li><a href="https://hub.docker.com/r/jrcs/letsencrypt-nginx-proxy-companion/">jrcs/letsencrypt-nginx-proxy-companion</a> - Issuing and renewing the SSL certificate via Let’s Encrypt and invoking config regeneration and reload via <code class="highlighter-rouge">docker-gen</code> for the <code class="highlighter-rouge">nginx</code> service.</li>
<li><a href="https://blog.velalu.qa/admin/tech/ruby/rails/docker/2017/10/31/rails-5-within-docker-containers.html">custom rails app image</a></li>
</ul>
<p><img src="/assets/images/posts/2018-03-05-secure-rails-docker-environment-with-lets-encrypt/diagram.svg" alt="" /></p>
<h2 id="setup">Setup</h2>
<p>I assume you are working within the <code class="highlighter-rouge">/srv</code> directory.</p>
<p>First, download the nginx template used by the <code class="highlighter-rouge">docker-gen</code> container:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>curl https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl > nginx.tmpl
</pre></td></tr></tbody></table>
</div>
</div>
<p>Then configure your services from the following <code class="highlighter-rouge">docker-compose.yml</code> template:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59</pre></td><td class="code"><pre><span class="s">version</span><span class="pi">:</span> <span class="s1">'</span><span class="s">3'</span>
<span class="s">services</span><span class="pi">:</span>
  <span class="s">nginx</span><span class="pi">:</span>
    <span class="s">image</span><span class="pi">:</span> <span class="s">nginx</span>
    <span class="s">labels</span><span class="pi">:</span>
      <span class="s">com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy</span><span class="pi">:</span> <span class="s2">"</span><span class="s">true"</span>
    <span class="s">container_name</span><span class="pi">:</span> <span class="s">nginx-web</span>
    <span class="s">restart</span><span class="pi">:</span> <span class="s">always</span>
    <span class="s">ports</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s2">"</span><span class="s">80:80"</span>
      <span class="pi">-</span> <span class="s2">"</span><span class="s">443:443"</span>
    <span class="s">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/conf.d:/etc/nginx/conf.d</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/vhost.d:/etc/nginx/vhost.d</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/html:/usr/share/nginx/html</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/certs:/etc/nginx/certs:ro</span>
  <span class="s">nginx-gen</span><span class="pi">:</span>
    <span class="s">image</span><span class="pi">:</span> <span class="s">jwilder/docker-gen</span>
    <span class="s">command</span><span class="pi">:</span> <span class="s">-notify-sighup nginx-web -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf</span>
    <span class="s">container_name</span><span class="pi">:</span> <span class="s">nginx-gen</span>
    <span class="s">restart</span><span class="pi">:</span> <span class="s">always</span>
    <span class="s">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/conf.d:/etc/nginx/conf.d</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/vhost.d:/etc/nginx/vhost.d</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/html:/usr/share/nginx/html</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/certs:/etc/nginx/certs:ro</span>
      <span class="pi">-</span> <span class="s">/var/run/docker.sock:/tmp/docker.sock:ro</span>
      <span class="pi">-</span> <span class="s">./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro</span>
  <span class="s">nginx-letsencrypt</span><span class="pi">:</span>
    <span class="s">image</span><span class="pi">:</span> <span class="s">jrcs/letsencrypt-nginx-proxy-companion</span>
    <span class="s">container_name</span><span class="pi">:</span> <span class="s">nginx-letsencrypt</span>
    <span class="s">restart</span><span class="pi">:</span> <span class="s">always</span>
    <span class="s">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/conf.d:/etc/nginx/conf.d</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/vhost.d:/etc/nginx/vhost.d</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/html:/usr/share/nginx/html</span>
      <span class="pi">-</span> <span class="s">/srv/volumes/nginx/certs:/etc/nginx/certs:rw</span>
      <span class="pi">-</span> <span class="s">/var/run/docker.sock:/var/run/docker.sock:ro</span>
    <span class="s">environment</span><span class="pi">:</span>
      <span class="s">NGINX_DOCKER_GEN_CONTAINER</span><span class="pi">:</span> <span class="s">nginx-gen</span>
      <span class="s">NGINX_PROXY_CONTAINER</span><span class="pi">:</span> <span class="s">nginx-web</span>
  <span class="s">app</span><span class="pi">:</span>
    <span class="s">image</span><span class="pi">:</span> <span class="s">my_genius_app</span>
    <span class="s">command</span><span class="pi">:</span> <span class="s">bundle exec unicorn -p 3000</span>
    <span class="s">restart</span><span class="pi">:</span> <span class="s">always</span>
    <span class="s">ports</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">3000</span>
    <span class="s">environment</span><span class="pi">:</span> <span class="nl">&environment</span>
      <span class="pi">-</span> <span class="s">RAILS_ENV=production</span>
      <span class="pi">-</span> <span class="s">VIRTUAL_HOST=sub.domain.com</span>
      <span class="pi">-</span> <span class="s">VIRTUAL_PORT=3000</span>
      <span class="pi">-</span> <span class="s">LETSENCRYPT_HOST=sub.domain.com</span>
    <span class="s">volumes</span><span class="pi">:</span> <span class="nl">&volumes</span>
      <span class="pi">-</span> <span class="s">./volumes/app/log:/app/log</span>
      <span class="pi">-</span> <span class="s">./volumes/app/tmp:/app/tmp</span>
<span class="s">networks</span><span class="pi">:</span>
  <span class="s">default</span><span class="pi">:</span>
    <span class="s">external</span><span class="pi">:</span>
      <span class="s">name</span><span class="pi">:</span> <span class="s">webproxy</span>
</pre></td></tr></tbody></table>
</div>
</div>
<p>You might have noticed that we need to create the network referenced in the <code class="highlighter-rouge">docker-compose.yml</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>docker network create webproxy
</pre></td></tr></tbody></table>
</div>
</div>
<p>Then pull the images and start the app.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2</pre></td><td class="code"><pre>docker-compose pull
docker-compose up -d
</pre></td></tr></tbody></table>
</div>
</div>
<h2 id="conclusion">Conclusion</h2>
<p>I hope this post could show you how easy it is to set up an SSL-encrypted
web host with docker-compose on a simple server.</p>
<p>Further credits go
to
<a href="https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion">evertramos/docker-compose-letsencrypt-nginx-proxy-companion</a>,
which proposes a more generalized version to bootstrap your
environment.</p>Arthur AndersenHow to write a bug report2018-02-19T00:00:00+00:002018-02-19T00:00:00+00:00https://blog.velalu.qa/development/bugs/2018/02/19/how-to-write-a-bug-report<p>In the life of any software project there are bugs. Granted, there are
different types of bugs, but you will never encounter a software
project in which everything goes as planned. This post describes how
to report on bugs in a way that helps your developers to quickly fix
them.</p>
<!-- split -->
<p>Let’s assume you are testing your software product. You are eager
to try out the new feature that you have been looking forward to
for the last two weeks.</p>
<p>With your browser you sign in to your web application and try to
download the PDF that should be presented as a new report in your
app’s management console.</p>
<p>It looks fine. You make some changes in the system and request another
report PDF. Something seems off. Your changes are not reflected within
the newly generated report.</p>
<p>How do you report this issue?</p>
<h2 id="wrong-user-requirement">Wrong User Requirement</h2>
<p>If your project is backed by user requirement specifications and a
proper test suite, then you should first consult the acceptance tests.</p>
<p>At velaluqa we like to write user story scenarios as acceptance tests
together with the customer to carve out the behavior of the software
system (see <a href="/development/testing/2018/02/05/what-i-talk-about-when-i-talk-about-testing.html">What I talk about, when I talk about testing</a>).</p>
<h3 id="requirement-not-satisfied-anymore">Requirement Not Satisfied Anymore</h3>
<p>One case could be that the acceptance test fails. This should not
happen frequently. Though it could happen if a developer implemented a
new feature or fixed a complex bug without properly checking the
continuously run test suite and its results. If that is the case, you
should report the failing scenario.</p>
<div class="language-text highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4</pre></td><td class="code"><pre>Failure: Scenario Fails
Feature: Download PDF Report
Scenario: Adding User is reflected in Report PDF
Steps tested manually: Yes/No
</pre></td></tr></tbody></table>
</div>
</div>
<p>If you tested the scenarios manually and observed unexpected behavior
it is helpful to report what you were able to observe. See the
last section
about <a href="#unexpected-failing-behavior">Unexpected Failing Behavior</a>.</p>
<h3 id="requirements-changed">Requirements Changed</h3>
<p>Another case could be that the scenarios for a certain feature are
not comprehensive enough. Although all requirements as stated by the
user story — its scenarios and acceptance tests — are satisfied, you
expected some additional or deviating behaviour.</p>
<p>If that is the case you should adjust the requirements. For this,
either create a new scenario or create a feature request.</p>
<h2 id="unexpected-failing-behavior">Unexpected Failing Behavior</h2>
<p>These are bugs that are not reflected in or caught by any test scenarios. Some examples might be:</p>
<blockquote>
<ul>
<li>providing some custom input for a form field and seeing a blank page when submitting the form</li>
<li>requesting an archived download of some data but being redirected to an error page</li>
<li>starting a long-running export task that fails after 2 hours with an error message</li>
</ul>
</blockquote>
<p>These are some examples of unexpected behavior. You could report these
lines as I presented them here. But they do not give a proper answer
to the most important questions:</p>
<ul>
<li><strong>What?</strong>
<ul>
<li>What happened with the application?</li>
<li>What did the application report as error message?</li>
</ul>
</li>
<li><strong>When?</strong>
<ul>
<li>When did the error occur?</li>
</ul>
</li>
<li><strong>How?</strong>
<ul>
<li>Is it reproducible?</li>
<li>How did you interact with the system to produce the error?</li>
<li>What did you expect and what was the actual behavior?</li>
</ul>
</li>
<li><strong>Where?</strong>
<ul>
<li>Where did the error occur?</li>
<li>Which server?</li>
<li>Which environment/IP?</li>
<li>Which operating system/browser and version?</li>
</ul>
</li>
</ul>
<p>The outline for a good bug report should look like this:</p>
<div class="language-text highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9</pre></td><td class="code"><pre>- Title
- Affected software version
- Affected operating system incl. version number
- Affected browser incl. version number
- Date & Time (when the error occurred)
- Steps to Reproduce
- Expected behaviour
- Actual behaviour
- Attachments (Screenshots or log files)
</pre></td></tr></tbody></table>
</div>
</div>
<h3 id="title">Title</h3>
<p>The title should provide a concise explanation of the issue.</p>
<blockquote>
<p>Cannot add <code class="highlighter-rouge">admin</code> role to user in edit user form</p>
</blockquote>
<h3 id="affected-software-version">Affected software version</h3>
<p>This is necessary information. Provide at least one version number for
which the bug can be reproduced.</p>
<blockquote>
<p>Version 2.1.2</p>
</blockquote>
<h3 id="affected-operating-system">Affected operating system</h3>
<p>Especially on the web, the client-side software can be
the cause of unexpected behavior. This includes the operating system.</p>
<p>MacOS, Windows & Linux have different ways of accomplishing the same
thing. Even different versions of Windows introduce major changes
under the hood which can cause changes in behavior.</p>
<p>Please provide all operating systems which you have tested and provide
information about whether the bug is reproducible or not. This will
help identify operating-system-specific bugs.</p>
<blockquote>
<p><strong>Affected Operating System:</strong><br /></p>
<ul>
<li>MacOS 10.13.1 (reproducible, see below),</li>
<li>ArchLinux 4.14.15-1-ARCH #1 SMP PREEMPT x86_64 GNU/Linux (not-reproducible)</li>
</ul>
</blockquote>
<h3 id="affected-browser">Affected browser</h3>
<p>More common are differences in various browser-engines.</p>
<p>Please provide a list of all browser versions which you have tested
and provide information about whether the bug is reproducible or not.
This will help identify browser-specific bugs.</p>
<blockquote>
<p><strong>Affected Browser:</strong> <br /></p>
<ul>
<li>Chrome 59.0.64.1 (reproducible, see below),</li>
<li>Firefox 59.0a1 (2018-01-21) (64-bit) (not-reproducible)</li>
</ul>
</blockquote>
<h3 id="date--time">Date & Time</h3>
<p>Provide the date and time, ideally to the second.</p>
<p>If your issue has to do with errors occurring in other, possibly
remote, components, then we can look into the log files of these
components to identify the problem.</p>
<blockquote>
<p><strong>Issue at:</strong> 2018-02-18 10:33:12</p>
</blockquote>
<h3 id="steps-to-reproduce">Steps to Reproduce</h3>
<p>These explain the <strong>how</strong>, describing where in the system the action
was performed, which role the testing user had within the system and
other similar preconditions.</p>
<p>Don’t be too specific:</p>
<blockquote>
<p><strong>Steps to Reproduce:</strong></p>
<ol>
<li>Logged in as an admin user</li>
<li>Open admin back-end</li>
<li>Go to users list</li>
<li>Click Edit User (choose any non-admin user)</li>
</ol>
</blockquote>
<h4 id="expected-behavior">Expected Behavior</h4>
<blockquote>
<p><strong>Expected Behavior:</strong><br />
See a list of roles to choose from, then choosing admin and saving user should add the admin role.</p>
</blockquote>
<h4 id="actual-behavior">Actual Behavior</h4>
<blockquote>
<p><strong>Actual Behavior:</strong><br />
I do not see a list of roles to choose from.</p>
</blockquote>
<h3 id="attachments">Attachments</h3>
<p>If possible, attach additional files for support. Something like a
screenshot from before and after something went wrong can help the
developer to quickly jump into the problem.</p>
<p>Also try to attach all log files that you might have access to. On
pure web applications most logs are on the server, but you could
attach logs from the <a href="https://webmasters.stackexchange.com/questions/8525/how-do-i-open-the-javascript-console-in-different-browsers">JavaScript console</a>.</p>
<h2 id="example">Example</h2>
<div class="language-text highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35</pre></td><td class="code"><pre>Title: Cannot add `admin` role to user in edit user form
Version 2.1.2
Affected Operating System:
- MacOS 10.13.1 (reproducible, see below)
- ArchLinux 4.14.15-1-ARCH #1 SMP PREEMPT x86_64 GNU/Linux (not-reproducible)
Affected Browser:
- Chrome 59.0.64.1 (reproducible, see below)
- Firefox 59.0a1 (2018-01-21) (64-bit) (not-reproducible)
Issue at: 2018-02-18 10:33:12
Steps to Reproduce:
1. Logged in as an admin user
2. Open admin back-end
3. Go to users list
4. Click Edit User (choose any non-admin user)
Expected Behavior:
See a list of roles to choose from, then choosing admin and saving
user should add the admin role.
Actual Behavior:
I do not see a list of roles to choose from.
Attachments:
[Screenshot of the view in question]
</pre></td></tr></tbody></table>
</div>
</div>
<h2 id="tldr">TL;DR</h2>
<p>To ensure the bug can be fixed quickly, you should provide enough
information answering these basic questions:</p>
<ul>
<li>What?</li>
<li>When?</li>
<li>How?</li>
<li>Where?</li>
</ul>
<p>For this you can use the following outline:</p>
<div class="language-text highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18</pre></td><td class="code"><pre>Title:
Affected software version:
Affected operating system incl. version number:
Affected browser incl. version number:
Date & Time (when the error occurred):
Steps to Reproduce:
Expected behaviour:
Actual behaviour:
Attachments (Screenshots or log files):
</pre></td></tr></tbody></table>
</div>
</div>Arthur AndersenWhat I talk about, when I talk about testing2018-02-05T00:00:00+00:002018-02-05T00:00:00+00:00https://blog.velalu.qa/development/testing/2018/02/05/what-i-talk-about-when-i-talk-about-testing<p>A big part of software development is quality assurance. We want to
make sure that we implement the correct behaviour according to
specifications and that we don’t introduce bugs when deploying a new
version. For this, every project depends on a rigorous testing
process, either manual or automated. We like to let our development
process be guided by automated tests so most of the time they become
the first code we write.</p>
<!-- split -->
<p>In the
post
<a href="/tech/developer/2017/12/02/when-10x-becomes-one-tenth.html">When 10x becomes 1/10th</a> I
described how a project can get derailed by accumulating too much
technical debt.</p>
<p>One core reason for technical debt was the lack of a
proper test suite that lets developers recognize regressions — bugs
introduced while implementing new features or performing other changes
to the system — early on and fix them before they reach the production
environments.</p>
<p>There are other reasons for technical debt, which may be mitigated
through proper testing processes.</p>
<p>For instance, you can write requirement specifications that are backed
by automated tests, ensuring that the specifications are precise
and verifiable. This also helps with refactoring, because we can
make sure the behaviour specified by the acceptance tests is preserved.</p>
<p>Done correctly, testing helps to keep your team’s velocity at a
constant pace and keeps your team confident when performing
code changes.</p>
<h2 id="levels-of-testing">Levels of Testing</h2>
<p>There are many levels of testing, and on each level there are different
types of tests. There are also several ways to move through these
levels when developing software products:</p>
<ul>
<li><strong>Top-to-bottom approach</strong>, where we develop larger integrated specifications on the system level (how the user is supposed to interact with the system), and work our way down</li>
<li><strong>Bottom-up approach</strong>, where we develop the most granular levels first and combine them in the upper levels</li>
<li><strong>Sandwich approach</strong>, where we combine the latter two approaches</li>
</ul>
<p>At velaluqa we like to work mainly with the top-to-bottom approach.</p>
<h3 id="system-testing">System Testing</h3>
<p>A system test is a kind of black-box test, describing the behaviour of
the system as the user sees and interacts with it. The user can be
either a real person or another system interacting with an API.</p>
<p>Mostly these tests define scenarios for each feature in which the user
interacts with the system through her browser, thereby exercising the
graphical user interface, the functionality and the components beneath.</p>
<p>Ideally these tests are written together with the customer so that they
can be used as validation against the user requirement specifications.</p>
<h3 id="integration-testing">Integration Testing</h3>
<p>Integration tests are a level below system tests. They are written by
the developers to test the behaviour of single components when they
are combined with other components.</p>
<p>This helps to validate the software design and the interaction of
individual components.</p>
<h3 id="unit-testing">Unit Testing</h3>
<p>Unit Tests check single components. Usually they test the smallest part
of a system that can be tested independently — a unit. This can be a
module but most of the time we test single functions or procedures.</p>
<p>Since we want to test single components, we try to make sure that they
are tested independently from others. Usually fakes (mock objects,
method stubs and similar) are injected to separate a component from
the parts of the system it depends on.</p>
<p>Most of the time this helps to keep the components decoupled which
results in cleaner code.</p>
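<p>To illustrate, here is a minimal, self-contained Ruby sketch of this
injection idea; the <code class="highlighter-rouge">PriceCalculator</code> and <code class="highlighter-rouge">FakeTaxService</code> names are
hypothetical and not taken from any real project:</p>

```ruby
# The unit under test receives its collaborator via the constructor,
# so a test can inject a fake instead of the real dependency.
class PriceCalculator
  def initialize(tax_service)
    @tax_service = tax_service
  end

  # Gross price for a net amount, using the injected tax service.
  def gross(net)
    (net * (1 + @tax_service.rate)).round(2)
  end
end

# Hand-rolled fake standing in for a slower or external tax service.
class FakeTaxService
  def rate
    0.19
  end
end

calculator = PriceCalculator.new(FakeTaxService.new)
puts calculator.gross(100.0) # => 119.0
```

<p>Because the fake is injected, the unit test never touches the real
tax service, which keeps the test fast and the component decoupled.</p>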
<h2 id="how-we-test">How we test</h2>
<p>We favor the top-to-bottom approach. When we start a new project we
like to capture the user requirements as user stories. Stories that
describe how the user is supposed to interact with the software
product.</p>
<p>Together with the customer we describe different scenarios for each
feature or user story, capturing the user interaction step-by-step in
the Gherkin language. This is a human-readable, domain-specific
language in this format:</p>
<div class="language-cucumber highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13</pre></td><td class="code"><pre><span class="kd">Feature</span><span class="p">:</span> Some terse yet descriptive text of what is desired
Textual description of the business value of this feature
Business rules that govern the scope of the feature
Any additional information that will make the feature easier to understand
<span class="kn">Scenario</span><span class="p">:</span> Some determinable business situation
<span class="nf">Given</span> some precondition
<span class="nf">And</span> some other precondition
<span class="nf">When</span> some action by the actor
<span class="nf">And</span> some other action
<span class="nf">And</span> yet another action
<span class="nf">Then</span> some testable outcome is achieved
<span class="nf">And</span> something else we can check happens too
</pre></td></tr></tbody></table>
</div>
</div>
<p>The scenario for a feature may look like this:</p>
<div class="language-cucumber highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18</pre></td><td class="code"><pre><span class="kd">Feature</span><span class="p">:</span> List Products
As product manager,
I want to see a list of products in the system.
<span class="kn">Background</span><span class="p">:</span>
<span class="nf">Given</span> a product <span class="s">"Full-HD Projector"</span> exists for 1299.99 EUR
<span class="kn">Scenario</span><span class="p">:</span> Unauthorized
<span class="nf">Given</span> I sign in as a user with all permissions
<span class="nf">But</span> I cannot read products
<span class="nf">When</span> I browse to the products list
<span class="nf">Then</span> I see the unauthorized page
<span class="kn">Scenario</span><span class="p">:</span> Authorized
<span class="nf">Given</span> I sign in as a user with role <span class="s">"Product Manager"</span>
<span class="nf">When</span> I browse to the products list
<span class="nf">Then</span> I see <span class="s">"Products"</span>
<span class="nf">And</span> I see <span class="s">"Full-HD Projector 1299.99 EUR"</span>
</pre></td></tr></tbody></table>
</div>
</div>
<p>Then we provide each step with reusable test code that drives a real
browser or mobile emulation environment to perform the actions within
the software just as a normal user would.</p>
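<p>The mapping from Gherkin lines to reusable test code can be sketched
with a tiny, self-contained step registry. Real projects would use
Cucumber step definitions driving Capybara; everything below, including
the step patterns, is illustrative:</p>

```ruby
# Minimal step registry mimicking how Cucumber matches a Gherkin line
# against registered patterns and runs the associated block.
STEPS = {}

def step(pattern, &block)
  STEPS[pattern] = block
end

def run(line)
  pattern, block = STEPS.find { |regex, _| regex.match?(line) }
  raise "undefined step: #{line}" unless pattern
  block.call(*pattern.match(line).captures)
end

# Reusable step definitions, shared across all scenarios.
step(/^I browse to the (\w+) list$/) do |resource|
  puts "GET /#{resource}" # a real step would drive a browser here
end

step(/^I see "([^"]*)"$/) do |text|
  puts "asserting page contains #{text.inspect}"
end

run('I browse to the products list')
run('I see "Full-HD Projector 1299.99 EUR"')
```

<p>Each step is defined once and reused by every scenario that mentions
it, which is what keeps large Gherkin suites maintainable.</p>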
<p>These tests are system tests that not only test the whole system and
all the lower components but also serve as documentation and
validation against the user requirement specifications.</p>
<p>The system tests guide us to the next layer of tests: Integration
Tests. For a web application, for instance, we would create routing
tests that check that the necessary URLs are handled by the web
application server. Then we would test the controllers, which handle
the given parameters and pass them on to the models. Next we test
the models to make sure that they communicate correctly with the
database and that our data is saved and retrieved correctly. Finally
we test the views — what the user sees — and check that they are sent
to the user with the correct information.</p>
<p>While doing this, each code change triggers a rerun of the specific
component tests or integration tests so that the developer is
constantly in the loop to recognize bugs early on.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Obviously this process of software development cannot guarantee
bug-free software, but it can provide a certain level of confidence
that the set of defined scenarios works correctly.</p>
<p>The right amount of test coverage is project-specific, but you should
never go without testing. It helps to maintain velocity and confidence,
lets you refactor safely and thus allows you to fight technical debt.</p>
<p>Tests provide a sort of living documentation of the system. Developers
who are interested in learning what kind of functionality is provided
by the system or component can look into the respective system or unit
tests to gain a basic understanding.</p>
<p>When using a test-driven approach, as I described in “How we test”,
the combination of writing tests from the top — specifying the
interface first — working down to the bottom and refactoring along the
way may take the place of formal design. For instance, each unit test
can be seen as a design element specifying classes, methods and
observable behaviour.</p>
<h2 id="tldr">TL;DR</h2>
<p>I hope this piece gives you a basic understanding of what we like
about testing and how we incorporate it into our process. If you have
any questions, please don’t hesitate to leave a message either in the
comments or via e-mail.</p>Arthur AndersenReseller vs. Payment Provider2018-01-05T00:00:00+00:002018-01-05T00:00:00+00:00https://blog.velalu.qa/product/2018/01/05/reseller-vs-payment-provider<p>You want to sell digital goods, typically some kind of stream or
download, like software, logos, designs, webinars, ebooks and so on.
Should you simply add a PayPal or Stripe checkout to your website? We
compare the concepts of payment providers and resellers.</p>
<!-- split -->
<p>Selling digital products is not that simple, at least if you want to
adhere to the laws of countries all over the world. If you sell a
digital good, you have to collect VAT, sales tax or similar from your
buyers.</p>
<p>For example, if you sell to residents of the European Union, you have
to maintain and collect 28 different VAT rates for all member states
of the EU. You have to get a VAT identification number from the EU, or
from your member state, if you are located in the EU. On a regular
basis, a declaration needs to be sent to the fiscal authorities, and
finally you have to pay the VAT, at least it all goes to one single
bank account.</p>
<p>This is not limited to the European Union; other countries have
similar laws. If you want to sell digital goods and focus on your
product instead, you need a reseller. A reseller takes on more
responsibility than typical payment providers like PayPal and Stripe.
It is placed between you and your customers. On the customer’s
receipts, the reseller appears as the seller of the product instead of
you (uhh, that makes sense, doesn’t it?). You have just one singular
contractual partner: the reseller, which pays you out periodically. Of
course, that higher responsibility comes with higher commissions.</p>
<h2 id="fastspring">FastSpring</h2>
<p>We’ve been using <a href="https://fastspring.com/">FastSpring</a> for a recent customer
project. FastSpring is a big reseller, based in the US. Within their
docs, they provide an
impressive <a href="https://docs.fastspring.com/getting-started-with-fastspring/currencies-conversions-and-taxes/about-vat-and-sales-tax#AboutVATandSalesTax-EUChangesin2015">list of taxes they manage</a> for
you. FastSpring offers a complete feature set for almost every
use case.</p>
<p>A few words about how FastSpring works: FastSpring can either provide
a complete web shop for you, or you can integrate it deeply within
your own website. In that case, FastSpring will open a modal layer for
checkout. For each buyer, FastSpring creates a password-less account,
identified by e-mail address. After every successful checkout, they
send out a link to manage all purchases and subscriptions. FastSpring
offers a lot of options to serve your digital goods and/or license
keys after checkout, called <em>fulfillments</em>.</p>
<p>Finally, from the developer’s point of view, you can activate
webhooks, call their APIs and make use of a powerful management
dashboard and easy testing modes. The technical documentation is
comprehensive, but sometimes inconsistent. I recommend scheduling
enough time, at least for your first FastSpring project, to figure out
all the details. Support responds within a few hours, and the
activation, which is required before charging real credit cards, was
completed within 24h.</p>
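<p>One detail worth getting right with any webhook integration is
verifying that a payload really comes from the provider. The following
Ruby sketch checks a Base64-encoded HMAC-SHA256 signature; the signing
scheme and secret here are assumptions for illustration, so consult
your provider’s documentation for the exact header and algorithm:</p>

```ruby
require 'openssl'
require 'base64'

# Verifies a webhook payload against a Base64-encoded HMAC-SHA256
# signature. The signing scheme is an illustrative assumption; check
# your provider's documentation for the exact mechanism.
def valid_signature?(payload, signature, secret)
  digest   = OpenSSL::HMAC.digest('SHA256', secret, payload)
  expected = Base64.strict_encode64(digest)
  # Note: production code should use a constant-time comparison here.
  expected == signature
end

secret    = 'webhook-secret'
payload   = '{"events":[{"type":"order.completed"}]}'
signature = Base64.strict_encode64(OpenSSL::HMAC.digest('SHA256', secret, payload))

puts valid_signature?(payload, signature, secret)    # => true
puts valid_signature?('tampered', signature, secret) # => false
```

<p>Rejecting unsigned or tampered payloads early keeps forged
“order completed” events from unlocking fulfillments.</p>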
<h2 id="alternatives">Alternatives</h2>
<p>So far I have seen these similar providers; of course there will be
more out there:</p>
<ul>
<li><a href="https://www.mycommerce.com/">MyCommerce</a></li>
<li><a href="http://www.avangate.com/">Avangate</a></li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>You should choose carefully whether you need a reseller or just a
payment provider. A reseller is more expensive, but it can help you to
abide by the law and handles stuff like taxes and invoicing for you,
letting you focus on your product!</p>Franz KißigWhen 10x becomes 1/10th2017-12-02T00:00:00+00:002017-12-02T00:00:00+00:00https://blog.velalu.qa/tech/developer/2017/12/02/when-10x-becomes-one-tenth<p>Ready, set, go! You started your project with a team you trust. All
the planning is done, the team knows your goals and gets to work.
The first artifact is ready for testing after a two-week kick-off
sprint. You are amazed by how quickly first changes are implemented.
You give feedback and adjust some requirements for the next sprint.
After a few weeks your desired changes are made and some more features
are implemented.</p>
<!-- split -->
<p>Development goes on like this for a few months, but you recognize some
glitches. Features take longer, and bugs are introduced that take
noticeably more time to get fixed than in the beginning. Team members
seem less motivated, people leave and you have to find replacements
for these vacant positions.</p>
<p>Velocity declines further, it seems hard to get people on board, and
even though you had a very good feeling about the newly hired
developers, they do not perform the way you expect them to. It gets
much harder to keep up with deadlines and people get fed up with the
rising pressure.</p>
<p>More people are leaving, and with this large number of resignations you
also lose your most deeply involved developer, who has been part
of the project since the beginning. Suddenly productivity drops to
a point where you have to think deeply about how to continue…</p>
<p>This is a short depiction of what the lifecycle of a software project
can look like. In the first few weeks, when you are starting off with
a new team, everyone knows the goals and is in the
loop, seeing the code grow. They know a lot about the components and
usually see immediately what needs to be done. This applies to
implementing new features as well as to fixing bugs.</p>
<p>Everyone was enthusiastic about the speed of development, but perhaps
due to a tight deadline for first prototypes, decisions were made to
defer documentation and tests. Over the course of the project,
these decisions were never revisited.</p>
<p>No human being can keep all these decisions in their head. This leads to
overhead, because documentation and specifications in the form of tests
are still missing. Now, when people leave, you lose the staff that knew
about the system, while complexity grows further.</p>
<p>Newly hired staff are not able to gain the same profound understanding
of the system with all its irks and quirks. Eventually you lose the last
person that has been with the project from the beginning.</p>
<h2 id="technical-debt">Technical Debt</h2>
<p>This process is called accumulation of technical debt. And what our
team missed is to work actively against it.</p>
<blockquote>
<p>As an evolving program is continually changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it.
<cite>— Meir Manny Lehman <sup id="fnref:law2"><a href="#fn:law2" class="footnote">1</a></sup></cite></p>
</blockquote>
<p>Common causes of technical debt are:</p>
<ul>
<li><strong>Imprecise specifications</strong>, where requirements are still being
defined during development before any design is done. This might
speed up development but has to be reworked later on.</li>
<li><strong>Business pressures</strong>, where the client puts pressure on the
development team to release early.</li>
<li><strong>Lack of process or understanding</strong>, where businesses are blind to
the concept of technical debt, and make decisions without
considering the implications.</li>
<li><strong>Lack of test suite</strong>, which encourages quick and risky band-aids
to fix bugs. This erodes the system’s architecture if not designed
properly.</li>
<li><strong>Lack of documentation</strong>, where code is not documented
sufficiently, making it harder for new people to jump into the code
and be productive.</li>
<li><strong>Lack of alignment to standards</strong>, where industry standard
features, frameworks, technologies and best practices are ignored.</li>
<li><strong>Lack of communication & collaboration</strong>, where knowledge is not
shared across your team and efficiency suffers.</li>
<li><strong>Lack of ownership</strong>, when outsourced software efforts result in
bad code, which has to be refactored or rewritten in-house.</li>
<li><strong>Delayed refactoring</strong>, when parts of the code become obsolete or
inefficient, because of requirements that evolved over time. The
longer refactoring of these parts of the code is delayed and new
features are implemented on top of them, the more technical debt
accumulates.</li>
<li><strong>Last-minute specification changes</strong>, which are implemented without
sufficient design, documentation and checks, because of shortage in
time or budget.</li>
</ul>
<p>By definition, you incur debt with the first code you ship. Some debt
is necessary to keep velocity, but only if it is addressed as quickly
as possible. The longer you postpone working against your debt, the
harder it becomes to eradicate it again.</p>
<p><a href="https://de.wikipedia.org/wiki/Ward_Cunningham">Ward Cunningham</a>
(pioneer in both design patterns and extreme programming) first
described the debt analogy for the complexity of software systems:</p>
<blockquote>
<p>Shipping first time code is like going into debt. A little debt
speeds development so long as it is paid back promptly with a rewrite…
The danger occurs when the debt is not repaid. Every minute spent on
not-quite-right code counts as interest on that debt. Entire
engineering organizations can be brought to a stand-still under the
debt load of an unconsolidated implementation, object-oriented or
otherwise.
<cite>— Ward Cunningham <sup id="fnref:wardcunn"><a href="#fn:wardcunn" class="footnote">2</a></sup></cite></p>
</blockquote>
<p>So we see technical debt is inevitable, but we can address it.</p>
<h2 id="types-of-technical-debt">Types of Technical Debt</h2>
<p>The worst scenario is the one where team members are not aware of the
debt they create, but it is perfectly fine when it is the result of a
conscious decision to achieve a certain outcome and is addressed as
soon as possible.</p>
<p><a href="https://en.wikipedia.org/wiki/Martin_Fowler">Martin Fowler</a>
differentiates the following types of technical debt based on two
dichotomous categories: reckless vs. prudent and deliberate vs.
inadvertent:</p>
<table>
<tbody>
<tr>
<td> </td>
<td><strong>reckless</strong></td>
<td><strong>prudent</strong></td>
</tr>
<tr>
<td><strong>deliberate</strong></td>
<td>“We do not have time for design”</td>
<td>“We must ship now and deal with consequences (later)”</td>
</tr>
<tr>
<td><strong>inadvertent</strong></td>
<td>“What is Layering?”</td>
<td>“Now we know how we should have done it”</td>
</tr>
</tbody>
</table>
<p>In our scenario at the beginning of the post, many things were
ignored. The team slid unwittingly into deliberate but reckless
debt and became aware of it too late.</p>
<p>They lacked:</p>
<ul>
<li><strong>Quality Assurance</strong> — via tests, continuous integration</li>
<li><strong>Collaboration</strong> — resulting in knowledge being isolated and only accessible by the most senior developers</li>
<li><strong>Documentation</strong> — leaving new developers with only the code</li>
<li>and perhaps a few more …</li>
</ul>
<h2 id="summary">Summary</h2>
<p>The title of this post plays with the notion that developers who are
more productive than others are sometimes called 10x developers. In
terms of technical debt, I would argue that some are only 10x
because they accumulate technical debt, borrowing productivity
from tomorrow and causing your future team to run at 1/10th the velocity.</p>
<p>To avoid the rude awakening, you have to anticipate the debt as you
create it. Furthermore, you have to train your team to do the
same. Plan and hire accordingly. Try to build a team that builds
software for the people after them and not for the money or the
deadline alone.</p>
<h2 id="footnotes">Footnotes</h2>
<div class="footnotes">
<ol>
<li id="fn:law2">
<p>Lehman, MM (1996). <a href="http://dl.acm.org/citation.cfm?id=681473">“Laws of Software Evolution Revisited”</a>. EWSPT ‘96 Proceedings of the 5th European Workshop on Software Process Technology: 108–124. <a href="#fnref:law2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:wardcunn">
<p>Ward Cunningham (1992-03-26). <a href="http://c2.com/doc/oopsla92.html">“The WyCash Portfolio Management System”</a>. <a href="#fnref:wardcunn" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Arthur AndersenRails in Docker via AWS Elastic Beanstalk2017-11-06T00:00:00+00:002017-11-06T00:00:00+00:00https://blog.velalu.qa/admin/tech/ruby/rails/docker/2017/11/06/rails-in-docker-on-aws-elastic-beanstalk<p>AWS Elastic Beanstalk is a toolkit to deploy web application server
environments to AWS Elastic Compute Cloud. It is designed to get a new
environment running as quickly as possible. Everything can be done
from the command-line which helps automating the process.</p>
<p>This post describes how to deploy a new Ruby on Rails application with
Sidekiq to the Elastic Compute Cloud leveraging its database services
(AWS Relational Database Service and AWS ElastiCache).</p>
<!-- split -->
<h2 id="what-is-elastic-beanstalk">What is Elastic Beanstalk?</h2>
<p>AWS Elastic Beanstalk is an orchestration service for deploying
infrastructure to various AWS services like EC2, S3, Simple
Notification Service, CloudWatch, autoscaling, and Elastic Load
Balancers.</p>
<p>It supports various applications and software stacks:</p>
<ul>
<li>Ruby, PHP, Python applications on Apache HTTP Server</li>
<li>.NET Framework applications on IIS 7.5</li>
<li>Java applications on Apache Tomcat</li>
<li>Node.js applications</li>
<li>Docker containers</li>
</ul>
<p>In
<a href="/admin/tech/ruby/rails/docker/2017/10/31/rails-5-within-docker-containers.html">“Ruby on Rails 5 in Docker”</a> we
explained how to create a deployable docker infrastructure for a
simple Rails application. This post introduces you to deployment of
this environment to a first AWS infrastructure via Elastic Beanstalk.</p>
<h2 id="benefits">Benefits</h2>
<ul>
<li>Quickly deploy tested docker environments to Amazon Web Services</li>
<li>Configuration can be automated via CLI</li>
<li>All settings can be managed inside the AWS Management Console</li>
<li>Highly scalable with all AWS functionality</li>
</ul>
<h2 id="components">Components</h2>
<p>In the “Ruby on Rails 5 in Docker” blog post we configured PostgreSQL and
Redis as separate services within our <code class="highlighter-rouge">docker-compose</code> file.</p>
<p>We could just deploy the services as defined via Docker Compose onto a
set of simple EC2 instances, but AWS provides separate services for
databases and in-memory data stores.</p>
<p>We suggest leveraging as many AWS managed services as possible instead
of managing them completely on our own. This allows us to use all
autoscaling and load-balancing features without workarounds.</p>
<p><img src="/assets/images/posts/2017-11-06-rails-in-docker-on-aws-elastic-beanstalk/aws.svg" alt="" /></p>
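<p>Concretely, this means the app container should read its datastore
endpoints from the environment, so the same image runs against the
docker-compose services locally and against RDS/ElastiCache on AWS. A
small Ruby sketch of this twelve-factor-style configuration (the
variable names and fallback URLs are illustrative assumptions):</p>

```ruby
require 'uri'

# Endpoints come from environment variables, with the docker-compose
# service names ("db", "redis") as local development fallbacks.
database_url = ENV.fetch('DATABASE_URL', 'postgres://app:secret@db:5432/app_production')
redis_url    = ENV.fetch('REDIS_URL', 'redis://redis:6379/0')

db = URI.parse(database_url)
puts "database host: #{db.host}, port: #{db.port}"
puts "redis url: #{redis_url}"
```

<p>On AWS, the environment variables would point at the RDS and
ElastiCache endpoints instead, without rebuilding the image.</p>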
<h3 id="aws-relational-database-service">AWS Relational Database Service</h3>
<p>AWS RDS helps with the configuration, management and scaling of
relational databases in the cloud.</p>
<p>It provides six well-known database engines:</p>
<ul>
<li><strong>PostgreSQL</strong></li>
<li>MySQL/MariaDB</li>
<li>Oracle</li>
<li>Microsoft SQL Server</li>
<li>Amazon Aurora</li>
</ul>
<h3 id="aws-elasticache">AWS ElastiCache</h3>
<p>AWS ElastiCache is a managed in-memory data store in the cloud.</p>
<p>It provides two open-source in-memory engines:</p>
<ul>
<li><strong>Redis</strong></li>
<li>Memcached</li>
</ul>
<h3 id="aws-ec2-instance">AWS EC2 Instance</h3>
<p>For the purpose of demonstrating all the necessary steps, we restrict ourselves to a single EC2 instance.</p>
<h2 id="step-1--create-an-aws-account">Step 1 — Create an AWS account</h2>
<p>If you are not already registered with Amazon Web Services, go to
<a href="https://aws.amazon.com">aws.amazon.com</a> and create a new account.</p>
<p>Then go to <strong>My Security Credentials</strong> and create a new set of
credentials. You need these to use the AWS command-line interface.</p>
<h2 id="step-2--install-cli-tools--initialize-elastic-beanstalk">Step 2 — Install CLI Tools & Initialize Elastic Beanstalk</h2>
<div class="language-sh highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3</pre></td><td class="code"><pre>sudo pip install --upgrade awsebcli awscli
<span class="nb">cd</span> ~/projects/my-app
eb init
</pre></td></tr></tbody></table>
</div>
</div>
<p>This leads you through a wizard and creates a new application within
your AWS cloud.</p>
<h2 id="step-3--configure-container-registry">Step 3 — Configure Container Registry</h2>
<p>With Elastic Beanstalk you can use any registry you want: the
official Docker Hub, your own private repository or a new private
container repository via the AWS Elastic Container Registry (ECR):</p>
<h3 id="using-aws-elastic-container-registry">Using AWS Elastic Container Registry</h3>
<p>First we create a new registry repository <code class="highlighter-rouge">my-app</code>.</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>aws ecr create-repository --repository-name my-app
</pre></td></tr></tbody></table>
</div>
</div>
<p>Then we allow the <code class="highlighter-rouge">aws-elasticbeanstalk-ec2-role</code> to access the container registry.</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly --role-name aws-elasticbeanstalk-ec2-role
</pre></td></tr></tbody></table>
</div>
</div>
<p>Now we can build the first container version and push it to the new AWS ECR.</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5</pre></td><td class="code"><pre><span class="nb">cd</span> ~/projects/my-app
<span class="sb">`</span>aws ecr get-login --no-include-email --region eu-central-1<span class="sb">`</span>
docker build -t my-app .
docker tag my-app:latest .dkr.ecr.eu-central-1.amazonaws.com/my-app:latest
docker push .dkr.ecr.eu-central-1.amazonaws.com/my-app:latest
</pre></td></tr></tbody></table>
</div>
</div>
<h2 id="step-4--create-service-configuration">Step 4 — Create Service Configuration</h2>
<p>AWS Elastic Beanstalk expects a <code class="highlighter-rouge">Dockerrun.aws.json</code> that describes
all your services similar to a <code class="highlighter-rouge">docker-compose</code> file.</p>
<div class="language-json highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20</pre></td><td class="code"><pre><span class="p">{</span><span class="w">
</span><span class="nt">"AWSEBDockerrunVersion"</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w">
</span><span class="nt">"volumes"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span><span class="w">
</span><span class="nt">"containerDefinitions"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"app"</span><span class="p">,</span><span class="w">
</span><span class="nt">"image"</span><span class="p">:</span><span class="w"> </span><span class="s2">".dkr.ecr.eu-central-1.amazonaws.com/velaluqa-test:latest"</span><span class="p">,</span><span class="w">
</span><span class="nt">"environment"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span><span class="w">
</span><span class="nt">"essential"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nt">"memory"</span><span class="p">:</span><span class="w"> </span><span class="mi">128</span><span class="p">,</span><span class="w">
</span><span class="nt">"mountPoints"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span><span class="w">
</span><span class="nt">"portMappings"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nt">"containerPort"</span><span class="p">:</span><span class="w"> </span><span class="mi">3000</span><span class="p">,</span><span class="w">
</span><span class="nt">"hostPort"</span><span class="p">:</span><span class="w"> </span><span class="mi">80</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></pre></td></tr></tbody></table>
</div>
</div>
<h3 id="testing-your-configuration">Testing Your Configuration</h3>
<p>You can run your services locally to see if everything is set up correctly:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>eb local run
</pre></td></tr></tbody></table>
</div>
</div>
<h2 id="step-5--create-elastic-beanstalk-environment">Step 5 — Create Elastic Beanstalk Environment</h2>
<p>Upon creating the environment, your app starts on EC2. When everything is
working correctly, you should see a log for your deployment.</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7</pre></td><td class="code"><pre>eb create <span class="se">\</span>
--database <span class="se">\</span>
-db.engine postgres <span class="se">\</span>
-db.i db.t2.micro <span class="se">\</span>
-db.size 5 <span class="se">\</span>
-db.version 9.6.5 <span class="se">\</span>
--envvars <span class="nv">RAILS_ENV</span><span class="o">=</span>production,SECRET_KEY_BASE<span class="o">=</span>good_secret_key_base,PORT<span class="o">=</span>3000
</pre></td></tr></tbody></table>
</div>
</div>
<p>To open your webapp try executing <code class="highlighter-rouge">eb open</code>.</p>
<h2 id="tipps">Tips</h2>
<h3 id="adding-sidekiq-worker">Adding Sidekiq Worker</h3>
<p>You can easily add a Sidekiq back end for Rails <code class="highlighter-rouge">ActiveJob</code>. As
a persistent back end, you can use AWS ElastiCache with its Redis engine.</p>
<p>Create a new directory <code class="highlighter-rouge">./.ebextensions</code> and add two files:</p>
<p>In <code class="highlighter-rouge">.ebextensions/elasticache.config</code> add:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43</pre></td><td class="code"><pre><span class="s">Resources</span><span class="pi">:</span>
<span class="s">MyCacheSecurityGroup</span><span class="pi">:</span>
<span class="s">Type</span><span class="pi">:</span> <span class="s2">"</span><span class="s">AWS::EC2::SecurityGroup"</span>
<span class="s">Properties</span><span class="pi">:</span>
<span class="s">GroupDescription</span><span class="pi">:</span> <span class="s2">"</span><span class="s">Lock</span><span class="nv"> </span><span class="s">cache</span><span class="nv"> </span><span class="s">down</span><span class="nv"> </span><span class="s">to</span><span class="nv"> </span><span class="s">webserver</span><span class="nv"> </span><span class="s">access</span><span class="nv"> </span><span class="s">only"</span>
<span class="s">SecurityGroupIngress</span> <span class="pi">:</span>
<span class="pi">-</span> <span class="s">IpProtocol</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">tcp"</span>
<span class="s">FromPort</span> <span class="pi">:</span>
<span class="s">Fn::GetOptionSetting</span><span class="pi">:</span>
<span class="s">OptionName</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">CachePort"</span>
<span class="s">DefaultValue</span><span class="pi">:</span> <span class="s2">"</span><span class="s">6379"</span>
<span class="s">ToPort</span> <span class="pi">:</span>
<span class="s">Fn::GetOptionSetting</span><span class="pi">:</span>
<span class="s">OptionName</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">CachePort"</span>
<span class="s">DefaultValue</span><span class="pi">:</span> <span class="s2">"</span><span class="s">6379"</span>
<span class="s">SourceSecurityGroupName</span><span class="pi">:</span>
<span class="s">Ref</span><span class="pi">:</span> <span class="s2">"</span><span class="s">AWSEBSecurityGroup"</span>
<span class="s">MyElastiCache</span><span class="pi">:</span>
<span class="s">Type</span><span class="pi">:</span> <span class="s2">"</span><span class="s">AWS::ElastiCache::CacheCluster"</span>
<span class="s">Properties</span><span class="pi">:</span>
<span class="s">CacheNodeType</span><span class="pi">:</span>
<span class="s">Fn::GetOptionSetting</span><span class="pi">:</span>
<span class="s">OptionName</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">CacheNodeType"</span>
<span class="s">DefaultValue</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">cache.t1.micro"</span>
<span class="s">NumCacheNodes</span><span class="pi">:</span>
<span class="s">Fn::GetOptionSetting</span><span class="pi">:</span>
<span class="s">OptionName</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">NumCacheNodes"</span>
<span class="s">DefaultValue</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">1"</span>
<span class="s">Engine</span><span class="pi">:</span>
<span class="s">Fn::GetOptionSetting</span><span class="pi">:</span>
<span class="s">OptionName</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">Engine"</span>
<span class="s">DefaultValue</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">redis"</span>
<span class="s">VpcSecurityGroupIds</span><span class="pi">:</span>
<span class="pi">-</span>
<span class="s">Fn::GetAtt</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">MyCacheSecurityGroup</span>
<span class="pi">-</span> <span class="s">GroupId</span>
<span class="s">Outputs</span><span class="pi">:</span>
<span class="s">ElastiCache</span><span class="pi">:</span>
<span class="s">Description</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">ID</span><span class="nv"> </span><span class="s">of</span><span class="nv"> </span><span class="s">ElastiCache</span><span class="nv"> </span><span class="s">Cache</span><span class="nv"> </span><span class="s">Cluster</span><span class="nv"> </span><span class="s">with</span><span class="nv"> </span><span class="s">Redis</span><span class="nv"> </span><span class="s">Engine"</span>
<span class="s">Value</span> <span class="pi">:</span>
<span class="s">Ref</span> <span class="pi">:</span> <span class="s2">"</span><span class="s">MyElastiCache"</span>
</pre></td></tr></tbody></table>
</div>
</div>
<p>And in <code class="highlighter-rouge">.ebextensions/options.config</code> add:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6</pre></td><td class="code"><pre><span class="s">option_settings</span><span class="pi">:</span>
<span class="s2">"</span><span class="s">aws:elasticbeanstalk:customoption"</span><span class="pi">:</span>
<span class="s">CacheNodeType</span> <span class="pi">:</span> <span class="s">cache.t1.micro</span>
<span class="s">NumCacheNodes</span> <span class="pi">:</span> <span class="s">1</span>
<span class="s">Engine</span> <span class="pi">:</span> <span class="s">redis</span>
<span class="s">CachePort</span> <span class="pi">:</span> <span class="s">6379</span>
</pre></td></tr></tbody></table>
</div>
</div>
<p>Further reading is available in the <a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-environment-resources-elasticache.html">AWS docs</a>.</p>
<p>To deploy run <code class="highlighter-rouge">eb deploy</code>.</p>
<p>Then configure your Rails application to read the ElastiCache host
information from the environment variables you set for your environment.</p>
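<p>As a minimal sketch of that configuration, the Redis URL can be assembled from environment variables. The names <code class="highlighter-rouge">CACHE_HOST</code> and <code class="highlighter-rouge">CACHE_PORT</code> are assumptions here; Elastic Beanstalk does not set them for you, so you would pass them e.g. via <code class="highlighter-rouge">eb setenv</code>:</p>

```ruby
# Build the Redis URL for Sidekiq from environment variables.
# CACHE_HOST / CACHE_PORT are assumed names -- Elastic Beanstalk does not
# set them automatically; pass them with `eb setenv CACHE_HOST=...`.
def redis_url(env = ENV)
  host = env.fetch('CACHE_HOST', 'localhost')
  port = env.fetch('CACHE_PORT', '6379')
  "redis://#{host}:#{port}/0"
end

# In config/initializers/sidekiq.rb you would then use it like:
#
#   Sidekiq.configure_server { |c| c.redis = { url: redis_url } }
#   Sidekiq.configure_client { |c| c.redis = { url: redis_url } }
```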
<p>And add the <code class="highlighter-rouge">worker</code> service to your <code class="highlighter-rouge">Dockerrun.aws.json</code>:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10</pre></td><td class="code"><pre><span class="p">{</span><span class="w">
</span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"worker"</span><span class="p">,</span><span class="w">
</span><span class="nt">"image"</span><span class="p">:</span><span class="w"> </span><span class="s2">".dkr.ecr.eu-central-1.amazonaws.com/velaluqa-test:latest"</span><span class="p">,</span><span class="w">
</span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"bundle exec sidekiq"</span><span class="p">,</span><span class="w">
</span><span class="nt">"environment"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span><span class="w">
</span><span class="nt">"essential"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nt">"memory"</span><span class="p">:</span><span class="w"> </span><span class="mi">128</span><span class="p">,</span><span class="w">
</span><span class="nt">"mountPoints"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span><span class="w">
</span><span class="nt">"portMappings"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></pre></td></tr></tbody></table>
</div>
</div>
<h3 id="branch-specific-environment">Branch-specific Environment</h3>
<p>Normally we use a CI service that checks out each commit on a branch
and deploys it to the respective environment.</p>
<p>If you want to initiate deployments manually, you might want to make sure
that <code class="highlighter-rouge">develop</code> only gets deployed to your staging environment, and never
to production. For that you can configure <em>branch-defaults</em> in
<code class="highlighter-rouge">.elasticbeanstalk/config.yml</code>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7</pre></td><td class="code"><pre><span class="c1"># ...</span>
<span class="s">branch-defaults</span><span class="pi">:</span>
<span class="s">develop</span><span class="pi">:</span>
<span class="s">environment</span><span class="pi">:</span> <span class="s">my-app-dev-staging</span>
<span class="s">master</span><span class="pi">:</span>
<span class="s">environment</span><span class="pi">:</span> <span class="s">my-app</span>
<span class="c1"># ...</span>
</pre></td></tr></tbody></table>
</div>
</div>
<h3 id="remote-logs">Remote Logs</h3>
<p>In the management console you can see a variety of logs. To get them
into your terminal you can use:</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>eb logs
</pre></td></tr></tbody></table>
</div>
</div>
<h3 id="remote-access">Remote Access</h3>
<p>To easily <code class="highlighter-rouge">ssh</code> into your machines, try <code class="highlighter-rouge">eb ssh</code>. After <code class="highlighter-rouge">sudo</code> you can
run commands like <code class="highlighter-rouge">docker ps</code> or hook into your running containers
via:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>docker exec -it <container-name> /bin/bash
</pre></td></tr></tbody></table>
</div>
</div>
<h3 id="load-balancing-and-websockets">Load Balancing and WebSockets</h3>
<p>Rails 5 introduced ActionCable, which allows for real-time
communication over WebSockets. For WebSockets to work, you have to
ensure that the Elastic Load Balancer listens for raw TCP traffic, not
just HTTP.</p>
<p>To change the setting via the CLI, type <code class="highlighter-rouge">eb config</code>, which will load your
environment's config in your <code class="highlighter-rouge">$EDITOR</code>. Search for <code class="highlighter-rouge">aws:elb:listener</code>
and change <code class="highlighter-rouge">InstanceProtocol</code> and <code class="highlighter-rouge">ListenerProtocol</code> from <code class="highlighter-rouge">HTTP</code>
to <code class="highlighter-rouge">TCP</code>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9</pre></td><td class="code"><pre><span class="c1"># ...</span>
<span class="s">aws:elb:listener:80</span><span class="pi">:</span>
<span class="s">InstancePort</span><span class="pi">:</span> <span class="s1">'</span><span class="s">80'</span>
<span class="s">InstanceProtocol</span><span class="pi">:</span> <span class="s">TCP</span>
<span class="s">ListenerEnabled</span><span class="pi">:</span> <span class="s1">'</span><span class="s">true'</span>
<span class="s">ListenerProtocol</span><span class="pi">:</span> <span class="s">TCP</span>
<span class="s">PolicyNames</span><span class="pi">:</span> <span class="s">null</span>
<span class="s">SSLCertificateId</span><span class="pi">:</span> <span class="s">null</span>
<span class="c1"># ...</span>
</pre></td></tr></tbody></table>
</div>
</div>
<p>After closing your editor, the configuration will be updated remotely.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Within a few minutes we created a private docker repository, pushed
our rails application container image and deployed a fresh environment
with all necessary services to AWS Elastic Compute Cloud.</p>
<p>I hope you gained some insight into how you can serve your
application. This is just the beginning. Next up, you may want to
increase the number of EC2 instances, adjust deployment schemes,
configure when and how autoscaling adds new instances/tasks, or
formalize your infrastructure setup via AWS CloudFormation.</p>
<p>Possibilities are endless, but most importantly you got your product
out there and are ready to test!</p>
<p><em>Arthur Andersen</em></p>
<h1 id="ruby-on-rails-5-in-docker">Ruby on Rails 5 in Docker</h1>
<p><em>2017-10-31, <a href="https://blog.velalu.qa/admin/tech/ruby/rails/docker/2017/10/31/rails-5-within-docker-containers">blog.velalu.qa</a></em></p>
<p>Docker helps package software into reusable containers. This allows
not only for a standardized development environment across your team,
but also for scaling your production deployments on most major cloud
platforms. This post explains how to dockerize a Ruby on Rails app
with PostgreSQL, Redis and Sidekiq.</p>
<!-- split -->
<h2 id="benefits">Benefits</h2>
<p>Docker has benefits not only for production, but also in development:</p>
<ul>
<li>Tested containers in production</li>
<li>Production-like environment across development machines</li>
<li>Quick bootstrap of development machines for new team members</li>
<li>Version-controlled environment definition via <code class="highlighter-rouge">docker-compose</code></li>
<li>Support of major cloud service providers</li>
</ul>
<h2 id="dockerize-rails">Dockerize Rails</h2>
<p>The first decision to make is which base image to use. It affects
the size of the resulting image and the dependencies available out of
the box.</p>
<p>These are some of our current go-to images, when we create new docker
containers for Ruby applications:</p>
<ul>
<li><code class="highlighter-rouge">ruby:<version></code> - official Ruby image based on Debian with most
common Debian packages installed, so that our own Dockerfile does
not have to install these.</li>
<li><code class="highlighter-rouge">ruby:alpine</code> - official Ruby image based on Alpine Linux, which is
much smaller than most Linux distributions (~5MB). This image does
not include any extra packages; we have to install them
ourselves in our own <code class="highlighter-rouge">Dockerfile</code>.</li>
</ul>
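<p>With <code class="highlighter-rouge">ruby:alpine</code>, for example, the top of the <code class="highlighter-rouge">Dockerfile</code> might start like this (a sketch; the exact <code class="highlighter-rouge">apk</code> package list depends on your gems):</p>

```dockerfile
FROM ruby:2.4-alpine
# Alpine ships almost no packages, so install the build toolchain and
# headers that common gems (pg, nokogiri, ...) need to compile.
RUN apk add --no-cache build-base postgresql-dev nodejs tzdata
```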
<h3 id="dockerfile">Dockerfile</h3>
<p>Create a <code class="highlighter-rouge">Dockerfile</code> in the root directory of your Rails application:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36</pre></td><td class="code"><pre>FROM ruby:2.4
# Install dependencies
RUN apt-get update -qq && \
apt-get install -y --no-install-recommends build-essential libpq-dev nodejs && \
rm -rf /var/lib/apt/lists/*
# Set the root of your Rails application
ENV RAILS_ROOT /app
RUN mkdir -p $RAILS_ROOT
# Set working directory to the root path of the Rails app
WORKDIR $RAILS_ROOT
# Do not install gem documentation
RUN echo 'gem: --no-ri --no-rdoc' > ~/.gemrc
# If we copy the whole app directory, the bundle would install
# everytime an application file changed. Copying the Gemfiles first
# avoids this and installs the bundle only when the Gemfile changed.
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN gem install bundler && \
bundle install --jobs 20 --retry 5
# Now copy the application code to the application directory
COPY . /app
# This scripts runs `rake db:create` and `rake db:migrate` before
# running the command given
ENTRYPOINT ["lib/support/docker-entrypoint.sh"]
EXPOSE 3000
# Default command is starting the rails server
CMD ["bin/rails", "s", "-b", "0.0.0.0"]
</pre></td></tr></tbody></table>
</div>
</div>
<h3 id="dockerignore"><code class="highlighter-rouge">.dockerignore</code></h3>
<p>The <code class="highlighter-rouge">.dockerignore</code> file excludes files and folders from the build
context before the Docker image is built:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3</pre></td><td class="code"><pre>db/*.sqlite3
tmp
log/*
</pre></td></tr></tbody></table>
</div>
</div>
<h3 id="entry-point">Entry point</h3>
<p>The entry point is run for each container. This script makes sure that
the database exists and migrations are up to date.</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13</pre></td><td class="code"><pre><span class="c">#!/bin/bash</span>
<span class="nb">echo</span> <span class="s2">"Creating database if it's not present..."</span>
bin/rails db:create
<span class="nb">echo</span> <span class="s2">"Migrating database..."</span>
bin/rails db:migrate
<span class="c"># If the container has been killed, there may be a stale pid file</span>
<span class="c"># preventing rails from booting up</span>
rm -f tmp/pids/server.pid
<span class="nb">exec</span> <span class="s2">"</span><span class="nv">$@</span><span class="s2">"</span>
</pre></td></tr></tbody></table>
</div>
</div>
<h3 id="service-configuration-via-docker-compose">Service Configuration via <code class="highlighter-rouge">docker-compose</code></h3>
<p><code class="highlighter-rouge">docker-compose</code> manages all containers needed for your environment. You
describe all services and how they interrelate in a
<code class="highlighter-rouge">docker-compose.yml</code> file and <code class="highlighter-rouge">docker-compose</code> takes care of starting
and linking them in the correct order.</p>
<p>The definition for a service contains the image, command, environment
variables, port mappings, container links and volume information.</p>
<p>For our Rails app we need the following services:</p>
<ul>
<li>PostgreSQL</li>
<li>Redis</li>
<li>Web Application Server (Rails)</li>
<li>Background Worker (Sidekiq)</li>
<li>Test Runner (Guard)</li>
</ul>
<p>The actual definition looks like this:</p>
<div class="language-ruby highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47</pre></td><td class="code"><pre><span class="ss">postgres:
image: </span><span class="n">postgres</span><span class="p">:</span><span class="mi">9</span><span class="o">.</span><span class="mi">6</span>
<span class="ss">ports:
</span><span class="o">-</span> <span class="mi">5432</span><span class="p">:</span><span class="mi">5432</span>
<span class="ss">volumes:
</span><span class="o">-</span> <span class="p">.</span><span class="nf">/</span><span class="n">tmp</span><span class="o">/</span><span class="n">postgresql</span><span class="o">/</span><span class="mi">9</span><span class="o">.</span><span class="mi">6</span><span class="o">/</span><span class="n">data</span><span class="ss">:/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">postgresql</span><span class="o">/</span><span class="n">data</span>
<span class="ss">redis:
image: </span><span class="n">redis</span>
<span class="ss">ports:
</span><span class="o">-</span> <span class="mi">6379</span><span class="p">:</span><span class="mi">6379</span>
<span class="ss">volumes:
</span><span class="o">-</span> <span class="p">.</span><span class="nf">/</span><span class="n">tmp</span><span class="o">/</span><span class="n">redis</span><span class="o">/</span><span class="n">data</span><span class="ss">:/</span><span class="n">data</span>
<span class="ss">app:
build: </span><span class="o">&</span><span class="n">build</span> <span class="p">.</span>
<span class="nf">command</span><span class="p">:</span> <span class="n">rails</span> <span class="n">s</span> <span class="o">-</span><span class="n">b</span> <span class="mi">0</span><span class="o">.</span><span class="mi">0</span><span class="o">.</span><span class="mi">0</span><span class="o">.</span><span class="mi">0</span> <span class="o">-</span><span class="nb">p</span> <span class="mi">3000</span>
<span class="ss">tty: </span><span class="kp">true</span>
<span class="ss">volumes: </span><span class="o">&</span><span class="n">volumes</span>
<span class="o">-</span> <span class="o">.</span><span class="ss">:/</span><span class="n">app</span>
<span class="o">-</span> <span class="p">.</span><span class="nf">/</span><span class="n">config</span><span class="o">/</span><span class="n">database</span><span class="p">.</span><span class="nf">yml</span><span class="p">.</span><span class="nf">dev</span><span class="ss">:/</span><span class="n">app</span><span class="o">/</span><span class="n">config</span><span class="o">/</span><span class="n">database</span><span class="p">.</span><span class="nf">yml</span>
<span class="ss">ports:
</span><span class="o">-</span> <span class="mi">3000</span><span class="p">:</span><span class="mi">3000</span>
<span class="ss">environment: </span><span class="o">&</span><span class="n">environment</span>
<span class="no">DB_USERNAME</span><span class="p">:</span> <span class="n">postgres</span>
<span class="no">DB_PASSWORD</span><span class="p">:</span>
<span class="ss">links: </span><span class="o">&</span><span class="n">links</span>
<span class="o">-</span> <span class="n">postgres</span>
<span class="o">-</span> <span class="n">redis</span>
<span class="o">-</span> <span class="n">worker</span>
<span class="ss">worker:
build: </span><span class="o">*</span><span class="n">build</span>
<span class="ss">command: </span><span class="n">bundle</span> <span class="nb">exec</span> <span class="n">sidekiq</span>
<span class="ss">volumes: </span><span class="o">*</span><span class="n">volumes</span>
<span class="ss">environment: </span><span class="o">*</span><span class="n">environment</span>
<span class="ss">links:
</span><span class="o">-</span> <span class="n">postgres</span>
<span class="o">-</span> <span class="n">redis</span>
<span class="ss">test:
build: </span><span class="o">*</span><span class="n">build</span>
<span class="ss">command: </span><span class="n">guard</span>
<span class="ss">tty: </span><span class="kp">true</span>
<span class="ss">volumes: </span><span class="o">*</span><span class="n">volumes</span>
<span class="ss">environment:
</span><span class="o"><<</span><span class="p">:</span> <span class="o">*</span><span class="n">environment</span>
<span class="no">RAILS_ENV</span><span class="p">:</span> <span class="nb">test</span>
<span class="ss">links:
</span><span class="o">-</span> <span class="n">postgres</span>
<span class="o">-</span> <span class="n">redis</span>
</pre></td></tr></tbody></table>
</div>
</div>
<h3 id="building-the-image">Building the image</h3>
<p>Whenever you change the <code class="highlighter-rouge">Gemfile</code> or want to update the
container, run this command:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>docker-compose build app test worker
</pre></td></tr></tbody></table>
</div>
</div>
<h2 id="development">Development</h2>
<p>In development you can now start the whole environment specified in
the <code class="highlighter-rouge">docker-compose.yml</code> file via one simple command:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>docker-compose run -p 3000:3000 app
</pre></td></tr></tbody></table>
</div>
</div>
<p>This will start the app container and map the port 3000 to your local
port 3000.</p>
<p>You might ask yourself why I do not use the <code class="highlighter-rouge">docker-compose up</code>
command. With <code class="highlighter-rouge">run</code> we get better terminal output: in development
we might want to use <code class="highlighter-rouge">pry</code> or similar gems to debug certain
scenarios, and with <code class="highlighter-rouge">docker-compose up</code> I had problems with interactive prompts that were
absent with <code class="highlighter-rouge">docker-compose run</code>. The only drawback is that we have to
specify the port mappings manually for <code class="highlighter-rouge">docker-compose run</code>.</p>
<h3 id="common-commands--tasks">Common Commands & Tasks</h3>
<table>
<thead>
<tr>
<th>Common Command</th>
<th>With <code class="highlighter-rouge">docker-compose</code></th>
</tr>
</thead>
<tbody>
<tr>
<td><code class="highlighter-rouge">bundle install</code></td>
<td><code class="highlighter-rouge">bundle install; docker-compose build app worker test</code></td>
</tr>
<tr>
<td><code class="highlighter-rouge">rails s</code></td>
<td><code class="highlighter-rouge">docker-compose run -p 3000:3000 app</code></td>
</tr>
<tr>
<td><code class="highlighter-rouge">rake</code></td>
<td><code class="highlighter-rouge">docker-compose run app rake</code></td>
</tr>
<tr>
<td><code class="highlighter-rouge">tail -f log/development.log</code></td>
<td><code class="highlighter-rouge">docker-compose logs app</code></td>
</tr>
<tr>
<td><code class="highlighter-rouge">RAILS_ENV=test rake db:create</code></td>
<td><code class="highlighter-rouge">docker-compose run test rake db:create</code></td>
</tr>
</tbody>
</table>
<h3 id="running-tests">Running Tests</h3>
<p>We use guard to watch our source files and run the respective tests
automatically. When you look into the <code class="highlighter-rouge">docker-compose.yml</code> you will
find a service called <code class="highlighter-rouge">test</code>. This runs the guard server and can be
started like this:</p>
<div class="highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1</pre></td><td class="code"><pre>docker-compose run test
</pre></td></tr></tbody></table>
</div>
</div>
<p>Guard listens for file changes in your project directory, so when you
change a source file, the respective test is executed instead of the
whole test suite.</p>
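<p>Under the hood, a Guard watch rule is just a path rewrite from a changed source file to its spec. A minimal sketch, assuming RSpec and the default <code class="highlighter-rouge">spec/</code> layout:</p>

```ruby
# A watch rule from a typical Guardfile, expressed as plain Ruby:
#   watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
# maps a changed file under app/ to the spec Guard should run.
def spec_for(path)
  m = path.match(%r{^app/(.+)\.rb$})
  m && "spec/#{m[1]}_spec.rb"
end

spec_for('app/models/user.rb')  # => "spec/models/user_spec.rb"
```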
<h3 id="webpacker-with-docker-compose">Webpacker With Docker-Compose</h3>
<p>For our latest Rails 5.1 projects, we have been using webpacker quite
successfully. In development you have to start the
<code class="highlighter-rouge">webpacker-dev-server</code>. It must be reachable both by the
Rails development server and from the outside, since the browser loads
the assets from it.</p>
<p>For that we had to adjust the dev-server hostname that the Rails server uses in
<code class="highlighter-rouge">config/webpacker.yml</code>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11</pre></td><td class="code"><pre><span class="c1"># ...</span>
<span class="s">development</span><span class="pi">:</span>
<span class="s"><<</span><span class="pi">:</span> <span class="nv">*default</span>
<span class="s">compile</span><span class="pi">:</span> <span class="s">true</span>
<span class="s">dev_server</span><span class="pi">:</span>
<span class="s">host</span><span class="pi">:</span> <span class="s">webpacker</span>
<span class="s">port</span><span class="pi">:</span> <span class="s">3035</span>
<span class="s">hmr</span><span class="pi">:</span> <span class="s">true</span>
<span class="s">https</span><span class="pi">:</span> <span class="s">false</span>
<span class="c1"># ...</span>
</pre></td></tr></tbody></table>
</div>
</div>
<p>And add a linked service to the <code class="highlighter-rouge">docker-compose.yml</code>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><table style="border-spacing: 0"><tbody><tr><td class="gutter gl" style="text-align: right"><pre class="lineno">1
2
3
4
5
6
7
8
9</pre></td><td class="code"><pre><span class="s">webpacker</span><span class="pi">:</span>
<span class="s">build</span><span class="pi">:</span> <span class="nv">*build</span>
<span class="s">command</span><span class="pi">:</span> <span class="s">bin/webpack-dev-server --host localhost</span>
<span class="s">entrypoint</span><span class="pi">:</span> <span class="s2">"</span><span class="s">"</span>
<span class="s">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">3035:3035</span>
<span class="s">volumes</span><span class="pi">:</span> <span class="nv">*volumes</span>
<span class="s">environment</span><span class="pi">:</span>
<span class="s">RAILS_ENV</span><span class="pi">:</span> <span class="s">development</span>
</pre></td></tr></tbody></table>
</div>
</div>
<p>The <code class="highlighter-rouge">--host localhost</code> option makes sure that the assets are loaded
from <code class="highlighter-rouge">localhost</code> and that hot-module replacement connects to the correct
host (in this case the forwarded port <code class="highlighter-rouge">3035</code> on <code class="highlighter-rouge">localhost</code>).</p>
<p>Remember to add the <code class="highlighter-rouge">webpacker</code> service to the <code class="highlighter-rouge">app → links</code> definition so that
the <code class="highlighter-rouge">webpacker-dev-server</code> is started automatically with <code class="highlighter-rouge">docker-compose
run -p 3000:3000 app</code>.</p>
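<p>As a minimal sketch (assuming your Rails service is called <code class="highlighter-rouge">app</code> and reuses the build and volume anchors from the examples above), the relevant part of the <code class="highlighter-rouge">docker-compose.yml</code> could look like this:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>app:
  build: *build
  volumes: *volumes
  # ... ports, environment, etc. as before ...
  links:
    - webpacker
</code></pre></div></div>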
<h2 id="deployment-possibilities">Deployment Possibilities</h2>
<p>Normally we integrate deployment into our continuous integration
system. Whenever we push to the main repository, a job builds the
image and runs all tests; when everything is green, it pushes the image
to the Docker registry. Depending on the branch and tag of the commit, it
then gets deployed to the respective environment.</p>
<p>You could deploy to a variety of servers and cloud platforms:</p>
<h3 id="single-host---docker-compose">Single Host - <code class="highlighter-rouge">docker-compose</code></h3>
<p>The simplest scenario would be to deploy to a dedicated server which
has Docker and <code class="highlighter-rouge">docker-compose</code> installed.</p>
<p>In this scenario you have to create a <code class="highlighter-rouge">docker-compose.yml</code> file, for
instance at <code class="highlighter-rouge">/srv/docker-compose.yml</code>, adjust some port mappings and
maybe add an nginx reverse proxy.</p>
<p>Using the <code class="highlighter-rouge">restart: always</code> directive, Docker takes care
of restarting your services automatically, for instance after a crash or reboot.</p>
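<p>A minimal sketch of such a file (image names and ports are placeholders, not the actual setup from this post) might look like this:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code># /srv/docker-compose.yml
version: "3"
services:
  app:
    image: registry.example.com/my-app:latest
    restart: always
  nginx:
    image: nginx:alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    links:
      - app
</code></pre></div></div>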
<p>Further reading: <a href="https://docs.docker.com/compose/">docs.docker.com</a></p>
<h3 id="dokku">Dokku</h3>
<p>Dokku is a lightweight PaaS solution: you install it on a server and
then manage deployments with simple <code class="highlighter-rouge">git push</code> commands, much like on Heroku.</p>
<p>Further reading: <a href="http://dokku.viewdocs.io/dokku/">dokku.viewdocs.io</a></p>
<h3 id="amazon-aws-elastic-beanstalk">Amazon AWS Elastic Beanstalk</h3>
<p>Amazon Web Services provides a large toolbelt to host, scale and
manage cloud infrastructure in Amazon data centers. Elastic Beanstalk
is our favourite tool to quickly deploy Docker environments to AWS EC2
instances.</p>
<p>It provides a simple command-line interface (<code class="highlighter-rouge">awsebcli</code>) on top of the
basic AWS CLI tool (<code class="highlighter-rouge">awscli</code>). Everything you could do with the AWS
Management Console, you can do with the command-line interface as
well.</p>
<p>We have written a blog
post
<a href="/admin/tech/ruby/rails/docker/2017/11/06/rails-in-docker-on-aws-elastic-beanstalk.html">“Rails in Docker via AWS Elastic Beanstalk”</a>,
which outlines how you can deploy a dockerized Rails app to AWS with
auto-scaling and load-balancing in a few minutes.</p>
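<p>To give a rough idea (application and environment names here are made-up placeholders), a typical <code class="highlighter-rouge">awsebcli</code> workflow looks like this:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># one-time setup in the project directory
eb init my-app --platform docker --region eu-central-1
# create an environment (provisions EC2 instances, a load balancer, etc.)
eb create my-app-production
# deploy the current version
eb deploy
</code></pre></div></div>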
<p>Further reading: <a href="https://aws.amazon.com">aws.amazon.com</a></p>
<h3 id="heroku">Heroku</h3>
<p>Heroku is a cloud service provider with very easy-to-use command-line
utilities to quickly deploy applications to production environments.</p>
<p>Further reading: <a href="https://heroku.com">heroku.com</a></p>
<h2 id="conclusion">Conclusion</h2>
<p>Docker helps package applications into easily deployable containers,
not only for production but for development environments as well.</p>
<p>I hope this article has helped you gain a good understanding of the
benefits and some of the pitfalls of Rails inside Docker, and that you can now
evaluate whether Docker is worth looking into for
your project too.</p>Arthur AndersenDocker helps package software into reusable containers. This allows
not only for standardized development environment across your team,
but also for scaling your production deployments on most major cloud
platforms. This post explains how to dockerize a Ruby on Rails app
with PostgreSQL, Redis and Sidekiq.Iteration and Integration2017-10-04T00:00:00+00:002017-10-04T00:00:00+00:00https://blog.velalu.qa/product/2017/10/04/iteration-and-integration<p>In the article <a href="/product/2017/09/04/why-an-mvp.html">“Why an MVP?”</a> I
described what an MVP is and what the development process looks like.
This is based on the idea that there should be no assumptions without
data to support them. The same holds true during the development
process.</p>
<!-- split -->
<h2 id="rigid-methodologies">Rigid Methodologies</h2>
<p>Before the advent of agile software development methodologies, the
usual practice was to gather all requirements beforehand and plan for
all eventualities that might arise during the development of the actual
software product.</p>
<p>This method had its merits in manufacturing processes of physical
products, where the cycle from inception of an idea to holding the
artifact in hand is quite long and trial and error would be very
expensive.</p>
<h2 id="example-waterfall">Example: Waterfall</h2>
<p>An example for such a rigid process is the waterfall model: The client
sits down with the development team and creates a document with all
requirements before development begins. This allows for estimation and
discussions. During development the client is not involved.</p>
<p>The usual steps are:</p>
<ol>
<li>System and software requirements are captured in a product requirement document</li>
<li>Analysis of the requirements results in models, schemas and business rules</li>
<li>Design of the resulting software architecture</li>
<li>Implementation of the software</li>
<li>Testing via systematic debugging</li>
<li>Operations: installation, migration, support and maintenance of complete systems</li>
</ol>
<p><img src="/assets/images/posts/2017-10-04-iteration-and-integration/waterfall.svg" alt="" class="rounded" /></p>
<p>The waterfall model states that one should not move to the next phase
unless its preceding phase is reviewed and verified.</p>
<h2 id="the-agile-model">The Agile Model</h2>
<p>Agile models build on the same ideas that we discussed in
the <a href="/product/2017/09/04/why-an-mvp.html">“Why an MVP?”</a> article. You
might not know what the finished product should look like, so you want
to be kept in the loop throughout the development to adjust its
direction.</p>
<p>Thus, agile methodologies advocate an incremental development model.
Essentially, the phases of the waterfall model are incorporated into a
cyclic iteration in which you go through all the steps:</p>
<ol>
<li>Initial planning of the project</li>
<li>Communication to refine software requirements</li>
<li>Analysis of the requirements, incorporating new findings into the planning</li>
<li>Design of the resulting software architecture for the upcoming iteration</li>
<li>Implementation of the iteration’s milestone</li>
<li>Integration of the iteration’s increment</li>
</ol>
<p>Within this cycle, steps 2 to 5 are repeated until the product is “finished”.</p>
<p><img src="/assets/images/posts/2017-10-04-iteration-and-integration/iterating.svg" alt="" class="rounded" /></p>
<h2 id="iteration">Iteration</h2>
<p>An iteration is an interval of fixed length to work on the core values
of the product. A successful iteration produces a working increment of
the software product which moves you closer to your desired outcome.</p>
<h3 id="planning">Planning</h3>
<p>The length of such an iteration depends on the size of your team and
the speed of the development. The shorter the intervals, the larger
the meeting overhead, since you would have to discuss upcoming and
past iterations more often. In the beginning it might be
useful to have a shorter interval to allow for quicker changes in
direction. After a few iterations you can increase the interval length,
as the business goals become clearer to all team members.</p>
<p>Before an iteration begins, you have to describe the tasks that
should be done. These become part of your “backlog”, from which you can
choose a certain number of tasks for upcoming iterations.</p>
<p>As described previously, each iteration should produce an increment that
can be deployed to actual users, so you would have to prioritize tasks
that can be done within an iteration. If you have large features that
might not be doable within one iteration, try to split them up, giving
the user a smaller version of the desired functionality. This way you
are thinking in terms of “launchability” from the get-go.</p>
<p>When planning the upcoming iteration, you sit down with a developer and
discuss which tasks can be done within the next iteration. As with
every estimate, this is not about agreeing on a strict outcome, but about
finding a shared understanding of what to expect at the end of the iteration.</p>
<p>This expectation should be described in a “definition of done”. For
that we like to use the Gherkin language, a “business
readable, domain-specific language” that lets us describe software
behaviour without detailing how that behaviour is implemented.</p>
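<p>As an illustration (the feature and its steps are made up, not taken from a real project), such a “definition of done” in Gherkin might read:</p>
<div class="language-gherkin highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Feature: User login
  Scenario: Successful login with valid credentials
    Given a registered user with the email "user@example.com"
    When she signs in with her email and the correct password
    Then she sees her personal dashboard
</code></pre></div></div>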
<h3 id="integration--testing">Integration & Testing</h3>
<p>When a task is done, the work should be integrated into a running
environment. For testing purposes this might at first be a staging
environment, where you can manually test the functionality of your
software.</p>
<p>If the work does not meet the “definition of done”, you can give
feedback to the developer, so she can adjust accordingly. If it does
match the “definition of done”, you can create a new task in the
backlog that describes the changes necessary in upcoming iterations.</p>
<p>It may happen that you find bugs when testing. If you do, you create
new tasks describing the issue. The description should provide details
of the expected behaviour, the actual behaviour, the steps to
reproduce the issue and the specific acceptance criteria that the
behaviour violates. Usually you can reference a specific scenario
from the feature definitions.</p>
<p>At the end of the iteration, when everything is tested and the work
has been approved for the increment, the software can be deployed to
the production environment.</p>
<h2 id="tldr">TL;DR</h2>
<p>To ensure the success of your project and to stay within budget, you
should iterate towards your goals in manageable steps. Working with a
team that focuses on good practices and is able to adjust to your
needs is essential.</p>Arthur AndersenIn the article “Why an MVP?” I
described what an MVP is and what the development process looks like.
This is based on the idea that there should be no assumptions without
data to support them. The same holds true during the development
process.