Monday, January 18, 2016

Automating deployment of Lektor blog sites.

Towards the end of last year, Armin Ronacher formally announced a new project of his called Lektor. Armin is one of those developers who, when he creates a new piece of software or publishes a blog post, you should always pay attention to. His knowledge and attention to detail are something everyone should aspire to. So ever since he announced Lektor I have been aiming to put aside some time to have a play with it. I am hoping one day I can stop using Blogger for my own blog site and use something like Lektor instead. That isn’t to say Lektor is only for blog sites; it can be used for any site which could ultimately be hosted as a plain static web site and doesn’t need a full on dynamic web site framework.

Although Lektor itself handles well the task of generating the static content for a web site and has some support for deploying the generated files, I am not too keen on any of the deployment options currently provided. I thought therefore I would have a bit of a play with automated deployment of a Lektor blog using the Open Source Source to Image project for Docker, and also OpenShift. The goal I initially wanted to achieve was that simply by pushing completed changes for my blog site to a GitHub repository, my web site would be automatically updated. If that worked out okay, then the next step would be to work out how I could transition my existing blog posts off Blogger, including how to implement redirects if necessary so existing URLs would still work and map into any new URL naming convention for the new design.

Creating an initial site template

Lektor more than adequately covers the creation of an initial empty site template in its quick start guide so I will not cover that here.

My only grumble with that process was that it doesn’t like the directory you want to use to already exist. If you try and use an existing directory you get an error like:

Error: Could not create target folder: [Errno 17] File exists: '/Users/graham/Projects/lektor-empty-site'
Aborted!

For me this came about because I created a repository on GitHub first, made a local checkout and then tried to populate it such that everything was at the top level of the repository rather than in a sub directory. Lektor doesn’t like this. You therefore either have to create the project in a subdirectory and then move everything manually to the top level, or create the project first and only then do ‘git init’ and link it to the remote GitHub repository. If there is a way of using Lektor such that I could have populated the current directory rather than having to use a new directory, then do please let me know.
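
For the second of those workarounds, a sequence along these lines works; the project and repository names here simply match the example repository used later in this post:

$ lektor quickstart
$ cd lektor-empty-site
$ git init
$ git add .
$ git commit -m 'Initial site template.'
$ git remote add origin https://github.com/GrahamDumpleton/lektor-empty-site.git
$ git push -u origin master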

Having created the initial site template, we can then go in and make our modifications, using ‘lektor server’ on our local machine to view our updates as we make them.

Deploying static site content

When it comes to deploying a site created using Lektor, that is where you need to move beyond the inbuilt server it provides. Because Lektor generates purely static content, you don’t need a fancy dynamic web application server and any web server capable of hosting static files will do.

There are certainly any number of hosting services still around who will host a static web site for you, or you could use S3 or GitHub pages, but I wanted something which I had a bit more control over and visibility into when there is an issue. I also don’t really want to be pushing from my local machine direct to the hosting service either. I like the idea of a workflow where things go via the Git repository where the original files for the site are located. This would allow me to coordinate with others working on a site as well, using all the same sorts of workflows one would use for normal software development, such as branching, to handle working on and finally release of the content for the site.

For hosting of minimal hand crafted static sites I have in the past used the free tiers of some of the more popular platform as a service offerings (PaaS), but because these services have traditionally been biased towards dynamic web applications, that meant wrapping up the static content within a custom Python web application, using something like the WhiteNoise WSGI middleware to handle serving the static file content.

This works, but you aren’t using a proper web server designed for static file hosting, so it isn’t the best option for a more significant site which needs to handle a lot of traffic.

What could I do then if I want to use a proper web server such as Apache or nginx?

The problem in using a traditional PaaS is that in general they do not provide either Apache or nginx as part of their standard environment and they can make it very difficult to actually install it. Alternatively, they might use Apache, but because of a fixed configuration and no ability to change it, you can’t just drop static files in and have them appear at the URL you would prefer to have them.

Using a Docker based PaaS

Now these days, because of my work with Red Hat, I get a lot of opportunity to play with Docker and Red Hat’s newest incarnation of their PaaS offering. This is OpenShift 3 and it is a complete rewrite of the prior version of OpenShift as most would know it. In OpenShift 3 Docker is used, instead of a custom container solution, with Kubernetes handling scheduling of those containers.

Because OpenShift 3 is Docker based, this means one has much greater control over what you can deploy to a container. So whereas with a traditional PaaS your options may have been limited, with OpenShift 3 and Docker you can pretty well do whatever you want in the container and use whatever software or web server you want to use.

Given the ability to use Docker, I could therefore consider setting up a traditional Apache or nginx web server. If I were to go down that path there are even existing Docker images for both Apache and nginx on the Docker Hub registry for hosting static web sites.

The problem with using such existing Docker images though is that when using Lektor, you need to trigger a build step to generate the static files from the original source files. This requires having Lektor installed to run the build step, which also means having a working Python installation as well. These base images for Apache and nginx aren’t general purpose images though and are not going to have Python installed. As a result, the generation of the static files would need to be done using a separate system first before then somehow being combined with the base image.

The alternative is to start out with one of the web server images and create a new base image based on it which adds Python and Lektor. Conversely, you could start out with a base image for Python and then install Lektor and either Apache or nginx.

With a base image which then incorporated both the web server and Lektor, and a default ‘CMD’ action to start the web server, you could within the Lektor project for your blog site add a ‘Dockerfile’ which ran ‘lektor build’ to generate the static content as part of the build for the Docker image.
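
As a rough sketch of that last approach, and assuming a hypothetical base image which already bundles nginx, Python and Lektor, the ‘Dockerfile’ added to the Lektor project might look something like the following, with the document root path just being an example:

# 'my-nginx-lektor-base' is a hypothetical base image bundling nginx,
# Python and Lektor, with a default command that starts nginx against
# its standard document root.
FROM my-nginx-lektor-base

# Copy in the Lektor project and generate the static files at image build time.
COPY . /site
WORKDIR /site
RUN lektor build --output-path /usr/share/nginx/html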

No matter what route you take here, they all seem a bit fiddly and would still entail a fair bit of work to get some sort of automated workflow going around them.

Builds using Source to Image

As it turns out, an Open Source project already exists which has done a lot of the work to build that workflow. It is the project called Source to Image (S2I).

If you are familiar with the concept of build packs or cartridges as they existed for traditional PaaS offerings, think of S2I as the Docker replacement for those.

The idea behind S2I is that you have a Docker image which defines what is called a builder. This is effectively a Docker base image that combines all the common stuff that would be required for deploying software for a given problem domain, for example Python web applications, along with a special ‘assemble’ script which takes your source files and combines them with the base image to create a new Docker image to be run as the actual application.

When combining the source files with the base image, if they are actual application code files, they might be compiled into an application executable, or if using a scripting language simply copied into place to be executed by an application web server. Alternatively, the source files could be some sort of data input files that are to be used directly by an application, or after some translation process has been done. In other words, you aren’t restricted to using S2I builders just to create a new application. Instead an S2I builder could be used to combine a SaaS (Software as a Service) like application with the data it needs to run.

Whatever the purpose of the builder and the resulting application, a further key component supplied by the S2I builder is a ‘run’ script. It is this script which is executed when the Docker image is run and which starts up the actual application.

So an S2I builder contains all the base software components that would be required for an application, plus the ‘assemble’ and ‘run’ scripts defining how the source code is combined with the builder image and then subsequently how to start the application.

What isn’t obvious is how our source files get copied in as part of this process. This is where the ‘s2i’ program from the Source to Image package comes into play. It is this which takes the source code, injects it into our running S2I builder, triggers the ‘assemble’ script and then snapshots the container to create a new Docker image.

To make things a little clearer, let’s try an example.

For this I am going to use an S2I builder which has been created for use with OpenShift for deploying Python web applications. This S2I builder can be found on the Docker Hub registry and is called ‘openshift/python-27-centos7’.

In using the ‘s2i’ program there are two ways that you can supply your source files. The first is to point at a remote Git repository hosted somewhere like GitHub. The second is to point at a local file system directory containing the source files.
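
If you have the source files checked out locally, you can equally point ‘s2i’ at the directory instead of a Git URL, with something along the lines of:

$ s2i build . openshift/python-27-centos7 my-python-app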

In this case I am going to use the repository on GitHub located at:

  • https://github.com/GrahamDumpleton/wsgi-hello-world

The ‘s2i’ program is now run, supplying it the location of the source files, the name of the S2I builder image on the Docker Hub registry and the name to be given to the Docker image produced and which will contain our final application.

$ s2i build https://github.com/GrahamDumpleton/wsgi-hello-world.git openshift/python-27-centos7 my-python-app
---> Copying application source ...
---> Installing dependencies ...
Downloading/unpacking gunicorn (from -r requirements.txt (line 1))
Installing collected packages: gunicorn
...
Successfully installed gunicorn
Cleaning up...

$ docker images
REPOSITORY      TAG      IMAGE ID       CREATED          VIRTUAL SIZE
my-python-app   latest   beda88ceb3ad   14 minutes ago   444.1 MB

With the build complete, we can now run our application.

$ docker run --rm -p 8080:8080 my-python-app
---> Serving application with gunicorn (wsgi) ...
[2016-01-17 10:49:58 +0000] [1] [INFO] Starting gunicorn 19.4.5
[2016-01-17 10:49:58 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
[2016-01-17 10:49:58 +0000] [1] [INFO] Using worker: sync
[2016-01-17 10:49:58 +0000] [30] [INFO] Booting worker with pid: 30

and access it using ‘curl’ to validate it works.

$ curl $(docker-machine ip default):8080
Hello World!

The important thing to understand here is that it wasn’t necessary to define how to create the Docker image. That is, all the WSGI ‘Hello World’ Git repository contained was:

$ ls -las
total 40
0 drwxr-xr-x 8 graham staff 272 7 Jan 19:21 .
0 drwxr-xr-x 71 graham staff 2414 17 Jan 14:05 ..
0 drwxr-xr-x 15 graham staff 510 17 Jan 15:18 .git
8 -rw-r--r-- 1 graham staff 702 6 Jan 17:07 .gitignore
8 -rw-r--r-- 1 graham staff 1300 6 Jan 17:07 LICENSE
8 -rw-r--r-- 1 graham staff 163 6 Jan 17:09 README.rst
8 -rw-r--r-- 1 graham staff 9 6 Jan 21:05 requirements.txt
8 -rw-r--r-- 1 graham staff 278 6 Jan 17:09 wsgi.py

There was no ‘Dockerfile’. It is the ‘s2i’ program in combination with the S2I builder image which does all this for you.

An S2I builder for static hosting

As you can see from above, the S2I concept already solves some of our problems of how to manage the workflow for creating a Docker image which contains our web site built using Lektor.

The part of the puzzle we still need though is a Docker base image which combines both a web server and a Python runtime, and to which we can add the ‘assemble’ and ‘run’ scripts to create an S2I builder image.

This is where I am going to cheat a little bit.

This is because although I demonstrated an S2I builder for Python above, I actually have my own separate S2I builder for Python web applications. My own S2I builder is more flexible in its design than the OpenShift S2I builder. One of the things it supports is the use of Apache/mod_wsgi for hosting a Python web application. To do this it is using ‘mod_wsgi-express’.

One of the features that ‘mod_wsgi-express’ happens to have is an easy ability to host static files using Apache in conjunction with your Python web application. It even has a mode whereby you can say that you are only hosting static files and don’t actually have a primary Python web application.

So although primarily designed for hosting Python web applications, my existing S2I builder for Python web applications provides exactly what we need in this case. That is, it combines in one base image a Python runtime, along with Apache, as well as an easy way to start Apache against static file content.

If we were running on our normal machine at this point and not using Docker, the steps required to build our static files from our Lektor project and host it using ‘mod_wsgi-express’ would be as simple as:

$ lektor build --output-path /tmp/data
Started build
U index.html
U about/index.html
U projects/index.html
U blog/index.html
U static/style.css
U blog/first-post/index.html
Finished build in 0.07 sec
Started prune
Finished prune in 0.00 sec
 
$ mod_wsgi-express start-server --application-type static --document-root /tmp/data
Server URL : http://localhost:8000/
Server Root : /tmp/mod_wsgi-localhost:8000:502
Server Conf : /tmp/mod_wsgi-localhost:8000:502/httpd.conf
Error Log File : /tmp/mod_wsgi-localhost:8000:502/error_log (warn)
Request Capacity : 5 (1 process * 5 threads)
Request Timeout : 60 (seconds)
Queue Backlog : 100 (connections)
Queue Timeout : 45 (seconds)
Server Capacity : 20 (event/worker), 20 (prefork)
Server Backlog : 500 (connections)
Locale Setting : en_AU.UTF-8

We could then access our web site created by Lektor at the URL ‘http://localhost:8000/’.

Even though this appears so simple, it is actually running a complete instance of Apache. It is this easy because ‘mod_wsgi-express’ does all the hard work of automatically generating the Apache configuration files to use for this specific site based only on the command line arguments provided. The configuration files for this instance are generated totally independently of any existing configuration for the main Apache installation on your machine and so will not interfere with it.

An S2I builder for Lektor

In order to now create our S2I builder for Lektor, we are going to build on my existing S2I builder base image for Python web applications. I don’t believe I have specifically blogged about my S2I builder for Python before, although I have mentioned before some of the work I have been doing on Docker base images for Python web applications.

The existing Docker base image for Python web applications is on the Docker Hub registry as ‘grahamdumpleton/mod-wsgi-docker’. As for the S2I builder support I have been working on, this has been rolled into that same image, although if wishing to use it as an S2I builder you will need to instead use ‘grahamdumpleton/mod-wsgi-docker-s2i’. This latter image is pretty minimal and just sets the user to run as, the port to be exposed and a default command.

# grahamdumpleton/mod-wsgi-docker-s2i:python-2.7

FROM grahamdumpleton/mod-wsgi-docker:python-2.7
USER 1001
EXPOSE 80
CMD [ "/usr/local/s2i/bin/usage" ]

For our Lektor S2I builder image, what we are now going to use is the following ‘Dockerfile’.

# grahamdumpleton/s2i-lektor:1.1

FROM grahamdumpleton/mod-wsgi-docker-s2i:python-2.7
RUN pip install Lektor==1.1
COPY .whiskey /app/.whiskey/

This ‘Dockerfile’ only does two additional things on top of the underlying S2I builder for Python. The first is to install Lektor and the second is to copy in some extra files into the Docker image. Those extra files are:

.whiskey/server_args
.whiskey/action_hooks/build

What you will note is that we aren’t actually adding any ‘assemble’ or ‘run’ scripts as we have talked about. This is because these already exist in the base image and already do everything we need to prepare the image and then start up a web server for us.

Unlike the OpenShift S2I Python builder, the ‘assemble’ and ‘run’ scripts here are designed with a means for application specific hooks to be supplied, to perform additional steps at the time the image is built or the application is deployed. This is what the two files we copied into the image are for.

Of these, the ‘.whiskey/action_hooks/build’ file is a shell script which is invoked by the ‘assemble’ script during the build of the Docker image. What it contains is:

#!/usr/bin/env bash

lektor build --output-path /data

This will be run by the ‘assemble’ script in the same directory as the source files that were copied into the image from either the local source directory or the remote Git repository.

This script therefore is what is going to trigger Lektor to generate the static files for our site. The files will be generated into the ‘/data’ directory.

The second file called ‘.whiskey/server_args’ contains:

--application-type static --document-root /data

With the way that the base image is setup, and ‘run’ called when the Docker image is started, it will by default automatically run up ‘mod_wsgi-express’. It will do this with a number of default options which are required when running ‘mod_wsgi-express’ in a Docker container, such as directing logging to the terminal so that Docker can capture it. What the ‘server_args’ file does is allow us to supply any additional options to ‘mod_wsgi-express’. In this case we are giving it options to specify that it is to host static files with no primary Python WSGI application being present, where the static files are located in the ‘/data’ directory.
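
In other words, when the container is started, what effectively gets run is much the same command as we used by hand earlier, just with the extra defaults added by the base image. The exact set of options the base image adds is an implementation detail, but it amounts to something along the lines of:

$ mod_wsgi-express start-server --port 80 --log-to-terminal \
    --application-type static --document-root /data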

And that is all there is to it. Because the base image is already doing lots of magic, we only had to provide the absolute minimum necessary, taking advantage of the fact that the base image is already employing all necessary best practices and smarts to make things work.

For the complete source code for this S2I builder image for Lektor you can see:

  • https://github.com/GrahamDumpleton/s2i-lektor

A Docker image corresponding to Lektor 1.1 is also already up on the Docker Hub registry as ‘grahamdumpleton/s2i-lektor:1.1’. As such, we can now run ‘s2i’ as:

$ s2i build https://github.com/GrahamDumpleton/lektor-empty-site.git grahamdumpleton/s2i-lektor:1.1 my-lektor-site
---> Installing application source
---> Building application from source
-----> Running .whiskey/action_hooks/build
Started build
U index.html
U about/index.html
U projects/index.html
U blog/index.html
U static/style.css
U blog/first-post/index.html
Finished build in 0.08 sec
Started prune
Finished prune in 0.00 sec
$ docker run --rm -p 8080:80 my-lektor-site
---> Executing the start up script
[Sun Jan 17 12:28:03.698888 2016] [mpm_event:notice] [pid 17:tid 140541365122816] AH00489: Apache/2.4.18 (Unix) mod_wsgi/4.4.21 Python/2.7.11 configured -- resuming normal operations
[Sun Jan 17 12:28:03.699328 2016] [core:notice] [pid 17:tid 140541365122816] AH00094: Command line: 'httpd (mod_wsgi-express) -f /tmp/mod_wsgi-localhost:80:1001/httpd.conf -E /dev/stderr -D MOD_WSGI_STATIC_ONLY -D MOD_WSGI_MPM_ENABLE_EVENT_MODULE -D MOD_WSGI_MPM_EXISTS_EVENT_MODULE -D MOD_WSGI_MPM_EXISTS_WORKER_MODULE -D MOD_WSGI_MPM_EXISTS_PREFORK_MODULE -D FOREGROUND'

Testing our site with ‘curl’ we get:

$ curl $(docker-machine ip default):8080
<!doctype html>
<meta charset="utf-8">
<link rel="stylesheet" href="./static/style.css">
<title>Welcome to Empty Site! — Empty Site</title>
<body>
<header>
<h1>Empty Site</h1>
<nav>
<ul class="nav navbar-nav">
<li class="active"><a href="./">Welcome</a></li>
<li><a href="./blog/">Blog</a></li>
<li><a href="./projects/">Projects</a></li>
<li><a href="./about/">About</a></li>
</ul>
</nav>
</header>
<div class="page">
<h2>Welcome to Empty Site!</h2>
<p>This is a basic demo website that shows how to use Lektor for a basic
website with some pages and a blog.</p>

</div>
<footer>
&copy; Copyright 2016 by Graham Dumpleton.
</footer>
</body>

Integration with OpenShift

As seen, the S2I system gives us a really easy way to produce a Docker image, not only for your own custom Python web application where you provide the source code, but also for scenarios where you might simply be combining existing data with an existing application. We did something like the latter with Lektor, although we actually also generated the required data to be hosted by the web server as part of the build process.

When running the ‘s2i’ program we were also able to use source files in a local directory, or from a remote Git repository. Even so, this still only gives us a Docker image and we would need to host that somewhere.

For most Docker based deployment systems, this would entail needing to push your Docker image from your own system, or a CI/CD system, to a Docker registry. The hosting service would then need to pull that image from the Docker registry in order to deploy it as a live web application.
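
That part of the workflow would look something like the following, with the registry host name here being just a placeholder:

$ docker tag my-lektor-site registry.example.com/graham/my-lektor-site
$ docker push registry.example.com/graham/my-lektor-site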

If however using the latest OpenShift things are even simpler. This is because OpenShift integrates support for S2I.

Under OpenShift, all I need to do to deploy my Lektor based blog site is:

$ oc new-app grahamdumpleton/s2i-lektor:1.1~https://github.com/GrahamDumpleton/lektor-empty-site.git --name blog
--> Found Docker image a95cedc (17 hours old) from Docker Hub for "grahamdumpleton/s2i-lektor:1.1"
* An image stream will be created as "s2i-lektor:1.1" that will track this image
* A source build using source code from https://github.com/GrahamDumpleton/lektor-empty-site.git will be created
* The resulting image will be pushed to image stream "blog:latest"
* Every time "s2i-lektor:1.1" changes a new build will be triggered
* This image will be deployed in deployment config "blog"
* Port 80/tcp will be load balanced by service "blog"
--> Creating resources with label app=blog ...
ImageStream "s2i-lektor" created
ImageStream "blog" created
BuildConfig "blog" created
DeploymentConfig "blog" created
Service "blog" created
--> Success
Build scheduled for "blog" - use the logs command to track its progress.
Run 'oc status' to view your app.

$ oc expose service blog
route "blog" exposed

I can then access the blog site at the host name which OpenShift has assigned it. If I have my own host name, then I just need to edit the route which was created to make the blog site public to add in my own host name instead.
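
For example, had I wanted a specific host name from the outset, the route could have been created with one, the host name here being a placeholder:

$ oc expose service blog --hostname=blog.example.com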

In this case I needed to use the OpenShift command line tool to create my blog site, but we can also load a definition into our OpenShift project which will allow us to build our blog site directly from the OpenShift UI.

This definition is provided as part of the ‘s2i-lektor’ project on GitHub and so to load it we just run:

$ oc create -f https://raw.githubusercontent.com/GrahamDumpleton/s2i-lektor/master/lektor.json
imagestream "lektor" created

If we now go to the OpenShift UI for our project we have the option of adding a Lektor based site.

[Image: OpenShift add to project - lektor]

Clicking through on the ‘lektor:1.1’ entry we can now fill out the details for the label to be given to our site and the location of the Git repository which contains the source files.

[Image: OpenShift lektor parameters]

Upon clicking on ‘Create’ it will then go off and build our Lektor site, including making it publicly accessible.

[Image: OpenShift lektor service]

By default only a single instance of our site will be created, but if it were an extremely popular site, then to handle all the traffic we would just increase the number of pods (instances) running. When a web application is scaled in this way, OpenShift will automatically handle all the load balancing of traffic across the multiple instances. We do not need to worry ourselves about needing to set up any front end router or deal with registration of the back end instances with the router.
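
Scaling up can be done through the OpenShift UI, or from the command line with something like:

$ oc scale dc/blog --replicas=3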

When it comes to making changes to our site and redeploying it we have a few options.

[Image: OpenShift lektor build]

We could manually trigger a rebuild of the site through the UI or the command line after we have pushed up our changes to GitHub, or we could instead link the application in OpenShift with our GitHub repository. To do the latter we would configure a web hook into our repository on GitHub. What will happen then is that every time a change is made and pushed up to the Git repository, the application on OpenShift will be automatically rebuilt and redeployed for us.
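
The web hook URL which needs to be added to the GitHub repository settings can be found from the build configuration which was created, for example with:

$ oc describe bc/blog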

We have now achieved the goal I was after and have a complete workflow in place. All that I would have to worry about is updating the content of the blog site and pushing up the changes to my Git repository when I am happy for them to be published.

Trying out OpenShift yourself

Although I showed a full end to end workflow combining Docker, S2I and OpenShift, if you aren’t interested in the OpenShift part you can definitely still use S2I with a basic Docker service. You would just need to incorporate it into an existing CI/CD pipeline.

If you are interested in the new OpenShift based on Docker and Kubernetes and want to experiment with it, then you have a few options. These are:

  • OpenShift Origin - This is the Open Source upstream project for the OpenShift products by Red Hat.
  • AWS Test Drive - This is an instance of OpenShift Enterprise which you can spin up and try on Amazon Web Services.
  • All In One VM - This is a self contained VM which you can spin up with VirtualBox on your own machine.

If you do decide to try OpenShift and my Lektor S2I builder do let me know. I also have an S2I builder for creating IPython notebook server instances as well. The IPython S2I builder can pull your notebooks and any files it needs from a Git repository just like how the Lektor S2I builder does for a Lektor site. It is also possible with the IPython images to spin up a backend IPython cluster with as many engines as you need if wishing to play around with parallel computing with ‘ipyparallel’.

Unfortunately right now the existing OpenShift Online PaaS offering from Red Hat is still the older OpenShift version and so is not based around Docker and Kubernetes. Hopefully it will not be too much longer before a version of OpenShift Online using Docker and Kubernetes is available. That should make it a lot easier to experiment with the features of the new OpenShift and see how easy it can be to get a web site hosted, like the Lektor example shown here.

Thursday, January 14, 2016

Python virtual environments and Docker.

When creating a Docker base image for running Python applications, you have various choices for how you can get Python installed. You can install whatever Python version is supplied by your operating system. You can use Python packages from separate repositories such as the Software Collections (SCL) repository for CentOS, or the dead snakes repository for Debian. Alternatively, you could install Python from source code.

Once you have your base image constructed, you then need to work out the strategy you are going to use for installing any Python modules you require in a derived image for your Python application. You could also source these from operating system package repositories, or you could instead install them from the Python Package Index (PyPi).

How you go about installing Python packages can complicate things though and you might get some unexpected results.

The purpose of this blog post is to go through some of the issues that can arise, what best practices are to deal with them and whether a Python virtual environment should be used.

Installing Python packages

There are two primary options for installing Python packages.

If you are using the operating system supplied Python installation, or are using the SCL repository for CentOS, then many Python modules may also be packaged up for those systems. You can therefore use the operating system packaging tools to install them that way.

The alternative is to install packages from PyPi using ‘pip’. So rather than coming from the operating system package repository as a pre-built package, the software for a package is pulled down from PyPi, unpacked, compiled if necessary and then installed.

Problems will arise though when these two different methods are used in the one Python installation.

To illustrate the problem, consider the scenario where you are using the SCL repository for CentOS.

Installing:
python27 x86_64 1.1-20.el7 centos-sclo-rh 4.8 k
Installing for dependencies:
dwz x86_64 0.11-3.el7 base 99 k
iso-codes noarch 3.46-2.el7 base 2.7 M
perl-srpm-macros noarch 1-8.el7 base 4.6 k
python27-python x86_64 2.7.8-3.el7 centos-sclo-rh 81 k
python27-python-babel noarch 0.9.6-8.el7 centos-sclo-rh 1.4 M
python27-python-devel x86_64 2.7.8-3.el7 centos-sclo-rh 384 k
python27-python-docutils noarch 0.11-1.el7 centos-sclo-rh 1.5 M
python27-python-jinja2 noarch 2.6-11.el7 centos-sclo-rh 518 k
python27-python-libs x86_64 2.7.8-3.el7 centos-sclo-rh 5.6 M
python27-python-markupsafe x86_64 0.11-11.el7 centos-sclo-rh 25 k
python27-python-nose noarch 1.3.0-2.el7 centos-sclo-rh 274 k
python27-python-pip noarch 1.5.6-5.el7 centos-sclo-rh 1.3 M
python27-python-pygments noarch 1.5-2.el7 centos-sclo-rh 774 k
python27-python-setuptools noarch 0.9.8-5.el7 centos-sclo-rh 400 k
python27-python-simplejson x86_64 3.2.0-3.el7 centos-sclo-rh 173 k
python27-python-sphinx noarch 1.1.3-8.el7 centos-sclo-rh 1.1 M
python27-python-sqlalchemy x86_64 0.7.9-3.el7 centos-sclo-rh 2.0 M
python27-python-virtualenv noarch 1.10.1-2.el7 centos-sclo-rh 1.3 M
python27-python-werkzeug noarch 0.8.3-5.el7 centos-sclo-rh 534 k
python27-python-wheel noarch 0.24.0-2.el7 centos-sclo-rh 76 k
python27-runtime x86_64 1.1-20.el7 centos-sclo-rh 1.1 M
redhat-rpm-config noarch 9.1.0-68.el7.centos base 77 k
scl-utils-build x86_64 20130529-17.el7_1 base 17 k
xml-common noarch 0.6.3-39.el7 base 26 k
zip x86_64 3.0-10.el7 base 260 k

The only package we installed here was ‘python27’, yet because of dependencies listed for that package, Python modules for Jinja2, Werkzeug, SQLAlchemy and others, often used in Python web applications, were also installed.

The reason this can be an issue is that versions of Python software from such repositories are often not the latest and are potentially quite out of date versions.

Take ‘Jinja2’ for instance: the most up to date version available at this time from PyPi is version 2.8. The version which was installed when we installed the ‘python27’ package was the much older version 2.6.

Remember now that this Docker image is intended to be used as a base image and users will install any Python modules on top. If one of the Python modules they required was ‘Jinja2’ and they installed it, they may not get what they expect.

$ pip install Jinja2
Requirement already satisfied (use --upgrade to upgrade): Jinja2 in /opt/rh/python27/root/usr/lib/python2.7/site-packages
Cleaning up...
$ pip freeze
Babel==0.9.6
Jinja2==2.6
MarkupSafe==0.11
Pygments==1.5
SQLAlchemy==0.7.9
Sphinx==1.1.3
Werkzeug==0.8.3
docutils==0.11
nose==1.3.0
simplejson==3.2.0
virtualenv==1.10.1
wheel==0.24.0
wsgiref==0.1.2

What happened was that when ‘pip’ was run, it already found that ‘Jinja2’ had been installed and so skipped installing it again.

In the end, although the user was likely expecting to get the most up to date version of ‘Jinja2’, that isn’t what happened and they were left with version 2.6.

Because the fact that installing a newer version was skipped was only a warning and doesn’t create an error, the user would likely be oblivious to what happened. They would only find out when their application is running and starts misbehaving or giving errors due to their application being coded for the API of a newer version. 

Forced updates and pinning

One possible solution to this problem is to always ensure that you supply the ‘-U’ or ‘--upgrade’ option to ‘pip’ when it is run. This will force an update and reinstallation of the Python modules being installed to the latest version even if they are already installed.

$ pip install -U Jinja2
Downloading/unpacking Jinja2 from https://pypi.python.org/packages/2.7/J/Jinja2/Jinja2-2.8-py2.py3-none-any.whl#md5=75acb6f1abfc46ed75f4fd392f321ac2
Downloading Jinja2-2.8-py2.py3-none-any.whl (263kB): 263kB downloaded
Downloading/unpacking MarkupSafe from https://pypi.python.org/packages/source/M/MarkupSafe/MarkupSafe-0.23.tar.gz#md5=f5ab3deee4c37cd6a922fb81e730da6e (from Jinja2)
Downloading MarkupSafe-0.23.tar.gz
Running setup.py (path:/tmp/pip-build-Sd86kU/MarkupSafe/setup.py) egg_info for package MarkupSafe
Installing collected packages: Jinja2, MarkupSafe
Running setup.py install for MarkupSafe
building 'markupsafe._speedups' extension
...
Successfully installed Jinja2 MarkupSafe
Cleaning up...
 
$ pip freeze
Babel==0.9.6
Jinja2==2.8
MarkupSafe==0.23
Pygments==1.5
SQLAlchemy==0.7.9
Sphinx==1.1.3
Werkzeug==0.8.3
docutils==0.11
nose==1.3.0
simplejson==3.2.0
virtualenv==1.10.1
wheel==0.24.0
wsgiref==0.1.2

Although this will ensure we have the latest version, it has the potential to cause other problems.

The issue here is if a newer version of a package which was installed had a backwards incompatible change. This could cause a failure if the other already installed packages which used that package weren’t also updated to versions able to cope with the change. If a command was then run which used one of those packages that was not updated, it would fail when trying to use the now incompatible package.

Pinning packages by specifying a specific version on the command line when running ‘pip’, or in a ‘requirements.txt’ file doesn’t really help either. This is because any update to a newer version, regardless of whether it is the latest or not, risks causing a failure of a Python module installed due to some dependency in an operating system package.
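
By pinning here I simply mean nominating exact versions, for example a ‘requirements.txt’ file containing:

Jinja2==2.8
MarkupSafe==0.23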

A further concern when using ‘pip’ to install a newer version of a Python module that already exists is the fact that you are replacing files which may have been installed by a package when using the system packaging tools. In general this wouldn’t be an issue when using Docker as you wouldn’t ever subsequently remove a package installed using the system packaging tools in the life of that Docker container. It is still though not ideal that you are updating the same Python module and files with different packaging tools.

The short of it is that it is simply bad practice to use ‘pip’ to install Python modules into a Python installation, setup using system packaging tools, which has the same Python modules already installed that you are trying to add.

Per user package directory

If installing Python modules using ‘pip’ into the same directory as an existing version installed using the system packaging tools is a problem, what about using the per user package directory?

When using ‘pip’ this can be achieved using the ‘--user’ option. There is similarly a ‘--user’ option that can be used when running a ‘setup.py’ file for a package when installing it.

In both cases, rather than the Python modules being installed into the main ‘site-packages’ directory of the Python installation, they are installed into a special directory in the users home directory. For Linux this directory is located at:

$HOME/.local/lib/pythonX.Y/site-packages

The ‘X.Y’ will depend on the version of Python being used.

Although this at least eliminates conflicts where ‘pip’ could replace files installed by the system packaging tools, it doesn’t resolve the issue of an already installed package not being updated unless you again resort to using the ‘-U’ or ‘--upgrade’ option to ‘pip’.

$ pip install --user Jinja2
Requirement already satisfied (use --upgrade to upgrade): Jinja2 in /opt/rh/python27/root/usr/lib/python2.7/site-packages
Cleaning up...

Per user package directories are therefore not really a solution either.

Python virtual environments

It is in part because of the problems arising when trying to use a single common Python installation with multiple applications, where different Python module versions were required, that the idea of an isolated Python virtual environment came about. The most popular such tool for creating an isolated Python environment is ‘virtualenv’.

Although the intent with Docker is that it would only hold and run the one application, could we still use a Python virtual environment anyway, thus avoiding the problems described?

$ virtualenv venv
New python executable in venv/bin/python2
Also creating executable in venv/bin/python
Installing Setuptools...done.
Installing Pip...done.
$ source venv/bin/activate
(venv)$ pip install Jinja2
Downloading/unpacking Jinja2
Downloading Jinja2-2.8.tar.gz (357kB): 357kB downloaded
Running setup.py egg_info for package Jinja2
Downloading/unpacking MarkupSafe (from Jinja2)
Downloading MarkupSafe-0.23.tar.gz
Running setup.py egg_info for package MarkupSafe
Installing collected packages: Jinja2, MarkupSafe
Running setup.py install for Jinja2
Running setup.py install for MarkupSafe
building 'markupsafe._speedups' extension
...
Successfully installed Jinja2 MarkupSafe
Cleaning up...

And the answer is a most definite yes.

The reason that the Python virtual environment works is because it creates its own fresh ‘site-packages’ directory for installing new Python modules which is independent of the ‘site-packages’ directory of the main Python installation.

$ pip freeze
Jinja2==2.8
MarkupSafe==0.23
wsgiref==0.1.2

The only Python packages that will be found in the ‘site-packages’ directory will be ‘pip’ itself, and the couple of packages it requires such as ‘setuptools’, and the additional packages that we install. There is therefore no danger of conflicts with any packages that were installed into the main Python installation by virtue of operating system packages.

Note that this wasn’t always the case. Originally the ‘virtualenv’ tool when creating a Python virtual environment would add to what was in the Python ‘site-packages’ directory, rather than overriding it. Back then it was necessary to provide the ‘--no-site-packages’ option to ‘virtualenv’ to have it work as it does now. The default was changed because the sorts of problems described here would still occur if the ‘site-packages’ directory wasn’t completely distinct.

The option does exist in ‘virtualenv’ to set up the virtual environment with the old behaviour (the ‘--system-site-packages’ option), but you really do not want to go there.

And if you are wondering why ‘wsgiref’ shows up in the ‘pip freeze’ above even though it wasn’t installed as a separate package: it seems that even though ‘wsgiref’ is part of the Python standard library, ‘setuptools’ still thinks it is a distinct versioned package when using Python 2.7. As much as it is a bit confusing, its presence in the output of ‘pip freeze’ can be ignored. You may want to ensure you don’t list ‘wsgiref’ in a ‘requirements.txt’ file though, as you might then accidentally install it from PyPi, which could be an older version of the source code than what the Python standard library contains.

Building Python from source

So although we are only bundling up the one application in the Docker container, if using a Python version which is installed from an operating system package repository, or an associated package repository, a Python virtual environment is very much recommended to avoid problems.

What now for the case where the Python version has been installed from source code?

Well because we are compiling from source code and would be installing into a location distinct from the main system Python, there is no possibility for additional Python packages to get installed. The ‘site-packages’ directory would therefore be empty.

In order to allow further packages to be installed, we would at least install ‘pip’, but as for a Python virtual environment only ‘pip’ and what it requires such as ‘setuptools’ would be installed. Any further Python modules would also be installed using ‘pip’ and so there is no conflict with any Python modules installed via system packaging tools.

This means that strictly speaking if installing Python from source code in a Docker container, we could skip the use of a Python virtual environment. The only issue, depending on what user was used to install this Python version, would be whether file system permissions need to be fixed up to allow an arbitrary user to install subsequent Python modules during any build phase for the application in creating a derived image.
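
As a hedged sketch only, fixing up the permissions in the ‘Dockerfile’ might look something like the following, with the installation prefix being whatever was used when building Python from source:

# Allow an arbitrary (non root) user in the root group to install
# additional Python modules during a later build phase for a derived image.
RUN chgrp -R 0 /usr/local/lib/python2.7/site-packages && \
    chmod -R g+w /usr/local/lib/python2.7/site-packages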

Setting up a virtual environment

What we therefore have is that if you are using a version of Python which comes as part of the operating system packages, or which is installed from a companion repository such as SCL for CentOS, you should really always ensure that you use a Python virtual environment. Do not attempt to install Python modules using ‘pip’ into the ‘site-packages’ directory for the main Python installation and avoid using Python packages from the operating system package repository.

If you have compiled Python from source code as part of your Docker image, there is no strict need to use a Python virtual environment. Using tools like ‘pip’ to install Python modules directly into the ‘site-packages’ directory of the separate Python installation should be fine.

The question now is what is the best way to set up any Python virtual environment when it is used. What file system permissions should be used to allow both ‘root’ and non ‘root’ users to install packages? How should the Python virtual environment be activated or enabled so that it is always used in the Docker container?

All these issues I will discuss in a followup to this blog post.

Monday, January 4, 2016

Roundup of Docker issues when hosting IPython.

Over the last two weeks I have posted a total of six blog posts in a series about what I encountered when attempting to run IPython on Docker. Getting it to run wasn’t straightforward because I wanted to run it in a hosting service which doesn’t permit you to run your application as ‘root’ inside of the container.

The reason for not being able to run the application as ‘root’ was because Docker at this time still does not support Linux user namespaces as a main line feature. As such, a hosting service is unable to set up Docker to remap users so that, although you would be running as ‘root’ inside of the container, you would be running as an unprivileged user on the Docker host. For a hosting service, allowing users to run as ‘root’, even if notionally restricted to the context of a container, is too big a risk where you are allowing users to run unknown and untrusted code. 

User namespace support is destined for Docker, but is only an experimental feature at this time. Even so, the initial features being added only allow user ID mapping to be performed at the daemon level. That is, it will not be possible to map user IDs differently for each container.

The inability to map user IDs at the container level will, if the hosting service provides support for persistent data volumes, likely mean that a hosting service using Docker will still not be able to make use of the feature to relax the current restrictions on running as the ‘root’ user. This is because in a multi tenant environment where you have unrelated user’s applications running, you ideally want each user to have a unique user ID range outside of the Docker container. Trying to manage a unique user ID range for each distinct user with only daemon level user ID mapping may well be impractical.

As for Docker being able to map user IDs differently for each container, this is dependent on changes being made in the Linux kernel. As far as I know there is still no timeframe for when these changes will be ready. The sometimes slow adoption of new kernel versions by Linux variants means that even when the kernel has been updated, it may be some time after that that any new kernel version is generally available.

You may still be thinking that since user namespaces will eventually solve all these problems, the need to make containers run as a non privileged user can be ignored. For now though, that will only be the case if you are only ever going to run your Docker image on your own infrastructure.

If you intend to run your Docker image on a hosting service, or make it available on Docker Hub registry for others to use, you really should consider updating your Docker image so it doesn’t require being run as the ‘root’ user. By doing so you will be making it usable on the most number of platforms for running Docker images. The need for this is unlikely to change soon.

So right now this is the problem I faced with trying to use IPython. The Docker image for ‘jupyter/notebook’ is designed to be run as ‘root’ and as a result is unusable on a Docker platform which prohibits running as ‘root’, such as is the case for OpenShift.

Summary of issues encountered

Let’s now do a roundup of the posts and the different issues that needed to be addressed in trying to make the IPython image run as an unprivileged user.

This got everything started. We had no issues with getting ‘ipython/nbviewer’, a static viewer for IPython notebooks, running on OpenShift. In looking at the most obvious candidate image for running a live IPython notebook, that is ‘ipython/ipython’, we found it was actually deprecated, and when we dug into the ‘Dockerfile’ we found it points you towards using ‘jupyter/notebook’ instead. The information available on the Docker Hub Registry for these images is well overdue for an update, as coming in via that path it wasn’t at all clear that the image shouldn’t be used and what to use in its place. Even when using ‘jupyter/notebook’, we found that it fails to run as a non privileged user due to file system permission issues.

To understand why one should run as a non privileged user in the first place, or why a hosting service may enforce it, we looked next at the dangers of running as the ‘root’ user inside of a Docker container. It was demonstrated how it was quite easy to gain root privileges in the Docker host were an untrusted user allowed to mount arbitrary volumes from the Docker host into a container. Although a hosting service may not expose the Docker API directly, via the ‘docker’ client or otherwise, and hide it behind another tool or user interface, it is still probably wiser not to allow users to run as ‘root’, especially when in nearly all cases it isn’t necessary.

Overriding of the user that a Docker container runs as can be done from a ‘Dockerfile’, or when ‘docker run’ is used to actually start the container. The latter will even override what may be specified in the ‘Dockerfile’. A hosting service may well always override the user the Docker container runs as due to the fact that where the user is specified in the ‘Dockerfile’, if it isn’t specified as an integer user ID, then what that user is cannot be trusted. Where persistent volumes are offered by a hosting service, it may well want to enforce a specific user ID be used due to the current lack of an ability to map user IDs.

Having the user ID be overridden presented further problems even where the Docker image had been setup to run as a specific non ‘root’ user. This was because the associated group for the user and the corresponding file system permissions that the Docker image was set up for, didn’t allow the user specified when overridden, to write to parts of the file system such as the home directory of the user. It was therefore necessary to override the ‘HOME’ directory in the ‘Dockerfile’ and be quite specific in how the default user account and its corresponding group were setup.

Although we could fix up things so they would run as a random user ID, a remaining problem was that the user ID didn’t have an actual entry in the system password database. This meant that attempts to look up details for the user would fail. This could cause some applications to give unexpected results or cause a web application to fail. It was necessary to use a user level library, preloaded into programs, for overriding what details were returned when programs looked up user details.
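
One user level library which can be used for this purpose is ‘nss_wrapper’. A minimal sketch of how it gets enabled follows; the library path and the generated passwd entry are illustrative only and will differ between Linux distributions:

$ echo "default:x:$(id -u):0:Default user:/app:/bin/bash" > /tmp/passwd
$ export NSS_WRAPPER_PASSWD=/tmp/passwd
$ export NSS_WRAPPER_GROUP=/etc/group
$ export LD_PRELOAD=/usr/lib64/libnss_wrapper.so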

Not an issue with running on a hosting service which prohibited running as ‘root’, but a further issue which affects the IPython notebook server is that it will fail to start up IPython kernel processes when it is run as process ID ‘1’. To work around this issue it was necessary to use a minimal ‘init’ process as process ID ‘1’, which would in turn start the IPython notebook server, reaping any zombie processes and passing on signals received to the IPython notebook server process.

Testing your own Python images

Although you yourself may not make use of a hosting service which prohibits the running of Docker containers as ‘root’, you can still test this scenario and whether your Docker images will work when forced to run as an unprivileged user.

To do this all you need to do is override the user the Docker container is run as when you invoke the ‘docker run’ command. Specifically, add the ‘-u’ option to ‘docker run’, giving it a high user ID which doesn’t have a corresponding user ID in the system user database of the Linux installation within the Docker image.

docker run --rm -u 10000 -p 8080:8080 my-docker-image

If your application doesn’t run due to file system access permissions, or because it fails in looking for details of a non existent user at some point, you will need to make changes to your Docker image for it to work in this scenario.

In addition to checking whether your Docker image can run as a non privileged user, you should also validate whether the application can stand in properly for the ‘init’ process that would normally run as process ID ‘1’. That is, whether it will reap zombie processes properly.

In a prior blog post I showed a Python WSGI application which could be used to test a Python web server. I did it as a Python WSGI application to show that it can happen within the context of an actual Python web application. There is though actually a simpler way which can be used even if you are not using Python.

To test whether whatever process is running inside of the Docker container as process ID ‘1’ is reaping zombie processes properly, all you need to do is use ‘docker exec’ to gain access to the running Docker container and run ‘sleep’ as a background process in a sub shell.

$ docker exec -it admiring_lalande bash
root@cde42f97d683:/app# (sleep 10&)

If after waiting the 10 seconds for the ‘sleep’ to finish, you find a zombie process as child to process ID ‘1’, then whatever you are running as process ID ‘1’ is not reaping child processes correctly.

[Image: Docker container top - wsgiref sleep]

These therefore are two simple tests you can do to make sure your own Docker images can run as a non privileged user and that you will not have issues due to zombie processes. If you do have issues, then look back through the prior blog posts for the changes you may need to make.

Tuesday, December 29, 2015

Issues with running as PID 1 in a Docker container.

We are getting close to the end of this initial series of posts on getting IPython to work with Docker and OpenShift. In the last post we finally got everything working in plain Docker when a random user ID was used and consequently also under OpenShift.

Although we covered various issues and had to make changes to the existing ‘Dockerfile’ used with the ‘jupyter/notebook’ image to get it all working correctly, there was one issue that the Docker image for ‘jupyter/notebook’ had already addressed which needs a bit of explanation. This related to the existing ‘ENTRYPOINT’ statement used in the ‘Dockerfile’ for ‘jupyter/notebook’.

ENTRYPOINT ["tini", "--"]
CMD ["jupyter", "notebook"]

Specifically, the ‘Dockerfile’ was wrapping the running of the ‘jupyter notebook’ command with the ‘tini’ command.

Orphaned child processes

For a broader discussion on the problem that the use of ‘tini’ is trying to solve you can read the post ‘Docker and the PID 1 zombie reaping problem’.

In short though, process ID 1, which is normally the UNIX ‘init’ process, has a special role in the operating system. When the parent of a process exits before its child processes, those child processes become orphans and have their parent process remapped to be process ID 1. When those orphaned processes then finally exit and their exit status is available, it is the job of the process with process ID 1 to acknowledge the exit of the child processes so that their process state can be correctly cleaned up and removed from the system kernel process table.

If this cleanup of orphaned processes does not occur, then the system kernel process table will over time fill up with entries corresponding to the orphaned processes which have exited. Any processes which persist in the system kernel process table in this way are what are called zombie processes. They will remain there so long as no process performs the equivalent of a system ‘waitpid()’ call on that specific process to retrieve its exit status and so acknowledge that the process has terminated.
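
For reference, and independent of any particular web server, a minimal sketch in Python of the sort of reaping a process running as process ID 1 needs to perform is:

import errno
import os
import signal

def reap_children(signum, frame):
    # Reap any child processes which have exited, including orphaned
    # processes which were reparented to us, so zombies do not accumulate.
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except OSError as exc:
            if exc.errno == errno.ECHILD:
                break              # no child processes at all
            raise
        if pid == 0:
            break                  # children exist, but none have exited yet

signal.signal(signal.SIGCHLD, reap_children)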

Process ID 1 under Docker

Now you may be thinking, what does this have to do with Docker? After all, aren’t processes running in a Docker container just ordinary processes in the operating system, simply walled off from the rest of the operating system?

This is true, and if you were to run a Docker container which executed a simple single process Python web server, if you look at the process tree on the Docker host using ‘top’ you will see:

[Image: Docker host top - wsgiref idle]

Process ID ‘26196’ here actually corresponds to the process created from the command that we used as the ‘CMD’ in the ‘Dockerfile’ for the Docker image.

Our process isn’t therefore running as process ID 1, so why is the way that orphaned processes are handled even an issue?

The reason is that if we were to instead look at what processes are running inside of our container, we can only see those which are actually started within the context of the container.

Further, rather than those processes using the same process ID as they are really running as when viewed from outside of the container, the process IDs have been remapped. In particular, processes created inside of the container, when viewed from within the container, have process IDs starting at 1.

[Image: Docker container top - wsgiref idle]

Thus the very first process created due to the execution of what is given by ‘CMD’ will be identified as having process ID 1. This process is still though the same as identified by process ID ‘26196’ when viewed from the Docker host.

More importantly, what you cannot see from with inside of the container is what was the original process with the process ID of ‘1’ outside of the container. That is, you cannot see the system wide ‘init’ process.

Logically it isn’t therefore possible to reparent an orphaned process created within the container to a process not even visible inside of the container. As such, orphaned processes are reparented to the process with process ID of ‘1’ within the container. The obligation of reaping the resulting zombie processes therefore falls to this process and not the system wide ‘init’ process.

Testing for process reaping

In order to delve more into this issue and in particular its relevance to when running a Python web server, as a next step lets create a simple Python WSGI application which can be used to trigger orphan processes. Initially we will use the WSGI server implemented by the ‘wsgiref’ module in the Python standard library, but we can also run it up with other WSGI servers to see how they behave as well.

from __future__ import print_function

import os

def orphan():
    # Grandchild process: exit immediately, after our parent has already
    # exited, so we become an orphaned process.
    print('orphan: %d' % os.getpid())
    os._exit(0)

def child():
    # Child process: fork a grandchild but exit without waiting on it,
    # leaving the grandchild to be reparented to process ID 1.
    print('child: %d' % os.getpid())
    newpid = os.fork()
    if newpid == 0:
        orphan()
    else:
        pids = (os.getpid(), newpid)
        print("child: %d, orphan: %d" % pids)
        os._exit(0)

def parent():
    # Web application process: fork a child and wait on it, but never
    # wait on the orphaned grandchild it leaves behind.
    newpid = os.fork()
    if newpid == 0:
        child()
    else:
        pids = (os.getpid(), newpid)
        print("parent: %d, child: %d" % pids)
        os.waitpid(newpid, 0)

def application(environ, start_response):
    status = '200 OK'
    output = b'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    parent()
    return [output]

from wsgiref.simple_server import make_server

httpd = make_server('', 8000, application)
httpd.serve_forever()

The way the test runs is that each time a web request is received, the web application process will fork twice. The web application process itself will be made to wait on the exit of the child process it created. That child process though will not wait on the further child process it had created, thus creating an orphaned process as a result.

Building this test application into a Docker image, with no ‘ENTRYPOINT’ defined and only a ‘CMD’ which runs the Python test application, when we hit it with half a dozen requests what we then see from inside of the Docker container is:

[Image: ‘top’ output inside the Docker container showing zombie child processes of the wsgiref server]

For a WSGI server implemented using the ‘wsgiref’ module from the Python standard library, this indicates that no reaping of the zombie process is occurring. Specifically, you can see how our web application process running as process ID ‘1’ now has various child processes associated with it where the status of each process is ‘Z’ indicating it is a zombie process waiting to be reaped. Even if we wait some time, these zombie processes never go away.

If we look at the processes from the Docker host we see the same thing.

[Image: ‘top’ output on the Docker host showing the same zombie processes]

This therefore confirms what was described, which is that the orphaned processes will be reparented against what is process ID ‘1’ within the container, rather than what is process ID ‘1’ outside of the container.

One thing that is hopefully obvious is that a WSGI server based off the ‘wsgiref’ module sample server in the Python standard library doesn’t do the right thing, and running it as the initial process in a Docker container would not be recommended.

Behaviour of WSGI servers

If a WSGI server based on the ‘wsgiref’ module sample server isn’t okay, what about other WSGI servers? And what about ASYNC web servers for Python such as Tornado?

The outcome from running the test WSGI application on the most commonly used WSGI servers, and also equivalent tests specifically for the Tornado ASYNC web server, Django and Flask builtin servers, yields the following results.

  • django (runserver) - FAIL
  • flask (builtin) - FAIL
  • gunicorn - PASS
  • Apache/mod_wsgi - PASS
  • tornado (async) - FAIL
  • tornado (wsgi) - FAIL
  • uWSGI - FAIL
  • uWSGI (master) - PASS
  • waitress - FAIL
  • wsgiref - FAIL

The general result here is that any Python web server that runs as a single process would usually not do what is required of a process running as process ID ‘1’. This is because they aren’t in any way designed to manage child processes. As a result, there isn’t even the chance that they may look for exited child processes and reap them.

Of note though, although uWSGI with its default options can run in a multi process configuration, its process management model in that mode is arguably broken. The philosophy with uWSGI seemingly is never to correct what it gets wrong, but to instead add an option which enables the correct behaviour. Users therefore have to opt in to the correct or better behaviour. For uWSGI, the more robust process management model is only enabled by using the ‘--master’ option. If using uWSGI you should always use that option, regardless of whether you are running it under Docker or not.

Both uWSGI in master mode and mod_wsgi, although they pass and will reap zombie processes when run as process ID ‘1’, work in a way that can be surprising.

The issue with uWSGI in master mode and mod_wsgi is that each only looks for exited child processes on a periodic basis. That is, they wake up about once a second, look for any child processes that have exited and collect their exit status, thereby reaping any zombie processes.

This means that during the one second interval, some number of zombie processes still could accumulate, the number depending on request throughput and how often a specific request does something that would trigger the creation of a zombie process. The number of zombie processes will therefore build up and then be brought back to zero each second.

Although this occurs for uWSGI in master mode and mod_wsgi, it shouldn’t in general cause an issue as no other significant code runs in the parent or master process which is managing all the child processes. Thus the presence of the zombie process as a child for a period will not cause any confusion. Further, zombie processes should still be reaped at an adequate rate, so temporary increases shouldn’t matter.
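
To illustrate the sort of periodic reaping being described, the following is a rough Python sketch of the pattern only; it is not the actual uWSGI or mod_wsgi code. The master wakes about once a second and collects the exit status of anything which has exited in the meantime.

import errno
import os
import time

def reap_children():
    # Collect the exit status of any child processes which have exited,
    # without blocking if none have.
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except OSError as exc:
            if exc.errno == errno.ECHILD:
                return    # no child processes exist at all
            raise
        if pid == 0:
            return        # children exist, but none have exited yet
        print('reaped child %d, status %d' % (pid, status))

# Master process loop: wake up roughly once a second, reap anything which
# has exited, then carry on with other management work.
while True:
    reap_children()
    time.sleep(1.0)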

Problems which can arise

As to what problems can actually arise due to this issue, there are a few at least.

The first is that if the process running as process ID ‘1’ does not reap zombie processes, then they will accumulate over time. If the container is for a long running service, then eventually the available slots in the system kernel process table could be used up. If this were to occur, the system as a whole would be unable to create any new processes.

How this plays out in practice within a Docker container I am not sure. If the only upper bound on the number of zombie processes that could be created within a Docker container were the system kernel process table size, then technically the creation of zombie processes could be used as an attack vector against the Docker host. I would therefore expect that Docker containers likely have some lower limit on the number of processes that can be created within the container, although things get complicated if a specific user has multiple containers. Hopefully someone can clarify this specific point for me.

The second issue is that the reparenting of processes against the application process running as process ID ‘1’ could confuse any process management mechanism running within that process. This could cause issues in a couple of ways.

For example, if the application process were using the ‘wait()’ system call to wait for any child process exiting, but the reported process ID wasn’t one that it was expecting and it didn’t handle that gracefully, it could cause the application process to fail in some way. Especially in the case where the ‘wait()’ call indicated that an exiting zombie process had a non zero exit status, it may cause the application process to think its directly managed child processes were having problems and failing in some way. Alternatively, if the orphaned processes weren’t themselves exiting straight away, and the now parent process operated by monitoring the set of child processes it had, this could itself confuse the parent process.
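
As a contrived sketch of that failure mode, purely illustrative and with made up names, consider a manager which believes it knows every child process ID it created:

import os

# Hypothetical manager which tracks the pids of workers it forked itself.
managed_pids = set()

def wait_for_worker():
    pid, status = os.wait()
    if pid not in managed_pids:
        # A reparented orphan from elsewhere in the container has exited.
        # A naive manager which doesn't allow for this might instead treat
        # it as one of its own workers crashing.
        print('ignoring unexpected child %d (status %d)' % (pid, status))
        return
    managed_pids.discard(pid)
    print('worker %d exited with status %d' % (pid, status))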

Finally getting back to the IPython example we have been working with, it has been found that when running the ‘jupyter notebook’ application as process ID ‘1’, it fails to properly start up the kernel processes used to run individual notebooks. The logged messages in this case are:

[I 10:19:33.566 NotebookApp] Kernel started: 1ac58cd9-c717-44ef-b0bd-80a377177918
[I 10:19:36.566 NotebookApp] KernelRestarter: restarting kernel (1/5)
[I 10:19:39.573 NotebookApp] KernelRestarter: restarting kernel (2/5)
[I 10:19:42.582 NotebookApp] KernelRestarter: restarting kernel (3/5)
[W 10:19:43.578 NotebookApp] Timeout waiting for kernel_info reply from 1ac58cd9-c717-44ef-b0bd-80a377177918
[I 10:19:45.589 NotebookApp] KernelRestarter: restarting kernel (4/5)
WARNING:root:kernel 1ac58cd9-c717-44ef-b0bd-80a377177918 restarted
[W 10:19:48.596 NotebookApp] KernelRestarter: restart failed
[W 10:19:48.597 NotebookApp] Kernel 1ac58cd9-c717-44ef-b0bd-80a377177918 died, removing from map.
ERROR:root:kernel 1ac58cd9-c717-44ef-b0bd-80a377177918 restarted failed!
[W 10:19:48.610 NotebookApp] Kernel deleted before session

I have been unable to find anyone who has worked out the specific cause, but I suspect it is falling foul of the second issue above. That is, the exit statuses from those orphaned processes are confusing the code managing the startup of the kernel processes, making it think the kernel processes are failing, causing it to attempt to restart them repeatedly.

Whatever the specific reason, not running the ‘jupyter notebook’ application as process ID ‘1’ avoids the problem, so it does at least appear to be related to the orphaned processes being reparented against the main ‘jupyter notebook’ process.

Now although for IPython it seems to relate to the second issue, whereby process management mechanisms are being confused, as shown above even generic Python WSGI servers or web servers don’t necessarily do the right thing either. So even though they might not have process management issues of their own, since they don’t manage child processes to implement a multi process configuration for the server itself, the accumulation of zombie processes could still eventually cause the maximum number of allowed processes to be exceeded.

Shell as parent process

Ultimately the solution is to never run, as process ID ‘1’ inside of the container, an application process which isn’t also designed to reap child processes.

There are two ways to avoid this. The first is a quick hack and one which is often seen used in Docker containers, although perhaps not intentionally. Although it avoids the zombie reaping problem, it causes its own issues.

The second way is to run as process ID ‘1’ a minimal process whose only role is to execute as a child process the real application process and then subsequently reap the zombie processes.

This minimal init process of the second approach has one other important role as well though and it is this role where the quick hack solution fails.

As to the quick or inadvertent hack that some rely on, let’s look at how a ‘CMD’ in a ‘Dockerfile’ is specified.

The recommended way of using ‘CMD’ in a ‘Dockerfile’ would be to write:

CMD [ "python", "server_wsgiref.py" ]

This is what was used above, where from within the Docker container we saw:

[Image: ‘top’ output inside the Docker container with the wsgiref server idle]

As has already been explained, this results in our application running as process ID ‘1’.

Another way of using ‘CMD’ in a ‘Dockerfile’ is to write:

CMD python server_wsgiref.py

Our application still runs, but this isn’t doing the same thing as when we supplied a list of arguments to ‘CMD’.

The result in this case is:

[Image: ‘top’ output inside the Docker container with ‘/bin/sh’ as process ID 1]

With this way of specifying the ‘CMD’ our application is no longer running as process ID ‘1’. Instead process ID ‘1’ is occupied by an instance of ‘/bin/sh’.

This has occurred because supplying the plain command line to ‘CMD’ actually results in the equivalent of:

CMD [ "sh", "-c", "python server_wsgiref.py" ]

This is the reason a shell process is introduced into the process hierarchy as process ID ‘1’.

With our application now no longer running as process ID ‘1’, the responsibility of reaping zombie processes falls instead to the instance of ‘/bin/sh’ running as process ID ‘1’.

As it turns out, ‘/bin/sh’ will reap any child processes associated with it, so we do not have the problem of zombie processes accumulating.

Now this isn’t the only way you might end up with an instance of ‘/bin/sh’ being process ID ‘1’.

Another common scenario where this ends up occurring is where someone using Docker uses a shell script with the ‘CMD’ statement so that they can do special setup prior to actually running their application. You thus can often find something like:

CMD [ "/app/start.sh" ]

The contents of the ‘start.sh’ script might then be:

#!/bin/sh
python server_wsgiref.py

Using this approach, what we end up with is:

[Image: ‘top’ output inside the Docker container with the shell script as process ID 1]

Our script is listed as process ID ‘1’, although it is in reality still an instance of ‘/bin/sh’.

The reason our application didn’t end up as process ID ‘1’ in this case is that the final line of the script simply said ‘python server_wsgiref.py’.

Whenever using a shell script as a ‘CMD’ like this, you should always ensure that when running your actual application from the shell script, that you do so using ‘exec’. That is:

#!/bin/sh
exec python server_wsgiref.py

By using ‘exec’ you ensure that your application process takes over and replaces the script process, thus resulting in it running as process ID ‘1’.

But wait. If having process ID ‘1’ be an instance of ‘/bin/sh’, with our application as a child process of it, solves the zombie reaping problem, why not always do that?

The reason for this is that although ‘/bin/sh’ will reap zombie processes for us, it will not propagate signals properly.

For our example, what this means is that with ‘/bin/sh’ as process ID ‘1’, if we run the command ‘docker stop’, the application process will not actually shut down. Instead the default timeout for ‘docker stop’ will expire and it will then do the equivalent of ‘docker kill’, which will forcibly kill the application and the container.

This occurs because although the instance of ‘/bin/sh’ will receive the signal to terminate the application which is sent by ‘docker stop’, it ignores it and doesn’t pass it on to the actual application.

This in turn means that your application is denied the ability to be notified properly that the container is being shutdown and so ensure that it performs any required finalisation of in progress operations. For some applications, this lack of an ability to perform a clean shutdown could leave any persistent data in an inconsistent state, causing problems when the application is restarted.

It is therefore important that signals always be received by the main application process in a Docker container, but an intermediary shell process will not ensure that.

One can attempt to catch signals in the shell script and forward them on, but this does get a bit tricky, as you also have to ensure that you wait for the wrapped application process to shut down properly when it is passed a signal that would cause it to exit. As I have previously shown in an earlier post for other reasons, in such circumstances you might be able to use a shell script like:

#!/bin/sh

# Forward TERM and INT to the application process when they are received.
trap 'kill -TERM $PID' TERM INT

# Start the application in the background and remember its process ID.
python server_wsgiref.py &
PID=$!

# Wait for the application; a trapped signal interrupts this first 'wait'.
wait $PID

# Reset the traps and wait again so we collect the application's actual
# exit status once it has shut down, then propagate that status.
trap - TERM INT
wait $PID
STATUS=$?
exit $STATUS

To be frank though, rather than hoping this will work reliably, you are better off using a purpose built monitoring process for this particular task.

Minimal init process

Coming from the Python world, one solution that Python developers like to use for managing processes is ‘supervisord’. This should work, but is a relatively heavyweight solution. At this time, ‘supervisord’ is also still only usable with Python 2. If you wanted to run an application using Python 3, this means you wouldn’t be able to use it unless you were okay with also adding Python 2 to your image, resulting in a much fatter Docker image.

The folks at Phusion, in that blog post I referenced earlier, do provide a minimal ‘init’ like process which is implemented as a Python script, but if you are not using Python at all in your image, that means pulling in Python 2 once again when you perhaps don’t want it.

Because of the overheads of bringing in additional packages where you don’t necessarily want them, my preferred solution for a minimal ‘init’ process for handling reaping of zombies and the propagation of signals to the managed process is the ‘tini’ program. This is the same program that the ‘jupyter/notebook’ also makes use of and we saw mentioned in the ‘ENTRYPOINT’ statement of the ‘Dockerfile’.

ENTRYPOINT ["tini", "--"]

All ‘tini’ does is spawn your application and wait for it to exit, all the while reaping zombies and performing signal forwarding. In other words, it is specifically built for this task, relieving you of worrying about whether your own application is going to do the correct thing in relation to reaping of zombie processes.

Even if you believe your application may handle this task okay, I would still recommend that a tool like ‘tini’ be used as it gives you one less thing to worry about.
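
For a feel of what such a minimal init does, the following is a rough Python sketch of the pattern only; ‘tini’ itself is a small C program and this is not its implementation, and the script name used in the comment is just for illustration. It spawns the wrapped command, forwards termination signals to it, and reaps every child that exits, including reparented orphans.

import errno
import os
import signal
import sys

# Spawn the wrapped command, e.g. 'mini-init.py python server_wsgiref.py'.
child_pid = os.fork()

if child_pid == 0:
    os.execvp(sys.argv[1], sys.argv[1:])

def forward_signal(signum, frame):
    # Forward termination signals to the application process.
    os.kill(child_pid, signum)

signal.signal(signal.SIGTERM, forward_signal)
signal.signal(signal.SIGINT, forward_signal)

# Reap every child that exits, including reparented orphans, and exit with
# the application's status once the application process itself has gone.
while True:
    try:
        pid, status = os.waitpid(-1, 0)
    except OSError as exc:
        if exc.errno == errno.EINTR:
            continue
        raise
    if pid == child_pid:
        sys.exit(os.WEXITSTATUS(status) if os.WIFEXITED(status) else 1)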

If you are using a shell script with ‘CMD’ in a ‘Dockerfile’ and subsequently running your application from it, you can still do that, but remember to use ‘exec’ when running your application to ensure that signals will get to it. If you don’t use ‘exec’, your shell script will swallow them up.

IPython and cloud services

We are finally done with improving how IPython can be run with Docker so that it will work with cloud services which host Docker images. The main issue we faced was the additional security restrictions that can be in place when running Docker images in such a service.

In short, running Docker images as ‘root’ is a bad idea. Even if you are running your own Docker service it is something you should avoid if at all possible. Because of the increased risk you can understand why a hosting service is not going to allow you to do it.

With the introduction of user namespace support in Docker the restriction on what user a Docker image can run as should hopefully be able to be relaxed, but in the interim you would be wise to design Docker images so that they can run as an unprivileged user.

Now since there were actually a few things we needed to change to achieve this, and the description of the changes was spread over multiple blog posts, I will summarise the changes in the next post. I will also start to outline what else I believe could be done to make the use of IPython with Docker, and especially cloud services, even better.

Thursday, December 24, 2015

Unknown user when running Docker container.

In the last post we covered how to setup a Docker image to cope with the prospect of a random user ID being used when the Docker container was started. The discussion so far has though only dealt with the issue of ensuring file system access permissions were set correctly to allow the original default user, as well as the random user ID being used, to update files.

A remaining issue of concern was the fact that when a random user ID is used which doesn’t correspond to an actual user account, UNIX tools such as ‘whoami’ will not return valid results.

I have no name!@5a72c002aefb:/notebooks$ whoami
whoami: cannot find name for user ID 10000

Up to this point this didn’t actually appear to prevent our IPython Notebook application working, but it does leave the prospect that subtle problems could arise when we start actually using IPython to do more serious work.

Let’s dig in and see what this failure equates to in the context of a Python application.

Accessing user information

If we are writing Python code, there are a couple of ways using the Python standard library that we could determine the login name for the current user.

The first way is to use the ‘getuser()’ function found in the ‘getpass’ module.

import getpass
name = getpass.getuser()

If we use this from an IPython notebook when a random user ID has been assigned to the Docker container, then just as ‘whoami’ fails, this will also fail.

---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-3-3a0a5fbe1d4e> in <module>()
1 import getpass
----> 2 name = getpass.getuser()
/usr/lib/python2.7/getpass.pyc in getuser()
156 # If this fails, the exception will "explain" why
157 import pwd
--> 158 return pwd.getpwuid(os.getuid())[0]
159
160 # Bind the name getpass to the appropriate function
KeyError: 'getpwuid(): uid not found: 10000'

The error details and traceback displayed here actually indicate the second way of getting access to the login name. In fact the ‘getuser()’ function is just a high level wrapper around a lower level function for accessing user information from the system user database.

We could therefore also have written:

import pwd, os
name = pwd.getpwuid(os.getuid())[0]

Or being more verbose to make it more obvious what is going on:

import pwd, os
name = pwd.getpwuid(os.getuid()).pw_name

Either way, this is still going to fail where the current user ID doesn’t match a valid user in the system user database.
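
If you do need code to keep working when the user ID has no entry in the system user database, a defensive lookup along the following lines is one option. This is just a sketch of my own; the fallback name ‘ipython’ is an arbitrary choice for our image, not something any library mandates.

import os
import pwd

def current_login_name(default='ipython'):
    # Prefer the system user database, but fall back to the environment
    # and then a default when the user ID has no entry in it.
    try:
        return pwd.getpwuid(os.getuid()).pw_name
    except KeyError:
        pass
    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
        user = os.environ.get(name)
        if user:
            return user
    return default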

Environment variable overrides

You may be thinking, why bother with the ‘getuser()’ function if one could use ‘pwd.getpwuid()’ directly. Well it turns out that ‘getuser()’ does a bit more than just act as a proxy for calling ‘pwd.getpwuid()’. What it actually does is first consult various environment variables which identify the login name for the current user.

def getuser():
    """Get the username from the environment or password database.

    First try various environment variables, then the password
    database.  This works on Windows as long as USERNAME is set.

    """

    import os

    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
        user = os.environ.get(name)
        if user:
            return user

    # If this fails, the exception will "explain" why
    import pwd
    return pwd.getpwuid(os.getuid())[0]

These environment variables such as ‘LOGNAME’ and ‘USER’ would normally be set by the login shell for a user. When using Docker though, a login shell isn’t used and so they are not set.

For the ‘getuser()’ function at least, we can therefore get it working by ensuring that, as part of the Docker image build, we set one or more of these environment variables. Typically both the ‘LOGNAME’ and ‘USER’ environment variables are set, so let’s do that.

ENV LOGNAME=ipython
ENV USER=ipython 

If we rebuild our Docker image with this addition to the ‘Dockerfile’ and try ‘getuser()’ again from within an IPython notebook, it does indeed now work.

Overriding user system wide

This change may allow more code to execute without problems, but if code directly accesses the system user database using ‘pwd.getpwuid()’ and doesn’t catch the ‘KeyError’ exception to handle missing user information, you will still have problems.

So although this is still a worthwhile change in its own right, just in case something such as ‘getuser()’ consults the ‘LOGNAME’ and ‘USER’ environment variables which would normally be set by the login shell, it does not help with ‘pwd.getpwuid()’ nor with UNIX tools such as ‘whoami’.

To be able to implement a solution for this wider use case gets a bit more tricky as we need to solve the issue for UNIX tools, or for that matter, any C level application code which uses the ‘getpwuid()’ function in the system C libraries.

The only way one can achieve this is by substituting the system C libraries, or at least overriding the behaviour of key C library functions. This may sound impossible, but by using the Linux facility for forcibly preloading a shared library into executing processes it is actually possible, and someone has even written a package we can use for this purpose.

The nss_wrapper library

The package in question is one called ‘nss_wrapper’. The library provides a wrapper for the user, group and hosts NSS API. Using nss_wrapper it is possible to define your own ‘passwd’ and ‘group’ files which will then be consulted when needing to look up user information.

One way in which this package is normally used is when testing and you need to run applications as a dynamic set of users and don’t want to have to create real user accounts for them. This mirrors our situation, where when using a random user ID we will not actually have a real user account.

The idea behind the library is that prior to starting up your application you would make copies of the system user and group database files and then edit any existing entries or add additional users as necessary. When starting your application you would then force it to preload a shared library which overrides the NSS API functions in the standard system libraries such that they consult the copies of the user and group database files.

The general steps therefore are something like:

ipython@3d0c5ea773a3:/tmp$ whoami
ipython
ipython@3d0c5ea773a3:/tmp$ id
uid=1001(ipython) gid=0(root) groups=0(root)
ipython@3d0c5ea773a3:/tmp$ echo "magic:x:1001:0:magic gecos:/home/ipython:/bin/bash" > passwd
ipython@3d0c5ea773a3:/tmp$ LD_PRELOAD=/usr/local/lib64/libnss_wrapper.so NSS_WRAPPER_PASSWD=passwd NSS_WRAPPER_GROUP=/etc/group id
uid=1001(magic) gid=0(root) groups=0(root)
ipython@3d0c5ea773a3:/tmp$ LD_PRELOAD=/usr/local/lib64/libnss_wrapper.so NSS_WRAPPER_PASSWD=passwd NSS_WRAPPER_GROUP=/etc/group whoami
magic

To integrate the use of the ‘nss_wrapper’ package we need to do two things. The first is install the package and the second is to add a Docker entrypoint script which can generate a modified password database file and then ensure that the ‘libnss_wrapper.so’ shared library is forcibly preloaded for all processes subsequently run.

Installing the nss_wrapper library

At this point in time the ‘nss_wrapper’ library is not available in the stable Debian package repository, still only being available in the testing repository. As we do not want in general to be pulling packages from the Debian testing repository, we are going to have to install the ’nss_wrapper’ library from source code ourselves.

To be able to do this, we need to ensure that the system packages for ‘make’ and ‘cmake’ are available. We therefore need to add these to the list of system packages being installed.

# Python binary and source dependencies
RUN apt-get update -qq && \
    DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
        build-essential \
        ca-certificates \
        cmake \
        curl \
        git \
        make \
        language-pack-en \
        libcurl4-openssl-dev \
        libffi-dev \
        libsqlite3-dev \
        libzmq3-dev \
        pandoc \
        python \
        python3 \
        python-dev \
        python3-dev \
        sqlite3 \
        texlive-fonts-recommended \
        texlive-latex-base \
        texlive-latex-extra \
        zlib1g-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

We can then later on download the source package for ‘nss_wrapper’ and install it.

# Install nss_wrapper.
RUN curl -SL -o nss_wrapper.tar.gz https://ftp.samba.org/pub/cwrap/nss_wrapper-1.1.2.tar.gz && \
    mkdir nss_wrapper && \
    tar -xC nss_wrapper --strip-components=1 -f nss_wrapper.tar.gz && \
    rm nss_wrapper.tar.gz && \
    mkdir nss_wrapper/obj && \
    (cd nss_wrapper/obj && \
     cmake -DCMAKE_INSTALL_PREFIX=/usr/local -DLIB_SUFFIX=64 .. && \
     make && \
     make install) && \
    rm -rf nss_wrapper

Updating the Docker entrypoint

At present the Docker ‘ENTRYPOINT’ and ‘CMD’ are specified in the ‘Dockerfile’ as:

ENTRYPOINT ["tini", "--"]
CMD ["jupyter", "notebook"]

The ‘CMD’ statement in this case is the actual command we want to run to start the Jupyter Notebook application.

We haven’t said anything about what the ‘tini’ program specified by the ‘ENTRYPOINT' is all about as yet, but it is actually quite important. If you do not use ‘tini’ as a wrapper for IPython Notebook then it will not work properly. We will cover what ‘tini’ is and why it is necessary for running IPython Notebook in a subsequent post.

Now because we do require ‘tini’, but also want to do some other work prior to actually running the ‘jupyter notebook’ command, we are going to substitute an entrypoint script in place of ‘tini’. We will call this ‘entrypoint.sh’, make it executable, and place it in the top level directory of the repository. After it is copied into place, the ‘ENTRYPOINT’ specified in the ‘Dockerfile’ will then need to be:

ENTRYPOINT ["/usr/src/jupyter-notebook/entrypoint.sh"]

The actual ‘entrypoint.sh’ we will specify as:

#!/bin/sh

# Override user ID lookup to cope with being randomly assigned IDs using
# the -u option to 'docker run'.

USER_ID=$(id -u)

if [ x"$USER_ID" != x"0" -a x"$USER_ID" != x"1001" ]; then
    NSS_WRAPPER_PASSWD=/tmp/passwd.nss_wrapper
    NSS_WRAPPER_GROUP=/etc/group

    cat /etc/passwd | sed -e 's/^ipython:/builder:/' > $NSS_WRAPPER_PASSWD
    echo "ipython:x:$USER_ID:0:IPython,,,:/home/ipython:/bin/bash" >> $NSS_WRAPPER_PASSWD

    export NSS_WRAPPER_PASSWD
    export NSS_WRAPPER_GROUP

    LD_PRELOAD=/usr/local/lib64/libnss_wrapper.so
    export LD_PRELOAD
fi

exec tini -- "$@"

Note that we still execute ‘tini’ as the last step. We do this using ‘exec’ so that its process will replace the entrypoint script and take over as process ID 1, ensuring that signals get propagated properly, as well as to ensure some details related to process management are handled correctly. We will also pass on all command line arguments given to the entrypoint script to ‘tini’. The double quotes around the arguments reference ensure that argument quoting is handled properly when passing through arguments.

What is new compared to what was being done before is the enabling of the ‘nss_wrapper’ library. We do not do this though when running as ‘root’, that being the case where the Docker image was still forced to run as ‘root’ even though the aim is that it run as a non ‘root’ user. We also do not need to do it when run with the default user ID.

When run as a random user ID we do two things with the password database file that we will use with ‘nss_wrapper’.

The first is that we change the login name corresponding to the existing user ID of ‘1001’. This is the default ‘ipython’ user account we created previously. We do this by simply replacing the ‘ipython’ login name in the password file when we copy it, with the name ‘builder’ instead.

The second is that we add a new password database file entry corresponding to the current user ID, that being whatever is the random user ID allocated to run the Docker container. In this case we use the login name of ‘ipython’.

The reason for swapping the login names so the current user ID uses ‘ipython’ rather than the original user ID of ‘1001’, is so that the application when run will still think it is the ‘ipython’ user. What we therefore end up with in our copy of the password database file is:

docker run -it --rm -u 10000 -p 8888:8888 jupyter-notebook bash
ipython@0ff73693d433:/notebooks$ tail -2 /tmp/passwd.nss_wrapper
builder:x:1001:0:IPython,,,:/home/ipython:/bin/bash
ipython:x:10000:0:IPython,,,:/home/ipython:/bin/bash

Immediately you can already see that the shell prompt now looks correct. Going back and running our checks from before, we now see:

ipython@0ff73693d433:/notebooks$ whoami
ipython
ipython@0ff73693d433:/notebooks$ id
uid=10000(ipython) gid=0(root) groups=0(root)
ipython@0ff73693d433:/notebooks$ env | grep HOME
HOME=/home/ipython
ipython@0ff73693d433:/notebooks$ touch $HOME/magic
ipython@0ff73693d433:/notebooks$ touch /notebooks/magic
ipython@0ff73693d433:/notebooks$ ls -las $HOME
total 24
4 drwxrwxr-x 4 builder root 4096 Dec 24 10:22 .
4 drwxr-xr-x 6 root root 4096 Dec 24 10:22 ..
4 -rw-rw-r-- 1 builder root 220 Dec 24 10:08 .bash_logout
4 -rw-rw-r-- 1 builder root 3637 Dec 24 10:08 .bashrc
4 drwxrwxr-x 2 builder root 4096 Dec 24 10:08 .jupyter
0 -rw-r--r-- 1 ipython root 0 Dec 24 10:22 magic
4 -rw-rw-r-- 1 builder root 675 Dec 24 10:08 .profile

So even though the random user ID didn’t have an entry in the original system password database file, by using ‘nss_wrapper’ we can trick any applications to use our modified password database file for user information. This means we can dynamically generate a valid password database file entry for the random user ID which was used.

With the way we swapped the login name for the default user ID of ‘1001’, with the random user ID, as far as any application is concerned it is still running as the ‘ipython’ user.

So that we can distinguish them, any files that were created during the image build as the original ‘ipython’ user will now instead show as being owned by ‘builder’, which if we look it up maps to user ID ‘1001’.

ipython@0ff73693d433:/notebooks$ id builder
uid=1001(builder) gid=0(root) groups=0(root)
ipython@0ff73693d433:/notebooks$ getent passwd builder
builder:x:1001:0:IPython,,,:/home/ipython:/bin/bash

Running as another named user

Not that there should strictly be a reason for doing so, but it is also possible to force the Docker container to run as some other user ID which has an entry in the password database file. Because such users have their own distinct primary group assignments though, you do have to override the group to be ‘0’ so that the user can update any required directories.

$ docker run -it --rm -u 5 -p 8888:8888 jupyter-notebook bash
games@36ec17b1d9c1:/notebooks$ whoami
games
games@36ec17b1d9c1:/notebooks$ id
uid=5(games) gid=60(games) groups=60(games)
games@36ec17b1d9c1:/notebooks$ env | grep HOME
HOME=/home/ipython
games@36ec17b1d9c1:/notebooks$ touch $HOME/magic
touch: cannot touch ‘/home/ipython/magic’: Permission denied
games@36ec17b1d9c1:/notebooks$ touch /notebooks/magic
touch: cannot touch ‘/notebooks/magic’: Permission denied

$ docker run -it --rm -u 5:0 -p 8888:8888 jupyter-notebook bash
games@e2ecabedab47:/notebooks$ whoami
games
games@e2ecabedab47:/notebooks$ id
uid=5(games) gid=0(root) groups=60(games)
games@e2ecabedab47:/notebooks$ env | grep HOME
HOME=/home/ipython
games@e2ecabedab47:/notebooks$ touch $HOME/magic
games@e2ecabedab47:/notebooks$ touch /notebooks/magic
games@e2ecabedab47:/notebooks$ ls -las $HOME
total 24
4 drwxrwxr-x 4 builder root 4096 Dec 24 10:41 .
4 drwxr-xr-x 6 root root 4096 Dec 24 10:41 ..
4 -rw-rw-r-- 1 builder root 220 Dec 24 10:39 .bash_logout
4 -rw-rw-r-- 1 builder root 3637 Dec 24 10:39 .bashrc
4 drwxrwxr-x 2 builder root 4096 Dec 24 10:39 .jupyter
0 -rw-r--r-- 1 games root 0 Dec 24 10:41 magic
4 -rw-rw-r-- 1 builder root 675 Dec 24 10:39 .profile

Running as process ID 1

Finally, if we start up the IPython Notebook application locally with Docker, or on OpenShift, everything still works okay. Further, as well as the ‘getpass.getuser()’ function working, use of ‘pwd.getpwuid(os.getuid())’ also works, this being due to the use of the ‘nss_wrapper’ library.

So everything is now good and we shouldn’t have any issues. There was though something already present in the way that the ‘jupyter/notebook’ Docker image was set up that is worth looking at. This was the use of the ‘tini’ program as the ‘ENTRYPOINT’ in the ‘Dockerfile’. This relates to problems that can arise when running an application as process ID 1. I will look at what this is all about in the next post.