When I was working on mod_wsgi, and also in a previous job where I worked on web application performance monitoring tools, I was always after good sample Python web applications to test with. Unlike other programming languages used for the web, there weren’t many end user applications written in Python that you could quickly download and get running. Most of what existed were incomplete framework extensions which you still had to customise to get running for your own needs. Even if they did provide a way of starting up a skeletal application so you could at least see what they did, the steps to get them running were often quite complex.
One of the problems with deploying Python web applications you download is that they are often set up to be run in a very specific way with a particular hosting service or WSGI server. This means you can end up spending quite a bit of time fiddling to get them running in your own environment, which made the exercise of using a Python web application for testing quite frustrating at times. I can easily imagine that for users trying to evaluate an Open Source framework extension to see if they could use it, such difficulties in getting it running could be quite a turn off.
At Red Hat, where I now work as part of the OpenShift evangelist team, we are currently running a hackathon (ends 21st September) where the theme is health related applications. I have already seen that there are some Python developers participating in the hackathon, so I thought I would search around to see what existing web applications or framework extensions I could find for the Python programming language, and test out deploying them using my warpdrive package. I wasn’t really expecting to find anything too interesting, but was pleasantly surprised.
One framework extension I found which I thought was quite interesting was called Opal. The Opal framework makes use of Django, along with front end toolkits such as AngularJS and Bootstrap. It was created by Open Health Care UK. The point of this blog post, as well as highlighting what looks like a quite interesting package you can use, is to see how my warpdrive package stacks up when trying to deploy an arbitrary Python web application off the Internet.
Getting Opal running locally
First up is getting Opal running locally. For this, Opal provides some good documentation and also a starter script to get you going. Installation and creation of an initial application I could test with was as simple as running:
pip install opal
opal startproject mynewapp
Once this was done, to start up the generated starter application you run:
cd mynewapp
python manage.py runserver
From a browser you could then visit 'http://localhost:8000' and even log in to the admin interface using a pre-created user account. The latter was possible as Opal has added hooks which are automatically triggered when ‘runserver’ is used, and which set up the database and create a super user account. They have therefore optimised things for the local developer experience when using the builtin Django development server.
What now though if you wanted to deploy Opal to a production environment? They do provide a ‘Procfile’ for Heroku, but don’t provide anything which really helps out if you want to deploy to another WSGI server such as Apache/mod_wsgi or uWSGI, in a container using a local Docker service, or to other PaaS environments such as OpenShift.
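For anyone not familiar with Heroku, a ‘Procfile’ simply declares the command used to start the web process. I haven’t reproduced the exact file Opal generates here, but for a Django project it would typically be something along the lines of:

web: gunicorn mynewapp.wsgi:application

That only covers Heroku though, and says nothing about how to run the application anywhere else.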
Making deployment of Python web applications easy across such different environments is exactly what my warpdrive project is targeting, so let’s now look at using warpdrive to do this.
Preparing the project for warpdrive
With warpdrive already installed, the first thing we want to do is activate a new project using it. In the ‘mynewapp’ directory we run ‘warpdrive project opal’.
$ warpdrive project opal
Initializing warpdrive project 'opal'.
(warpdrive+opal) $
What this command will do is create a new Python virtual environment just for this application and activate it. This will be an empty Python virtual environment, so next we need to install all the Python packages that the project requires.
When we originally created the project using the ‘opal startproject’ command, it conveniently created a ‘requirements.txt’ file for us. This could be used with ‘pip’ to install all the packages, but we aren’t actually going to do that, because warpdrive also knows about ‘requirements.txt’ files and we can use it to install the required packages.
Rather than run ‘pip’ directly, we are therefore going to run ‘warpdrive build’ instead. This will not only ensure that any required Python packages are installed, but also that any other framework specific build steps are run. The output from running ‘warpdrive build’ starts out with:
-----> Installing dependencies with pip (requirements.txt)
Collecting cryptography==1.3.2 (from -r requirements.txt (line 2))
Using cached cryptography-1.3.2-cp27-none-macosx_10_6_intel.whl
Collecting Django==1.8.3 (from -r requirements.txt (line 3))
Using cached Django-1.8.3-py2.py3-none-any.whl
...
Obtaining opal from git+git://github.com/openhealthcare/opal.git@master#egg=opal (from -r requirements.txt (line 18))
Cloning git://github.com/openhealthcare/opal.git (to master) to /tmp/warpdrive-build.12067/opal
...Installing collected packages: pycparser, cffi, pyasn1, six, idna, ipaddress, enum34, cryptography, Django, coverage, dj-database-url, gunicorn, psycopg2, static3, dj-static, django-reversion, django-axes, ffs, MarkupSafe, jinja2, letter, requests, djangorestframework, django-appconf, django-compressor, meld3, supervisor, python-dateutil, pytz, billiard, anyjson, amqp, kombu, celery, django-celery, opal
Running setup.py develop for opal
Successfully installed Django-1.8.3 MarkupSafe-0.23 amqp-1.4.9 anyjson-0.3.3 billiard-3.3.0.23 celery-3.1.19 cffi-1.7.0 coverage-3.6 cryptography-1.3.2 dj-database-url-0.2.1 dj-static-0.0.6 django-appconf-1.0.2 django-axes-1.4.0 django-celery-3.1.17 django-compressor-1.5 django-reversion-1.8.7 djangorestframework-3.2.2 enum34-1.1.6 ffs-0.0.8.1 gunicorn-0.17.4 idna-2.1 ipaddress-1.0.16 jinja2-2.8 kombu-3.0.35 letter-0.4.1 meld3-1.0.2 opal psycopg2-2.5 pyasn1-0.1.9 pycparser-2.14 python-dateutil-2.4.2 pytz-2016.6.1 requests-2.7.0 six-1.10.0 static3-0.7.0 supervisor-3.0
Collecting mod_wsgi
Installing collected packages: mod-wsgi
Successfully installed mod-wsgi-4.5.3
One thing of note here is that when ‘pip’ is run it is actually trying to install Opal direct from the Git repository on GitHub. This is because the ‘requirements.txt’ file generated by ‘opal startproject’ contains:
-e git://github.com/openhealthcare/opal.git@master#egg=opal
As far as deploying to a production environment goes, pulling package code direct from a Git repository, and especially from the head of the master branch, isn’t necessarily the best idea. We instead want to ensure that we are always using a known specific version of the package which we have tested with. To remedy this, this time we will run ‘pip’ directly, but only to uninstall the version of ‘opal’ installed so it will not cause a problem when trying to reinstall it from PyPI.
pip uninstall opal
We now edit the ‘requirements.txt’ file and replace that line with:
opal==0.7.0
Worth highlighting is that this isn’t being done specially because of warpdrive. It is simply good practice to be using pinned versions of packages in a production environment so you know what you are getting. I can only imagine the ‘requirements.txt’ file is generated in this way because it makes testing easier for the Opal developers while they are working on it.
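As an aside, if you wanted to pin everything that ends up being installed, and not just the top level packages, one common way of capturing the exact versions you tested with is to have ‘pip’ generate the list for you, although you would normally want to review and prune the result rather than use it blindly:

pip freeze > requirements.txt

For this exercise though, pinning just the version of the ‘opal’ package is enough.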
Having fixed that, we can rerun ‘warpdrive build’ and it will trigger ‘pip’ once more to ensure we have the packages we need installed, and since we removed the ‘opal’ package, it will now install the version we actually want.
Beyond installing any required Python packages, one other thing that warpdrive will do is realise that the Django web framework is being used and automatically trigger the Django ‘collectstatic’ command to collate together any static files used by the application. The next thing we therefore see in the output of ‘warpdrive build’, after package installation, is:
-----> Collecting static files for Django
...
OSError: [Errno 2] No such file or directory: '/Users/graham/Projects/openshift3-opal/mynewapp/mynewapp/static'
Unfortunately this step fails. The reason it fails is that the Django settings module for the generated Opal project contains:
# Additional locations of static files
STATICFILES_DIRS = (
os.path.join(PROJECT_PATH, 'static'),
)
With this setting, when ‘collectstatic’ is run, it expects that directory to actually exist and if it doesn’t it will fail.
This is easily fixed by creating the directory:
mkdir mynewapp/static
The directory should, though, have been created automatically by the ‘opal startproject’ command. That it isn’t has already been addressed for version 0.7.1 of Opal.
After fixing this, ‘warpdrive build’ then completes successfully. It may have looked a bit messy, but that was only because we had to correct two things related to the project that ‘opal startproject’ created for us. We could simply have left it using the ‘opal’ package from the Git repository, but I felt it better to clarify what best practice is in this case.
Starting up the application
When we first started up the application we used Django’s builtin development server. This server should not, though, be used in a production system. Instead you should use a production grade WSGI server such as Apache/mod_wsgi, gunicorn, uWSGI or Waitress. For most use cases any of these WSGI servers will be suitable, but depending on your specific requirements you may find one more appropriate than the others.
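To give a concrete idea of what setting up even one of these servers yourself involves, a minimal hand-run gunicorn setup for this project might look something like the following. This is only a sketch; it serves the WSGI application, but does nothing about hosting the static files, logging, or tuning of processes and threads.

gunicorn --bind 0.0.0.0:8000 --workers 3 mynewapp.wsgi:application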
Setting up a project for even one WSGI server can still be a challenge in itself for many people. Trying to set up a project for more than one WSGI server so you can compare them only replicates the pain. Usually people will muck up the configuration of one or the other and get a completely incorrect impression of which one may actually be better.
In addition to aiming to simplify the build process, another aim of warpdrive is therefore to make it much easier to run up your WSGI application, no matter what WSGI server you want to use. This means you can get started much more quickly, but it also gives you the flexibility to swap between different WSGI servers.
That said, having prepared our application for warpdrive, to actually run it up and have it start accepting web requests, all we now need to do is run ‘warpdrive start’.
-----> Configuring for deployment mode: of 'auto'
-----> Default WSGI server type is 'mod_wsgi'
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)-----> Running server script start-mod_wsgi
-----> Executing server command 'mod_wsgi-express start-server --log-to-terminal --startup-log --port 8080 --application-type module --entry-point mynewapp.wsgi --callable-object application --url-alias /assets/ /Users/graham/Projects/openshift3-opal/mynewapp/mynewapp/assets/'
Server URL : http://localhost:8080/
Server Root : /tmp/mod_wsgi-localhost:8080:502
Server Conf : /tmp/mod_wsgi-localhost:8080:502/httpd.conf
Error Log File : /dev/stderr (warn)
Startup Log File : /dev/stderr
Request Capacity : 5 (1 process * 5 threads)
Request Timeout : 60 (seconds)
Queue Backlog : 100 (connections)
Queue Timeout : 45 (seconds)
Server Capacity : 20 (event/worker), 20 (prefork)
Server Backlog : 500 (connections)
Locale Setting : en_AU.UTF-8
[Mon Aug 01 14:20:13.092722 2016] [mpm_prefork:notice] [pid 13220] AH00163: Apache/2.4.18 (Unix) mod_wsgi/4.5.3 Python/2.7.10 configured -- resuming normal operations
[Mon Aug 01 14:20:13.092995 2016] [core:notice] [pid 13220] AH00094: Command line: 'httpd (mod_wsgi-express) -f /tmp/mod_wsgi-localhost:8080:502/httpd.conf -E /dev/stderr -D FOREGROUND'
And that is it. Our Opal application is now running.
You may be thinking at this point that using ‘runserver’ is just as easy, so what is the point? But if you look closely at the output of ‘warpdrive start’, you will see that the Django development server is not being used. Instead Apache/mod_wsgi is being used, that is, a production grade WSGI server. Not only that, you didn’t have to configure anything; all the setup and running of Apache and mod_wsgi was done for you.
Using a different WSGI server such as uWSGI is not any harder. In the case of uWSGI, all you would need to do is create the file ‘.warpdrive/server_type’ and place ‘uwsgi’ in it, to override the default of using Apache/mod_wsgi. Then run ‘warpdrive build’ and once again ‘warpdrive start’ and you will instead be running with uWSGI.
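In other words, something like the following, with the ‘mkdir -p’ being harmless if the ‘.warpdrive’ directory already exists:

mkdir -p .warpdrive
echo uwsgi > .warpdrive/server_type
warpdrive build
warpdrive start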
-----> Configuring for deployment mode: of 'auto'
-----> Default WSGI server type is 'uwsgi'
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)-----> Running server script start-uwsgi
-----> Executing server command 'uwsgi --master --http-socket :8080 --enable-threads --threads=5 --thunder-lock --single-interpreter --die-on-term --module mynewapp.wsgi --callable application --static-map /assets/=/Users/graham/Projects/openshift3-opal/mynewapp/mynewapp/assets/'
[uwsgi-static] added mapping for /assets/ => /Users/graham/Projects/openshift3-opal/mynewapp/mynewapp/assets/
*** Starting uWSGI 2.0.13.1 (64bit) on [Mon Aug 1 14:27:02 2016] ***
...
In all cases, no matter which WSGI server you are using, warpdrive will worry about ensuring that a minimum sane set of options is provided to the WSGI server, as well as any options required for the specific WSGI application. In this case warpdrive even handled the task of making sure the WSGI server knew how to host the static files the application needs.
Initialising an application database
Our Opal application is again running and we can access it via the browser at 'http://localhost:8080/'. Do so, though, and we encounter a new problem.
Exception Type: OperationalError
Exception Value: no such table: axes_accessattempt
This gets back to that magic that was being done when ‘runserver’ was being used. Specifically, the ‘runserver’ command had been set up to also automatically ensure that the database being used was initialised and that an initial account was created.
Doing that for a development system is fine, but you would have to be careful about automating it in a production system. For starters, although in a development system you can use a file based database such as SQLite, in production you are more likely to be using a database such as MySQL or PostgreSQL. These will be handling your real data and so you have to be much more careful in what you do with those databases.
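Switching databases is a Django configuration matter rather than anything specific to warpdrive. Since the generated ‘requirements.txt’ already includes ‘dj-database-url’ and ‘psycopg2’, one common approach, shown here only as a rough sketch and not necessarily how the Opal generated settings file is structured, is to drive the database settings from an environment variable:

# In the Django settings module. DATABASE_URL is an assumed environment
# variable, e.g. postgres://user:password@dbhost:5432/opal, with SQLite
# used as the fallback for local development when it is not set.
import dj_database_url

DATABASES = {
    'default': dj_database_url.config(default='sqlite:///db.sqlite3')
}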
Django, which Opal is based on, provides two management commands for initialising a database and creating accounts. These are ‘migrate’ and ‘createsuperuser’. The ‘migrate’ command actually serves two purposes. It can be used to initialise the database in the first place, but also to perform database migrations when the database model used by the application code changes.
These are slightly magic steps which require you to know how Django works in order to know to run them. When they were automatically triggered by ‘runserver’ you didn’t have to know how to run them, as that knowledge was coded into the scripts triggered by ‘runserver’.
As codifying such steps is beneficial from the standpoint of ensuring that they are captured and always done the same way, warpdrive provides a mechanism called action hooks for recording what these steps are. You can then get warpdrive to run them for you and you don’t have to know the details. You can embed in the action hook as much magic as you need to, including steps like ensuring that your database is actually running before attempting anything, or allowing the details of accounts to be created to be supplied through environment variables or configuration files.
As an example, let’s create our first action hook. This we will save away in the file ‘.warpdrive/action_hooks/setup’.
#!/bin/bash
echo " -----> Running Django database migration"
python manage.py migrate
if [ x"$DJANGO_ADMIN_USERNAME" != x"" ]; then
echo " -----> Creating predefined Django super user"
(cat - | python manage.py shell) << !
from django.contrib.auth.models import User;
User.objects.create_superuser('$DJANGO_ADMIN_USERNAME',
'$DJANGO_ADMIN_EMAIL',
'$DJANGO_ADMIN_PASSWORD')
!
else
if (tty > /dev/null 2>&1); then
echo " -----> Running Django super user creation"
python manage.py createsuperuser
fi
fi
This captures the steps we need to initialise the database and create an initial account. For the account creation, we can supply the details via environment variables, or, if we are running in an interactive shell, the hook will prompt us for them. Having created this action hook we can now run ‘warpdrive setup’.
-----> Running .warpdrive/action_hooks/setup
-----> Running Django database migration
Operations to perform:
Synchronize unmigrated apps: search, staticfiles, axes, messages, compressor, rest_framework
Apply all migrations: sessions, admin, opal, sites, auth, reversion, contenttypes, mynewapp
Synchronizing apps without migrations:
Creating tables...
Creating table axes_accessattempt
Creating table axes_accesslog
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
...
Applying sites.0001_initial... OK
-----> Running Django super user creation
Username (leave blank to use 'graham'):
Email address: graham@example.com
Password:
Password (again):
Superuser created successfully.
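In this case I let the hook prompt me for the account details. Because the hook as written also checks environment variables, you could instead run it non-interactively with something like the following, where the account values are purely illustrative and the assumption is that warpdrive passes your shell environment through to the hook:

DJANGO_ADMIN_USERNAME=admin \
DJANGO_ADMIN_EMAIL=admin@example.com \
DJANGO_ADMIN_PASSWORD=changeme \
warpdrive setup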
Running ‘warpdrive start’ once again we find the application is now all working fine and we can log in with the account we created.
The contents of the ‘setup’ script are typical of what is required for database initialisation when using Django. The other set of actions we want to capture for Django is what needs to be done when migrating the database after database model changes. These we can capture in the file ‘.warpdrive/action_hooks/migrate’.
#!/bin/bash
echo " -----> Running Django database migration"
python manage.py migrate
The reason it is better to capture these commands as action hooks and have warpdrive execute them for you is that the commands are now a part of your application code. You don’t need to go and look up some documentation to remember what the steps are. All you need to remember is the commands ‘warpdrive setup’ and ‘warpdrive migrate’.
Another important reason is that if there are any special environment variables which need to be set to replicate the actual environment when your web application is run, warpdrive will worry about setting those as well. This means you wouldn’t need to remember to set some special value for the ‘DJANGO_SETTINGS_MODULE’ environment variable in order to run the Django management commands directly. The warpdrive command will know what is required and set it up for you based on what you have captured about that in your application code.
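To illustrate, if you were running the management commands yourself using ‘django-admin’ rather than the project’s ‘manage.py’, you would need to remember to supply the settings module each time, presumably ‘mynewapp.settings’ for the project generated here:

DJANGO_SETTINGS_MODULE=mynewapp.settings django-admin migrate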
Moving to a production environment
Using ‘warpdrive’ in our local environment has allowed us to more easily use a production grade WSGI server during development. Using the same WSGI server as we will use in production means we are more likely to pick up problems which will not show up using the development server.
The action hooks feature of warpdrive has also meant we have captured those important steps we need to run to initialise any database, and to later perform database migrations when we make changes to our database model.
That is a good start, but what now if we want to run our Opal application in a production environment?
The first example of how we might want to do that is to use Docker. For that though we first need to create a Docker image which contains our application, along with any WSGI server and configuration needed to start up the application.
This step is where people often waste quite a lot of time. Developers can’t resist new toys to play with, and so they feel it is imperative that they learn everything about this new Docker tool, throwing away any wisdom they may have accumulated about best practices over time and starting from scratch, building up their own special Docker image piece by piece.
More often than not this results in a poorly constructed Docker image that doesn’t follow best practices and may well be insecure, running as root and needing to be run in a way that could allow someone who compromises your web application to break into your wider systems.
With warpdrive there is a much better way of moving to Docker. That is to have warpdrive build the Docker image for you. You don’t need to know anything about creating Docker images as warpdrive will build up the image ensuring that best practices are being used.
To package our Opal application up into a Docker image, all we need to do is run the ‘warpdrive image’ command.
(warpdrive+opal) $ warpdrive image opal
I0801 15:35:29.321041 14900 install.go:251] Using "assemble" installed from "image:///opt/app-root/s2i/bin/assemble"
I0801 15:35:29.321223 14900 install.go:251] Using "run" installed from "image:///opt/app-root/s2i/bin/run"
I0801 15:35:29.321280 14900 install.go:251] Using "save-artifacts" installed from "image:///opt/app-root/s2i/bin/save-artifacts"
---> Installing application source
---> Building application from source
-----> Installing dependencies with pip (requirements.txt)
Collecting cryptography==1.3.2 (from -r requirements.txt (line 2))
Downloading cryptography-1.3.2.tar.gz (383kB)
Collecting Django==1.8.3 (from -r requirements.txt (line 3))
Downloading Django-1.8.3-py2.py3-none-any.whl (6.2MB)
...
-----> Collecting static files for Django
78 static files copied to '/opt/app-root/src/mynewapp/assets', 465 unmodified.
---> Fix permissions on application source
This should look familiar to you as in building the Docker image it is using the same ‘warpdrive build’ command that you used in your local environment. This is being done within a Docker base image which has already been set up with Python and warpdrive.
By using the same tooling, in the form of warpdrive, in your local environment as well as in constructing the Docker image, you have a better guarantee that things are being set up in the same way and will also run in the same way. This removes the disparity that usually exists between working in a local environment and what you have in your production environment.
The final result of that ‘warpdrive image’ command is that you now have a Docker image named ‘opal’ which can be run using ‘docker run’.
(warpdrive+opal) $ docker run --rm -p 8080:8080 opal
---> Executing the start up script
-----> Configuring for deployment mode: of 'auto'
-----> Default WSGI server type is 'mod_wsgi'
Python 2.7.12 (default, Jul 29 2016, 00:52:26)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)-----> Running server script start-mod_wsgi
-----> Executing server command 'mod_wsgi-express start-server --log-to-terminal --startup-log --port 8080 --application-type module --entry-point mynewapp.wsgi --callable-object application --url-alias /assets/ /opt/app-root/src/mynewapp/assets/'
Server URL : http://localhost:8080/
Server Root : /tmp/mod_wsgi-localhost:8080:1001
Server Conf : /tmp/mod_wsgi-localhost:8080:1001/httpd.conf
Error Log File : /dev/stderr (warn)
Startup Log File : /dev/stderr
Request Capacity : 5 (1 process * 5 threads)
Request Timeout : 60 (seconds)
Queue Backlog : 100 (connections)
Queue Timeout : 45 (seconds)
Server Capacity : 20 (event/worker), 20 (prefork)
Server Backlog : 500 (connections)
Locale Setting : en_US.UTF-8
[Mon Aug 01 05:49:45.774485 2016] [mpm_event:notice] [pid 20:tid 140425572333312] AH00489: Apache/2.4.23 (Unix) mod_wsgi/4.5.3 Python/2.7.12 configured -- resuming normal operations
[Mon Aug 01 05:49:45.774622 2016] [core:notice] [pid 20:tid 140425572333312] AH00094: Command line: 'httpd (mod_wsgi-express) -f /tmp/mod_wsgi-localhost:8080:1001/httpd.conf -E /dev/stderr -D MOD_WSGI_MPM_ENABLE_EVENT_MODULE -D MOD_WSGI_MPM_EXISTS_EVENT_MODULE -D MOD_WSGI_MPM_EXISTS_WORKER_MODULE -D MOD_WSGI_MPM_EXISTS_PREFORK_MODULE -D FOREGROUND'
Just as ‘warpdrive build’ was used in constructing the Docker image, the ‘warpdrive start’ command is also used when the final container is run.
At this point we are still only using the file based SQLite database, which will not survive beyond the life of the container, and we also still need to initialise that database. For the latter we can again use the ‘warpdrive setup’ command, this time via ‘docker exec’.
$ docker exec -it berserk_galileo warpdrive setup
-----> Running .warpdrive/action_hooks/setup
-----> Running Django database migration
Operations to perform:
Synchronize unmigrated apps: compressor, staticfiles, search, messages, rest_framework, axes
Apply all migrations: sessions, contenttypes, admin, mynewapp, sites, reversion, auth, opal
Synchronizing apps without migrations:
Creating tables...
Creating table axes_accessattempt
Creating table axes_accesslog
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
...
-----> Running Django super user creation
Username (leave blank to use 'default'): graham
Email address: graham@example.com
Password:
Password (again):
Superuser created successfully.
This is where the benefit of having captured all those steps to initialise the database in an action hook comes into play. You only need to know the one command and not all the individual commands.
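The same environment variable trick used locally also works here: supply the account details when starting the container, then run the hook via ‘docker exec’ from a second terminal, and it will not need to prompt. The container name and account values here are purely illustrative:

docker run --rm -p 8080:8080 --name opal \
    -e DJANGO_ADMIN_USERNAME=admin \
    -e DJANGO_ADMIN_EMAIL=admin@example.com \
    -e DJANGO_ADMIN_PASSWORD=changeme \
    opal

docker exec opal warpdrive setup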
Making an application production ready
As you can see, warpdrive can certainly help to simplify getting a Python web application running, and getting it into production using a container runtime such as Docker. Part of the benefit of using warpdrive is that it handles the WSGI server for you, but there are also features like action hooks, which help to ensure you capture the important key steps around how to set up your application.
There is still more to getting an application production ready than just this though. Especially when using containers, because the running container is ephemeral and any local data is lost when the container stops, it is important to use an external database or persistent storage. Work also needs to be done around how you configure your application, as well as how you log information from it.
In followup posts I will start to delve into these issues and how warpdrive can help you configure your application for the target environment. I will also go into detail about how warpdrive can be used with PaaS offerings such as OpenShift.
Comments

Looks like the next Opal version will also switch away from the generated requirements.txt file installing the opal package from the Git repository, according to https://github.com/openhealthcare/opal/pull/804/files
Hi Graham,
Thanks for your post, and for the things it nudged us to fix and think about!
Deployment in settings other than the ones we routinely use at Open Health Care isn't something we've spent a lot of time on thus far in Opal's development. We often deploy to Heroku, EC2 and some private infrastructure owned by individual hospitals, with a mixture of Ansible and Fabric automation.
The hooks for e.g. running migrations or creating users are there to be nice for development - they certainly aren't intended to be used in production deployments!
Thanks to this post, I've been taking a look at warpdrive and Docker - unfortunately I'm struggling to get the Docker image to build (I think I may have run into this issue upstream with s2i: https://github.com/openshift/source-to-image/issues/475) but we'll take another look and see if we can get past that, and hopefully get one of our real world applications dockerised and running on something like OpenShift.
In the meantime do let us know if you have any other feedback or thoughts on the project.
Brief point of order - those extra hooks on top of Django's standard procedure, like creating users and running migrations, are actually run at the 'opal startproject' stage rather than being triggered by runserver.
My mistake over where database setup was done then. I thought I had removed the SQLite database and it had all come back. I may have run my warpdrive stuff in between though and it had recreated it.
As to the S2I issue, I thought I had fixed the place in my S2I wrappers for warpdrive where that problem came up. So curious as to exactly what you are trying to do and with what. Might be easier to drop me an email direct about it, or use the mod_wsgi mailing list as a place for discussion.
Okay, I probably haven't done anything to address that s2i issue. Was getting it confused with another issue which I had done something for. Has been a busy couple of weeks because of travelling so had forgotten about the /bin/env issue.
Should have added the solution, which is to use s2i version 1.0.9 for now. My other choices are to hope for a proper fix of some sort, or create a symlink from /bin/env to /usr/bin/env in my Debian based images.
I have updated my Docker images with the /bin/env workaround now and so the 'warpdrive image' command should work fine.
ReplyDeleteHi Dumpleton, sorry to contact you from here.
I have a Django app deployed in AWS EB using autoscaling. This app uses Django REST Framework with token authentication. In order for this to work, I have to add the following lines in the /etc/httpd/conf.d/wsgi.conf file:
RewriteEngine on
RewriteCond %{HTTP:Authorization} ^(.*)
RewriteRule .* - [e=HTTP_AUTHORIZATION:%1]
WSGIPassAuthorization On
The problem is that when AWS does an autoscale or an Elastic Beanstalk environment upgrade, the wsgi.conf file is updated and the custom settings are deleted.
How can I avoid that?
Thanks in advance
@Ronaldo If you need help, the mod_wsgi documentation explains how to get it. See http://modwsgi.readthedocs.io/en/develop/finding-help.html