This post is heavily inspired by a presentation I saw at the last Paris.py meetup.

The speaker (Nicolas Mussat, CTO at MeilleursAgent.com) was kind enough to allow me to publish this on my blog.

You can find the original slides (in French) on slideshare.

The problem

Let’s say you are building a flask application powered by uwsgi and you want to deploy it to your server.

Here, I’m going to assume you are not using a service like heroku or any kind of Platform-as-a-Service.

There are plenty of cases when using a PaaS is a good idea, but let’s assume that for various reasons you’d like to use your own server.

The really easy way (too easy :P)

You could be tempted to use something like:

# in requirements.txt
flask
path.py

# on the server:
$ cd my-app
$ git pull
$ pip install -r requirements.txt
$ systemctl restart uwsgi

This has several problems:

  • What about when flask gets updated? You may end up with a version that is no longer compatible with the rest of your source code.

  • The same issue may occur if one of the dependencies of flask gets updated in a non-backward compatible way.

  • It’s complicated to roll back to a previous version.

  • Bad things will happen if GitHub or PyPI is down.

  • How do you make sure all tests have been run and that they pass?

There has to be a better way! 1

Separate build and deployment

A good idea is to separate building and testing the application from deploying it. 2

If you think about it, you can separate the files used by your code in two parts:

  • The runtime: the Python interpreter itself, the uwsgi service, and so on.

  • The application: all the source code and all its dependencies (which could be pure Python or compiled C extensions).

If you build your package on the same Linux distribution as your server, then you can assume that all the files of the application built on one machine will run on the other.

In the worst-case scenario, you’ll build a C extension (say, one talking to PostgreSQL) that crashes if the PostgreSQL libraries are not installed on the server, but this is easy to fix and should not happen too often. 3

Of course, you could imagine using containers à la Docker, but that goes beyond the scope of this article. 4

So, to build the application package, we have to:

  • Handle the versions of all the dependencies, recursively
  • Bundle all the dependencies with the rest of the code.

Building the package

Handle dependencies

This can be done with three different requirements.txt files:

  • requirements-dev.txt: used by developers and continuous integration servers to run the tests, the linters, build the documentation, and whatnot.

  • requirements-app.txt: the dependencies of the application itself

  • requirements.txt: the list of the “frozen” dependencies, generated by something like:

$ virtualenv ~/my-app
$ source ~/my-app/bin/activate
$ pip install -r requirements-app.txt
$ pip freeze > requirements.txt
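The frozen file then pins the exact version of every transitive dependency. It ends up looking something like this (the version numbers shown are only illustrative — yours will differ):

```
# requirements.txt (generated by pip freeze)
click==6.6
Flask==0.11.1
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
Werkzeug==0.11.11
```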

By the way, if this looks really close to what Ruby does with its Gemfile and Gemfile.lock files, and you think pip should better support this kind of workflow, you are not alone.

If you are concerned about security here, you could, for instance, use pip’s hash-checking mode and add a --hash option to every frozen dependency.
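For instance, with pip’s hash-checking mode, once every pinned line in requirements.txt carries a --hash option, pip refuses to install any archive whose digest does not match. The hash values below are placeholders, not real digests:

```
# requirements.txt (hashes are placeholders, not real digests)
flask==0.11.1 \
    --hash=sha256:<digest-of-the-flask-0.11.1-archive>
itsdangerous==0.24 \
    --hash=sha256:<digest-of-the-itsdangerous-0.24-archive>
```

Running pip install --require-hashes -r requirements.txt then fails loudly on any mismatch, or on any dependency missing a hash.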

Using a target directory for pip

The first trick is to tell pip to install the dependencies not in a virtualenv or in your system, but rather in a special vendors directory:

$ pip install --target vendors -r requirements.txt
.
├── app.py
├── requirements.txt
└── vendors
    ├── click
    ├── click-6.6.dist-info
    ├── flask
    ├── Flask-0.11.1.dist-info
    ├── itsdangerous-0.24.dist-info
    ├── itsdangerous.py
    ├── jinja2
    ├── Jinja2-2.8.dist-info
    ├── markupsafe
    ├── MarkupSafe-0.23.dist-info
    ├── werkzeug
    └── Werkzeug-0.11.11.dist-info

Configuring the interpreter

The second trick is to use the site module and call addsitedir:

import site
site.addsitedir('vendors')

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return r'\o/'  # raw string: '\o' is an invalid escape sequence

You could get away with setting PYTHONPATH or using sys.path.insert(0, 'vendors'), but using addsitedir will make sure Python sees the .pth files, which may be required by some modules to work.
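To see the difference, here is a small self-contained sketch (the directories are temporary ones created just for the demo): a .pth file dropped in the vendors directory is honored by addsitedir, whereas a plain sys.path manipulation would ignore it.

```python
import os
import site
import sys
import tempfile

# A throwaway "vendors" directory containing a .pth file that
# points at another directory (as some installed modules require).
vendors = tempfile.mkdtemp()
extra = tempfile.mkdtemp()
with open(os.path.join(vendors, "extra.pth"), "w") as f:
    f.write(extra + "\n")

site.addsitedir(vendors)

# addsitedir added the directory itself *and* processed the .pth file:
assert vendors in sys.path
assert extra in sys.path

# A plain sys.path.insert(0, vendors) would only have added 'vendors',
# ignoring extra.pth entirely.
```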

Using setup.py to build the archive

To build the package, there is no need to reinvent the wheel: you can use setup.py directly:

# setup.py

from setuptools import setup

setup(
  name="myapp",
  version="0.1",
  py_modules=["app"],
  ...
)
# MANIFEST.in

include myapp.conf
...
graft vendors

Here we make sure the configuration file for the application and the whole vendors directory are included in the final package. 5

With this in place, you can now build the package like this:

$ python setup.py sdist
$ ls dist/
myapp-0.1.tar.gz

Deployment pipeline

Now that we know how to generate the package, here’s an example of how to implement continuous delivery:

  • Build

    • A developer pushes a tag to the git repository
    • A Jenkins job is triggered and the tests are run
    • If they pass, a package is built and uploaded to local storage
  • Deployment

    • The latest package is fetched from the storage
    • The whole archive gets uploaded to the server
    • The uwsgi service is restarted on the server

The deployment phase can be managed by a tool like Fabric or Ansible, but that’s an exercise left to the reader :)
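As a very rough sketch of what such a script might do (the host name, paths, and the deploy_commands helper are all made up for illustration), the deployment steps above can be expressed as:

```python
def deploy_commands(version, host="deploy@server", root="/srv/myapp"):
    """Return the shell commands a minimal deploy script would run.

    Purely illustrative: real tools like Fabric or Ansible add error
    handling, atomic release directories, rollbacks, etc.
    """
    archive = f"myapp-{version}.tar.gz"
    return [
        # upload the package built by `python setup.py sdist`
        f"scp dist/{archive} {host}:{root}/releases/",
        # unpack it on the server
        f"ssh {host} 'tar xzf {root}/releases/{archive} -C {root}/current'",
        # restart the service so uwsgi picks up the new code
        f"ssh {host} 'sudo systemctl restart uwsgi'",
    ]

cmds = deploy_commands("0.1")
```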

Another solution

I’ve also stumbled upon another article on the same topic: How We Deploy Python Code, by @nylas.

You might prefer this one.

Personally, I’m still using the “really easy” method for my pet projects, so, as always, try and experiment for yourself before trusting a random stranger on the Internet :)


  1. Famous quote from Raymond Hettinger; I suggest you go watch his many Python talks. [return]
  2. For more about this, feel free to read Release Management Done Right, by Alex Papadimoulis [return]
  3. Using a tool like ansible might help. [return]
  4. IMHO, using Docker for simple Python applications is overkill anyway, and may not work as well in production as you’d think [return]
  5. graft is one of the many available commands in a MANIFEST.in file. [return]