Why would you want to run Django inside Docker on your local machine? Don’t you already have enough moving parts to keep running?
I try to answer that question here; see if the answer applies to your use case. This post shows you how to do it.
At the end of this post you will have:
- Set up Docker locally on your dev box.
- Run Django in a Docker container on the same dev box.
- Put a breakpoint in the code and debugged it!

Prerequisite: Docker installed locally. For the purposes of this proof-of-concept, I used Docker Desktop.
Minimal Docker Setup
Our minimal docker setup will:
- run a relational database: Postgres
- directly run Django’s `runserver` command, which is what should be done for debugging purposes
Our minimal docker setup will not:
- run a webserver, such as Nginx
- run `uwsgi` as “glue” between the framework (Django code) and the webserver
Since the objective is local development with Docker, neither is needed.
Minimal Docker Understanding
If some Docker concepts are still not clear to you, do not worry. I have to search for new stuff myself all the time.
When I started, I found this article really helpful: 6 Docker Basics You Should Completely Grasp When Getting Started. This article explains the relationship and key differences between concepts such as:
- Port Forwarding
- Docker Compose
A lifesaver if you’re confused by the barrage of new Docker jargon. I was. In this post we’ll be setting up:
- one `Dockerfile`, and
- one Docker Compose file.
Set up the Django project locally
Create a `Dockerfile` in your Django project root with these contents:
```dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
```
Let’s decompose this:
Line 1 picks an image: `FROM python:3` instructs Docker to start with the `python:3` image. It’s common to see the “alpine” version for Python images. Alpine Linux is much smaller than most distribution base images, and leads to slimmer images in general. You can read more about Python images and image variants here.
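As an aside, here is a sketch of what the Alpine variant could look like. The `apk` package names are an assumption for a Postgres-backed project: wheels with C extensions (like `psycopg2`) often need build tooling added back on Alpine, which eats into the size savings.

```dockerfile
# Hypothetical Alpine variant of line 1, with the build dependencies
# that packages with C extensions typically require on Alpine.
FROM python:3-alpine
RUN apk add --no-cache build-base postgresql-dev
```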
Line 2 sets the environment variable `PYTHONUNBUFFERED` to `1`. What is this? Normally, when your application writes output (or another process pipes data into it), the stdio layer may buffer the data: it keeps it in a reservoir until a size limit or a certain character (generally a newline or EOF) is reached, and only then hands over the entire chunk at once. The same applies to output data and error data (`stdout` and `stderr`). This option asks Python not to buffer. More detail on this option here.
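To see the effect, here is a small sketch (plain Python, nothing Django-specific): a child process writes to stdout and then sleeps, and with `PYTHONUNBUFFERED=1` the parent can read the output immediately instead of waiting for the child to exit.

```python
import os
import subprocess
import sys
import time

# Child writes "ready" without a newline, then sleeps. With buffered
# stdout, "ready" would only be flushed when the child exits.
child = "import sys, time; sys.stdout.write('ready'); time.sleep(2)"

proc = subprocess.Popen(
    [sys.executable, "-c", child],
    stdout=subprocess.PIPE,
    env={**os.environ, "PYTHONUNBUFFERED": "1"},  # same effect as the ENV line
)
start = time.monotonic()
chunk = proc.stdout.read(5)  # blocks until 5 bytes arrive
elapsed = time.monotonic() - start
proc.kill()
proc.wait()
print(chunk)  # b'ready', well before the 2-second sleep is over
```

This matters in Docker because `docker-compose` collects your container’s output: without the variable, log lines and tracebacks can show up late or not at all when the process crashes.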
The remaining instructions in lines 3-7:
- create a `/code` directory at root level
- install the python packages (no virtualenv needed in the container to begin with)
- copy the full project directory into it

In short, the `Dockerfile`:
- selects a base image for us, and
- configures it for us to run things on top of it by installing the required packages and copying over our Django project code.
Q: So how do we run this container?
A: We’ll use Docker Compose, covered below.
Q: But the container above runs only Django. Don’t we need a container for Postgres?
A: We do not need to configure a Postgres container ourselves: Docker Hub provides an official Postgres image which we can just start up. We then log into it and configure it as if it were running locally.
Q: Shall we write a shell script and execute a docker process on our local machine for both containers?
A: No. Docker provides `docker-compose` for exactly this, so we don’t need to rely on shell scripts.
The big advantage of a `docker-compose.yml` file is that it’s very readable. Create this `docker-compose.yml` file in your Django project directory:
```yaml
version: '3'

services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=djangodb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    container_name: django_web
    environment:
      - DATABASE_URL
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
```
We have two “services”, `db` and `web`.

The `db` service runs the Postgres process inside a container based on the official `postgres` image. Note that `POSTGRES_DB`, `POSTGRES_USER` and `POSTGRES_PASSWORD` are hardcoded in the `docker-compose.yml`. You can however configure them to use environment variables, e.g. via an `.envrc` file. This IMHO is not worth the effort if you’re doing this only for local testing.
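If you did want to avoid hardcoding, a sketch of the alternative would look like the fragment below. It assumes `POSTGRES_DB`, `POSTGRES_USER` and `POSTGRES_PASSWORD` are exported in your shell (e.g. by an `.envrc`) before you run `docker-compose up`, and lets Compose substitute them:

```yaml
# Sketch: credentials come from the host environment instead of
# being hardcoded in the compose file.
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
```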
The `web` service runs the `manage.py runserver` process inside the aptly named `django_web` container.

`build: .` tells Docker Compose to use the `Dockerfile` located in this same directory to build the image for the `web` service. The service then runs “within” the `django_web` container. Docs on `build` here.
`command` runs Django’s `runserver` command and exposes it at the container’s port `8000`.
`container_name` is a custom name you can assign for clarity’s sake. We’ll see its effect when we run things in the next section.
`environment` allows you to reuse environment variables from the host machine (more info on managing environment variables for your Django project here). In this case the `DATABASE_URL` environment variable is being reused.
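For illustration, here is roughly what consuming `DATABASE_URL` amounts to. This is a hand-rolled sketch using only the standard library (in practice a library such as dj-database-url does this for you), and the URL value is just an example matching our compose file:

```python
from urllib.parse import urlparse

def parse_database_url(url):
    # Split a DATABASE_URL into the pieces Django's DATABASES setting wants.
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,  # "db" is the compose service name
        "PORT": parts.port or 5432,
    }

# Matches the db service defined in docker-compose.yml
config = parse_database_url("postgres://postgres:postgres@db:5432/djangodb")
print(config["NAME"], config["HOST"])  # djangodb db
```

Note the hostname: inside the Compose network, the `web` container reaches Postgres at the service name `db`, not `localhost`.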
`volumes` is used to “mount” host paths. The basic usage in our context is to “share” the code on our machine with the `django_web` service container. Docs here.
- Lines 18-19 map the host machine’s port `8000` to the container’s port `8000`.
- Lines 20-21 declare the `web` container’s dependency on the `db` service.
Enough explanation! Let’s run things!
Make it run on your local Docker container
Make sure your Docker Desktop is running.
Run `docker ps` to list containers. Assuming you haven’t started any other containers, you shouldn’t see any. Your output should look like this:
```
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```
Run `docker-compose up` to start the two containers as per your `docker-compose.yml`.
The terminal output should end with the usual output of the Django
```
web_1  | Watching for file changes with StatReloader
web_1  | Performing system checks...
web_1  |
web_1  | System check identified no issues (0 silenced).
web_1  | June 06, 2020 - 10:24:43
web_1  | Django version 3.0.6, using settings 'djangotest.settings'
web_1  | Starting development server at http://0.0.0.0:8000/
web_1  | Quit the server with CONTROL-C.
```
Running `docker ps` now should show two containers (scroll to the right to see full output):
```
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
91f53455ce25        django_web          "python manage.py ru…"   59 seconds ago      Up 58 seconds       0.0.0.0:8000->8000/tcp   django_web
45b3091c0c62        postgres            "docker-entrypoint.s…"   59 seconds ago      Up 58 seconds       5432/tcp                 djangotest_db_1
```
Note that the `NAMES` output above depends on the name of the project’s directory. For example, since the `db` service container does not have a custom name, the resulting container name is `djangotest_db_1` because the project directory is `djangotest`.
In another terminal window/tab, log onto the `db` docker container:
```
docker-compose exec db sh
```
Once logged in, open `psql` as user `postgres`:
```
su - postgres -c psql
```
And create the database:
```
CREATE DATABASE djangodb OWNER postgres;
```

(If the `POSTGRES_DB` variable in your `docker-compose.yml` already created `djangodb` at first startup, `psql` will report that the database exists and you can skip this step.)
You can then exit `psql` and the `db` docker container.
Enter the `web` container:

```
docker-compose exec web sh
```
Once logged in, run the below to apply database migrations and create a superuser:

```
./manage.py migrate
./manage.py createsuperuser
```
Refresh the site at `http://localhost:8000/admin/` and log in with the superuser you just created.
You can now exit the `web` container. To stop the `runserver` process, hit CONTROL-C as you would with a local Django `runserver`.
After doing so, running `docker ps` should only show the `db` service running.
Update the code and put a breakpoint in one of your views. I used the IPython debugger ipdb.
After changing the code, run the below to rebuild the `web` container, including dependencies:
```
docker-compose up -d --no-deps --build web
```
Let’s decompose the above `docker-compose up` command:

- `-d` (short for `--detach`) means “detached mode”: run containers in the background
- `--no-deps` tells `docker-compose up` to not start linked services
- `--build` tells `docker-compose up` to build any required images before starting containers
- `web` is the service for which I’m running the command
To debug with the breakpoint in place, run the service with `docker-compose run` (docs here). The `--service-ports` flag makes the service `web` expose the necessary ports to be able to debug:
```
docker-compose run --service-ports web
```
If you run `docker ps` you should see that your `web` service is back up and running.
Access the URL that triggers your breakpoint; execution will stop there. Since I’ve put my breakpoint at the home page, I see the terminal output below:
```
System check identified no issues (0 silenced).
June 06, 2020 - 10:51:15
Django version 3.0.6, using settings 'djangotest.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
> /code/items/views.py(15)get_context_data()
     13         context = super().get_context_data(**kwargs)
     14         import ipdb; ipdb.set_trace()
---> 15         return context

ipdb> self.request
<WSGIRequest: GET '/'>
```
Note that `/code/items/views.py` is the location of the module on the container, not on your local dev box.
Which means… that’s it, you’re debugging code running in your Docker service!
Update after reading “Django for Professionals”
To keep my Django knowledge “current” I read books from time to time. I got to know author Will Vincent from the DjangoChat podcast.

Being experienced in Django, I opted for Django for Professionals.
It is refreshing that the book starts directly with Docker. Chapter 1 goes into detail about what Docker is, and how to have your Django application run on it from the “Hello World” get-go.
I learnt some things I didn’t know about Docker, even after having written this post. For example:

```
ENV PYTHONDONTWRITEBYTECODE 1
```

prevents Python from writing `.pyc` bytecode files.
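A quick sketch of what this buys you (plain Python, no Docker involved): importing a module normally creates a `__pycache__` directory with compiled `.pyc` files, and setting the variable suppresses that.

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # A throwaway module to import.
    with open(os.path.join(tmp, "mymod.py"), "w") as f:
        f.write("x = 1\n")
    # Import it in a child interpreter with the variable set.
    env = {**os.environ, "PYTHONDONTWRITEBYTECODE": "1"}
    subprocess.run([sys.executable, "-c", "import mymod"],
                   cwd=tmp, env=env, check=True)
    # Check whether a __pycache__ directory was written.
    pyc_written = os.path.exists(os.path.join(tmp, "__pycache__"))

print(pyc_written)  # False
```

In a container the bytecode cache is just noise that bloats the image layers, so skipping it is a reasonable default.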
The only caveat is that I do not use pipenv. I had asked on reddit about having pipenv handle production-only requirements. The response at the time was underwhelming. I like having finer-grained control over which packages run locally in my native dev environment, and which packages run elsewhere, i.e. staging/prod.
But this will change. Why? In time I want to transition to running things locally in Docker as well. I’m just not there yet. YMMV.
Django for Professionals uses `pipenv`. Its deterministic build feature is indeed attractive:
The benefit of a lock file is that this leads to a deterministic build: no matter how many times you install the software packages, you’ll have the same result. Without a lock file that “locks down” the dependencies and their order, this is not necessarily the case. Which means that two team members who install the same list of software packages might have slightly different build installations.
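The plain-pip equivalent of this idea, for what it’s worth, is pinning exact versions in your requirements file (`pip freeze > requirements.txt` produces one from your current environment). The package versions below are only examples:

```
# requirements.txt -- exact pins make installs reproducible
Django==3.0.6
psycopg2-binary==2.8.5
```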
But that shouldn’t stop you from translating this to whatever package management tool you’re using.
Another lesson learnt: the author recommends putting the parts of your files that change frequently last:

That way we only have to regenerate that part of the image when a change happens, not reinstall everything each time there is a change.
One gripe I have is the book’s defaulting to the `psycopg2-binary` package. I would like to have `psycopg2` in production, because according to this:
The psycopg2-binary package is meant for beginners to start playing with Python and PostgreSQL without the need to meet the build requirements.
If you are the maintainer of a published package depending on psycopg2 you shouldn’t use psycopg2-binary as a module dependency. For production use you are advised to use the source distribution.
I’m not sure whether this applies to all applications or only to package publishers. I reached out to the author about this. Will update as soon as I get a reply.
Update 2020-01-08: Will Vincent’s reply on this:

Re `psycopg2-binary`, that’s more a quirk of Pipenv. It’s something in flux. I think with just pip, not using binary is fine.

Therefore, if easy in your situation, just go with plain `psycopg2`.
This tutorial is aimed at getting you started and showing you the basics. I provided pointers along the way if you want to dive deeper.
It doesn’t intend to show you all that can be done with Docker. Far from it.
The “tech ops” landscape is continuously changing. And I’m confident that many commands (or their arguments) will become outdated in no time.