As you probably know, Docker is a popular way to deploy and develop software. If you use it for the second purpose, it's also insanely easy to inadvertently make your services available to more people than just yourself, and surprisingly hard to make sure they aren't.
(There's nothing really new in this post, it's just a quick rant and summary.)
The first day
So, you install Docker on your development machine and you run something like this:
$ docker run -p 80:80 myproject
And now everyone on your network (or even the entire Internet) can access your project by connecting to your machine on port 80, because Docker's default binding address is 0.0.0.0 (listen on all interfaces).
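A quick way to see this for yourself (the output below is abbreviated and illustrative) is to check what the container publishes and what's actually listening on the host:
$ docker ps --format '{{.Names}}: {{.Ports}}'
myproject: 0.0.0.0:80->80/tcp
$ sudo ss -ltnp | grep ':80'
LISTEN 0 4096 0.0.0.0:80 0.0.0.0:* users:(("docker-proxy",...))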
I can understand this, because you may want to use Docker on an actual server. But this is your development machine, so you only want your service accessible on loopback. So you decide to solve the problem for good, and set the ip option (the "default IP when binding container ports") in the /etc/docker/daemon.json configuration file to 127.0.0.1, and call it a day, right?
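For reference, that configuration looks something like this (don't forget to restart the daemon afterwards):
$ cat /etc/docker/daemon.json
{
    "ip": "127.0.0.1"
}
$ sudo systemctl restart docker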
The next day
With time your project grows, and you switch to docker-compose, so you run something like this:
$ cat docker-compose.yml
version: '3.3'
services:
  myproject:
    ports:
      - '80:80'
    image: myproject
$ docker-compose up
And now your project is again accessible to everyone on your network. This is docker-compose issue #2999. The problem is that the ip option you configured before only applies to the default network, and docker-compose generally creates a new network for each project, which again binds to 0.0.0.0. And as far as I know, there's no way to configure the default binding address for new networks.
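The closest per-project workaround I'm aware of is the bridge driver option com.docker.network.bridge.host_binding_ipv4, which you can set on the network in each docker-compose.yml; but you'd have to remember to add it to every single project, which defeats the point of a global default:
networks:
  default:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: '127.0.0.1'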
Use a firewall!
So you decide to stop trying to configure Docker and use a firewall instead. You install ufw, which is easy to set up and configure. Problem solved, right?
Well, it turns out that Docker by default manipulates iptables in a way that bypasses UFW's rules, so you are going to have to apply some extra configuration to make the two work together.
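Docker's documentation does suggest adding your own rules to the DOCKER-USER chain, which Docker leaves alone. One commonly used rule (eth0 being a placeholder for your external interface) drops new inbound connections to containers while still letting established and container-initiated traffic through:
$ sudo iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate NEW -j DROP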
Non-solutions
- Always put the loopback IP in your docker commands and docker-compose.yml files: this is too error-prone, and you are eventually going to make a human mistake and expose your services to the outside world. It's also painful to have to change every port binding between development and production machines. (See the sketch after this list for what it looks like.)
- Don't you have a home router that also acts as a firewall? Yes, but it doesn't help if you have guests, or if some other machine on your local network is compromised, or if there's some bug that allows the firewall to be bypassed (your random home router is probably not the best firewall on the market). And my philosophy is that you shouldn't rely on a firewall as the first and only security measure.
- Use something else: even with those problems, Docker is often the best option overall. And you don't always get to make that decision.
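For completeness, this is what that first non-solution looks like, both for plain docker and for docker-compose; it works, as long as you never forget it anywhere:
$ docker run -p 127.0.0.1:80:80 myproject
$ cat docker-compose.yml
version: '3.3'
services:
  myproject:
    ports:
      - '127.0.0.1:80:80'
    image: myproject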
Actual solutions
As far as I know, if you don't want to rely on some workaround, your best bet is to use firewalld (which Docker integrates with), a hardware/external firewall, or an isolated virtual machine.
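If you go the firewalld route, recent Docker versions (20.10+) put their interfaces in a dedicated docker zone, so you can at least inspect what's going on (illustrative output):
$ firewall-cmd --get-active-zones
docker
  interfaces: docker0
public
  interfaces: eth0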
UPDATE (2021-10-15): Nowadays, in addition to trying to avoid binding to all interfaces as much as I can, my go-to solution for development machines is using Docker in rootless mode, launching the daemon as an unprivileged user instead of root. Then, by design, Docker can no longer bypass the firewall. Be warned, though, that rootless mode can be somewhat painful to set up if you aren't on a rolling/fast-moving Linux distro.
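For reference, the basic setup on a systemd-based distro looks roughly like this (assuming your distro ships dockerd-rootless-setuptool.sh, e.g. via a docker-ce-rootless-extras package):
$ dockerd-rootless-setuptool.sh install
$ systemctl --user enable --now docker
$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
$ docker run -p 8080:80 myproject
Note that publishing privileged ports (below 1024) requires extra configuration in rootless mode, so the port 80 examples above would need tweaking.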
Final note
I am not an expert in Docker networking, and there may be some simpler way to fix this. But even if a simple solution exists, the fact that there are multiple options that at first sight ought to work but don't shows that this is a real problem. How many developers are one (or zero) steps away from exposing their services?