Basic networking
Let’s take a quick look into the networking aspect of Docker containers.
Every installation of the Docker engine creates three default networks. To see them
$ docker network ls
NETWORK ID NAME DRIVER
38ac3769a99d bridge bridge
13d0ffe33e54 none null
32769c70ad64 host host
In this section we will work with the default bridge network and bridge networks in general. For more information about what these networks are, see the Docker networking documentation.
When a container is created, it is automatically added to the default bridge network, unless specified otherwise.
Let’s create a simple container, based on the ubuntu image and name it networktest
$ docker run -itd --name=networktest ubuntu
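If you want the container on a different network from the very start, `docker run` accepts a `--network` flag. A small sketch, using the built-in none network so the container starts with no external connectivity:

```shell
# Attach the container to a specific network at creation time
# instead of the default bridge (--network, formerly --net):
docker run -itd --network=none --name=isolated ubuntu
```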
To inspect the container details
$ docker inspect networktest
...
"NetworkSettings": {
"Bridge": "",
...
"IPAddress": "172.17.0.4",
We can see this from the network side as well
$ docker network inspect bridge
...
"Containers": {
...
"Name": "networktest",
...
"IPv4Address": "172.17.0.4/16",
In order to disconnect the container from the network
$ docker network disconnect bridge networktest
If you now inspect either the network or the container, you will see that the container is no longer connected to bridge and its IP address is gone.
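The disconnect is reversible: `docker network connect` reattaches the container, which then receives an address on that network again.

```shell
# Reconnect the container to the default bridge network
docker network connect bridge networktest
```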
We can also create our own networks. Let’s create a bridge network
$ docker network create -d bridge my_bridge_network
8248621a64e66316597427fbd901282b698631e6abf4a1b2109d4bd1b2b1baf7
The big hexadecimal number is simply the ID of the new network; we do not need it for now. List your networks and you will see your new creation
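`docker network create` also accepts options that control addressing, so you can choose the subnet and gateway yourself rather than letting Docker pick them. A sketch; the 172.25.0.0/16 range and the network name are just illustrations:

```shell
# Create a bridge network with an explicit subnet and gateway
docker network create -d bridge \
  --subnet 172.25.0.0/16 \
  --gateway 172.25.0.1 \
  my_custom_subnet
```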
$ docker network ls
NETWORK ID NAME DRIVER
32769c70ad64 host host
38ac3769a99d bridge bridge
8248621a64e6 my_bridge_network bridge
13d0ffe33e54 none null
To connect our container to the new network
$ docker network connect my_bridge_network networktest
$ docker network inspect my_bridge_network
[
{
"Name": "my_bridge_network",
"Id": "8248621a64e66316597427fbd901282b698631e6abf4a1b2109d4bd1b2b1baf7",
...
"Containers": {
"48ed58f9d6de012c3573f7a2431a82637926674da0c61e30c5dc445d7a67b36f": {
"Name": "networktest",
...
"IPv4Address": "172.18.0.2/16",
Our container is now connected to my_bridge_network and has the IP address 172.18.0.2. This time the address is on a different subnet, which makes sense because this is a different network from the default bridge.
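On user-defined networks you can also request a specific address when connecting, via the `--ip` flag. The address must lie inside the network's subnet; 172.18.0.10 here is an assumption based on the 172.18.0.0/16 subnet shown above:

```shell
# Attach with a fixed IPv4 address instead of an auto-assigned one
docker network connect --ip 172.18.0.10 my_bridge_network networktest
```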
Name resolution
Docker container networking automatically supports DNS-based name resolution. Let’s give it a spin.
First, pull an image that contains a basic networking toolkit (commands like ifconfig, ping etc.)
$ docker pull ubuntu:14.04
We will now create two containers based on that image
$ docker run -itd --name container1 ubuntu:14.04
$ docker run -itd --name container2 ubuntu:14.04
As we said before, these containers are connected to the default bridge network
$ docker network inspect bridge
...
"Name": "container1",
...
"IPv4Address": "172.17.0.4/16",
...
"Name": "container2",
...
"IPv4Address": "172.17.0.5/16",
Let’s attach to one of them and try to ping the other
$ docker attach container1
root@81ae5ed9ddc0:/# ping 172.17.0.5
PING 172.17.0.5 (172.17.0.5) 56(84) bytes of data.
64 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.220 ms
That worked because they are on the same network. But the idea was to try name resolution, so let’s ping using the container name (note: by name we do not mean the hostname, but simply the name we gave our container)
root@81ae5ed9ddc0:/# ping container2
ping: unknown host container2
Aha, it failed! The explanation is simple: the default bridge network does not support automatic name resolution. We will therefore have to connect the containers to a user-defined network and try again. If you still have my_bridge_network available, you can use it; otherwise create it again as shown above.
$ docker network connect my_bridge_network container1
$ docker network connect my_bridge_network container2
$ docker attach container1
root@81ae5ed9ddc0:/# ping container2
PING container2 (172.18.0.3) 56(84) bytes of data.
64 bytes from container2.my_bridge_network (172.18.0.3): icmp_seq=1 ttl=64 time=0.161 ms
Now the ping passes.
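Behind the scenes, in recent Docker versions this resolution is handled by Docker's embedded DNS server, which containers on user-defined networks use as their resolver. You can see it configured inside the container; 127.0.0.11 is the address Docker reserves for this resolver (the exact file contents vary by setup):

```shell
# Inside the container: the nameserver points at Docker's embedded DNS
cat /etc/resolv.conf
```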
Let’s have a quick look at the network interfaces of container1
root@81ae5ed9ddc0:/# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:ac:11:00:04
inet addr:172.17.0.4 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:24 errors:0 dropped:0 overruns:0 frame:0
TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1819 (1.8 KB) TX bytes:1096 (1.0 KB)
eth1 Link encap:Ethernet HWaddr 02:42:ac:12:00:02
inet addr:172.18.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe12:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:20 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1576 (1.5 KB) TX bytes:928 (928.0 B)
We see two interfaces. The reason is that our container is still connected to the default bridge network as well. If you are wondering whether the last ping went through that network or through my_bridge_network: since the default bridge does not support name resolution, pinging by name could only have worked over my_bridge_network. You can disconnect the containers from the default bridge and the ping will still work.
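To verify this yourself, detach both containers from the default bridge and repeat the ping; when you are done experimenting, the user-defined network can be removed. A sketch:

```shell
# Detach both containers from the default bridge;
# the ping by name still works over my_bridge_network.
docker network disconnect bridge container1
docker network disconnect bridge container2

# Clean up when finished. A network can only be removed
# once no containers are connected to it.
docker network disconnect my_bridge_network container1
docker network disconnect my_bridge_network container2
docker network rm my_bridge_network
```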