Pipework

Software-Defined Networking for Linux Containers
Pipework lets you connect together containers in arbitrarily complex scenarios.
Pipework uses cgroups and namespaces and works with “plain” LXC containers
(created with lxc-start), and with the awesome Docker.


Things to note

vCenter / vSphere / ESX / ESXi

If you use vCenter / vSphere / ESX / ESXi, set or ask your administrator
to set Network Security Policies of the vSwitch as below:

  • Promiscuous mode: Accept
  • MAC address changes: Accept
  • Forged transmits: Accept

After starting the guest OS and creating a bridge, you might also need to
fine-tune the br1 interface as follows:

  • brctl stp br1 off (to disable the STP protocol and prevent the switch
    from disabling ports)
  • brctl setfd br1 2 (to reduce the time taken by the br1 interface to go
    from blocking to forwarding state)
  • brctl setmaxage br1 0

Virtualbox

If you use VirtualBox, you will have to update your VM network settings.
Open the settings panel for the VM, go to the “Network” tab, pull down the
“Advanced” settings. Here, the “Adapter Type” should be pcnet (the full
name is something like “PCnet-FAST III”), instead of the default e1000
(Intel PRO/1000). Also, “Promiscuous Mode” should be set to “Allow All”.
If you don’t do that, bridged containers won’t work, because the virtual
NIC will filter out all packets with a different MAC address. If you are
running VirtualBox in headless mode, the command line equivalent of the above
is modifyvm --nicpromisc1 allow-all. If you are using Vagrant, you can add
the following to the config for the same effect:

config.vm.provider "virtualbox" do |v|
  v.customize ['modifyvm', :id, '--nictype1', 'Am79C973']
  v.customize ['modifyvm', :id, '--nicpromisc1', 'allow-all']
end

Note: it looks like some operating systems (e.g. CentOS 7) do not support
pcnet anymore. You might want to use the virtio-net (Paravirtualized
Network) interface with those.
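If you go the virtio route, the headless equivalent might look like this (a sketch; the VM name is a placeholder, and virtio is one of VBoxManage's documented --nictype values):

```shell
# Switch NIC 1 to virtio-net and allow promiscuous mode.
# "myvm" is a placeholder for your actual VM name or UUID.
VBoxManage modifyvm "myvm" --nictype1 virtio --nicpromisc1 allow-all
```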

Docker

Before using Pipework, please ask on the docker-user mailing list if there is a “native”
way to achieve what you want to do without Pipework.

In the long run, Docker will allow complex scenarios, and Pipework should
become obsolete.
If there is really no other way to plumb your containers together with
the current version of Docker, then okay, let’s see how we can help you!
The following examples show what Pipework can do for you and your containers.

LAMP stack with a private network between the MySQL and Apache containers

Let’s create two containers, running the web tier and the database tier:

APACHE=$(docker run -d apache /usr/sbin/httpd -D FOREGROUND)
MYSQL=$(docker run -d mysql /usr/sbin/mysqld_safe)

Now, bring superpowers to the web tier:

pipework br1 $APACHE 192.168.1.1/24

This will:

  • create a bridge named br1 in the docker host;
  • add an interface named eth1 to the $APACHE container;
  • assign IP address 192.168.1.1 to this interface,
  • connect said interface to br1.

Now (drum roll), let’s do this:

pipework br1 $MYSQL 192.168.1.2/24

This will:

  • not create a bridge named br1, since it already exists;
  • add an interface named eth1 to the $MYSQL container;
  • assign IP address 192.168.1.2 to this interface,
  • connect said interface to br1.

Now, both containers can ping each other on the 192.168.1.0/24 subnet.
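To check the connectivity, a quick sketch (assuming your Docker version has docker exec; on older setups you would use lxc-attach or nsenter instead):

```shell
# Ping from the web tier to the database tier, and back.
docker exec "$APACHE" ping -c 3 192.168.1.2
docker exec "$MYSQL"  ping -c 3 192.168.1.1
```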

Docker integration

Pipework can resolve Docker container names. If the container ID that
you gave to Pipework cannot be found, Pipework will try to resolve it
with docker inspect. This makes it even simpler to use:

docker run -name web1 -d apache
pipework br1 web1 192.168.12.23/24
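Roughly speaking, the name resolution boils down to asking Docker for the container's PID, which identifies its network namespace; a sketch (the exact inspect format can vary across Docker versions):

```shell
# Resolve a container name to the PID of its init process;
# that PID identifies the container's network namespace.
NSPID=$(docker inspect --format '{{ .State.Pid }}' web1)
echo "$NSPID"
```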

Peeking inside the private network

Want to connect to those containers using their private addresses? Easy:

ip addr add 192.168.1.254/24 dev br1

Voilà!

Setting container internal interface

By default, Pipework creates a new interface named eth1 inside the container. If you want a different interface name (for example eth2, so that Pipework can set up more than one interface), use the -i option:
pipework br1 -i eth2 ...
Note: for InfiniBand IPoIB interfaces, the default interface name is ib0, not eth1.
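Putting the two together, giving one container two Pipework-managed interfaces might look like this (bridge names and addresses are illustrative):

```shell
# First interface gets the default name, eth1.
pipework br1 $CONTAINERID 192.168.1.2/24
# Second interface on another bridge, explicitly named eth2.
pipework br2 -i eth2 $CONTAINERID 192.168.2.2/24
```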

Setting host interface name

By default, Pipework creates the host-side interface with a fixed prefix and a random suffix. If you would like to specify this interface name, use the -l option (for local):
pipework br1 -i eth2 -l hostapp1 ...

Using a different netmask

The IP addresses given to Pipework are passed directly to the ip addr
tool, so you can append a subnet size using traditional CIDR notation.
For example:

pipework br1 $CONTAINERID 192.168.4.25/20

Don’t forget that all containers should use the same subnet size;
Pipework is not clever enough to remember the subnet size you specified
for the first container and reuse it for the others.

Setting a default gateway

If you want outbound traffic (i.e., when the container connects
to the outside world) to go through the interface managed by
Pipework, you need to change the container’s default route.
This can be useful in some use cases, such as traffic shaping, or if
you want the container to use a specific outbound IP address.
Pipework can automate this if you add the gateway address
after the IP address and subnet mask:

pipework br1 $CONTAINERID 192.168.4.25/20@192.168.4.1
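The address/mask@gateway string is straightforward to take apart with plain shell parameter expansion; this sketch shows how the pieces separate (the variable names are illustrative, not Pipework's internals):

```shell
spec="192.168.4.25/20@192.168.4.1"
addr="${spec%%@*}"   # address and subnet size: 192.168.4.25/20
gw="${spec#*@}"      # gateway address:         192.168.4.1
echo "$addr" "$gw"
```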

Connect a container to a local physical interface

Let’s pretend that you want to run two Hipache instances, listening on real
interfaces eth2 and eth3, using specific (public) IP addresses. Easy!

pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5/24

Note that this will use macvlan subinterfaces, so you can actually put
multiple containers on the same physical interface. If you don’t want to
virtualize the interface, you can use the --direct-phys option to namespace
an interface exclusively to a container without using a macvlan bridge.

pipework --direct-phys eth1 $CONTAINERID 192.168.1.2/24

This is useful for assigning SR-IOV VFs to containers, but be aware of added
latency when using the NIC to switch packets between containers on the same host.
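For the macvlan (non --direct-phys) case, the attachment boils down to creating a macvlan subinterface with iproute2 and moving it into the container's network namespace, along these lines (a sketch requiring root and a real eth2; mvlan0 and $NSPID are placeholders):

```shell
# Create a macvlan subinterface on top of the physical eth2.
ip link add link eth2 name mvlan0 type macvlan mode bridge
# Move it into the container's network namespace (by PID).
ip link set mvlan0 netns $NSPID
```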

Use MAC address to specify physical interface

If you connect a local physical interface and give it a specific name inside
the container, Pipework will also rename the physical interface; this
behaviour is not idempotent:

pipework --direct-phys eth1 -i container0 $CONTAINERID 0/0
# second call would fail because physical interface eth1 has been renamed

Instead, we can use the interface MAC address, which identifies the
interface the same way every time (udev networking rules use a similar
method for persistent interface naming):

pipework --direct-phys mac:00:f3:15:4a:42:c8 -i container0 $CONTAINERID 0/0
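Resolving a MAC address to an interface name can be done by scanning ip -o link output; this sketch parses a hardcoded sample line so the mechanics are visible (on a live system you would pipe in the real ip -o link output instead):

```shell
mac="00:f3:15:4a:42:c8"
# Hardcoded sample of one "ip -o link" output line, for illustration.
sample='2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 link/ether 00:f3:15:4a:42:c8 brd ff:ff:ff:ff:ff:ff'
# Field 2 is the interface name with a trailing colon; strip it.
iface=$(printf '%s\n' "$sample" | awk -v m="$mac" '$0 ~ m { sub(":$", "", $2); print $2 }')
echo "$iface"   # eth1
```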

Let the Docker host communicate over macvlan interfaces

If you use macvlan interfaces as shown in the previous paragraph, you
will notice that the host will not be able to reach the containers over
their macvlan interfaces. This is because traffic going in and out of
macvlan interfaces is segregated from the “root” interface.
If you want to enable that kind of communication, no problem: just
create a macvlan interface on your host, and move the IP address from
the “normal” interface to the macvlan interface.
For instance, on a machine…