So, some time ago I ended up with this ZFS and Docker combination on my home storage. I use ZFS on Linux to safely store the data, and I use Docker to run services like Samba, Plex, ownCloud, and others on top of it.
I've been using devicemapper on top of ZFS and it was mostly fine. Every now and then I hit a strange issue where a layer wasn't available while starting a container (as if Docker tried to spin up the container before the storage was ready for it); simply re-running the container solved it. I've never seen this at work, where we use Docker in production quite a lot, so I assumed the combination of devicemapper and ZFS was the cause. In my case it was transparently handled by Upstart anyway. Besides that, I was quite happy with the whole system.
Recently I was wondering if there had been any progress on the ZFS storage backend, and I was pleasantly surprised to find that it has actually been built in since a couple of versions ago. (How did I miss that? I have no idea.) So let's try it.
From my experience with the migration between aufs and devicemapper, it's easier to start with a clean slate (if that's an option), so I just stopped all running containers, stopped the Docker daemon, created a new blank ZFS filesystem for Docker, and set the --storage-driver=zfs flag before restarting the Docker daemon. (The flag goes in /etc/default/docker on Ubuntu.)
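A rough sketch of those steps follows; the pool name tank is just a placeholder I'm assuming here, so substitute your own, and double-check the mountpoint before moving anything:

```
# Stop everything that uses the current storage driver
docker stop $(docker ps -q)
service docker stop

# Move the old devicemapper data out of the way (keeps a backup)
mv /var/lib/docker /var/lib/docker.devicemapper

# Create a fresh ZFS filesystem mounted where Docker expects its data
# ("tank" is a placeholder pool name)
zfs create -o mountpoint=/var/lib/docker tank/docker

# Tell the daemon to use the ZFS driver (Ubuntu/Upstart style)
echo 'DOCKER_OPTS="--storage-driver=zfs"' >> /etc/default/docker

service docker start
```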
Now I just had to fire off a couple of commands to clone the repositories with my Dockerfiles, build the images, and then start them as usual via Upstart (roughly as sketched below). I might get to describing the configuration in greater detail in some other post. (Or perhaps after I move all this to Ansible.)
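For illustration, the per-service routine looked roughly like this; the repository URL and the image and service names are made up:

```
# Hypothetical example; repository and names are placeholders
git clone https://example.com/me/docker-samba.git
cd docker-samba
docker build -t me/samba .

# Started via Upstart in my setup, e.g.:
start samba
```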
I didn't do much benchmarking, as storage performance isn't that critical for my use case, so I'm really comparing with devicemapper based on general observation. To spin up a container (during a build or a regular container start), Docker creates a ZFS clone of the image in the background. This phase takes a little longer than a devicemapper-based start (around ~1s on my configuration), but nothing to worry about. After that, the storage access speed seems to be similar.
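If you want a feel for that startup overhead yourself, a crude measurement is enough; this just times a throwaway container (any small image you have locally will do):

```
# Rough, unscientific timing of container spin-up
# (includes the zfs clone plus the rest of the start path)
time docker run --rm ubuntu true
```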
One thing that I noticed quite early after the switchover is that you end up with a lot of filesystems (~100 with 4 containers currently running) under your base Docker filesystem, and they all show up in zfs list. Not a biggie, but it clutters the storage management a bit. But hey, I wanted the integration, so here I have it: you can also use this to see the sizes of the individual "layers", so it's easy to spot which one might be filling up your precious disk space, or which one you could perhaps optimize a bit in your Dockerfile.
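Something like the following makes the clutter a bit more useful; again, tank/docker is my assumed dataset name:

```
# List the per-layer datasets sorted by space used,
# largest at the bottom ("tank/docker" is a placeholder)
zfs list -r -o name,used,refer -s used tank/docker
```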
So here we are. I don't see this being used a lot in production yet, simply because ZFS and Docker rarely meet on the same machine in a production environment. We'll see how that might change with Canonical pushing for ZFS in its own distribution. In combination with zfs send and other functionality, there are some interesting possibilities there. For a home storage configuration like the one I have here, it's a very nice combination. The migration is painless, and once you're done there are no practical differences in terms of usage that I could spot.