…Docker and devicemapper's thinpool in RHEL 7

I’ve been working with Docker this week for an OpenShift v3 demo, and I’ve been struggling with storage for Docker, so here are my notes, just in case anyone needs them, or I need them again.
In RHEL 7, Docker is recommended to use the devicemapper storage driver with thin provisioning. I was setting up some Vagrant boxes for my environment, and I was running into issues with image pulls never finishing, or errors while writing into the Docker storage. It turned out that my VM was created with a very small amount of disk space for Docker, so it could not run properly. This is how I diagnosed the problem and how I fixed it.
Kudos to Nick Strugnell for helping me out.

Diagnose

Once a docker pull got stuck, I needed to know what the problem was, so the first thing was to inspect LVM and see the configuration.
LVM is the Logical Volume Manager and has three concepts:
  • PV (Physical Volume): This is the classic HDD (a physical disk or partition)
  • VG (Volume Group): This can span multiple Physical Volumes
  • LV (Logical Volume): These are the volumes directly usable by the applications.
For every type, there is an easy-to-understand set of commands that help us:
  • pvs (Physical Volume Summary), pvcreate (create a Physical Volume), pvchange, pvck, pvdisplay, pvmove, pvremove, pvresize, pvscan
  • vgs (Volume Group summary), vgcfgbackup, vgchange, vgconvert, vgdisplay, vgextend, vgimportclone, vgmknodes, vgremove, vgsplit, vgcfgrestore, vgck, vgcreate, vgexport, vgimport, vgmerge, vgreduce, vgrename, vgscan
  • lvs (Logical Volume Summary), lvchange, lvcreate, lvextend, lvremove, lvresize, lvscan, lvconvert, lvdisplay, lvreduce, lvrename
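To put these commands in context, this is roughly the chain you would follow to end up with a thin pool like the docker-pool that shows up below; it is only an illustrative sketch (the device and names are made up), not something I ran on this VM:
# sketch only: turn a disk into a PV, group it into a VG, and carve a thin pool out of it
pvcreate /dev/sdX
vgcreate example-vg /dev/sdX
lvcreate -L 4G -T example-vg/example-pool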
Here is the summary for my VM:
[root@ose3-helper ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/vda3 VolGroup00 lvm2 a-- 9.78g 0

[root@ose3-helper ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 3 0 wz--n- 9.78g 0

[root@ose3-helper ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao---- 7.97g
LogVol01 VolGroup00 -wi-ao---- 1.50g
docker-pool VolGroup00 twi-aot-M- 256.00m 100.00 0.22
It looks like my docker-pool is very small and completely full. So here is the problem.
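Another quick way to spot this, assuming docker is running with the devicemapper driver as in my case, is docker info, which reports the data and metadata space of the thin pool:
# thin pool usage as reported by docker itself
docker info | grep -i 'space'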

Why is it using a docker-pool LV?

In RHEL 7 docker is configured to run with devicemapper, as seen here:
[root@ose3-helper ~]# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS=-s devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/VolGroup00-docker--pool

How can I configure devicemapper for docker?

In order to use dm.thinpooldev you must have an LVM thinpool available; the docker-storage-setup package will assist you in configuring LVM. However, you must provision your host to fit one of these three scenarios:
  • Root filesystem on LVM with free space remaining on the volume group. Run docker-storage-setup with no additional configuration; it will allocate the remaining space for the thinpool.
  • A dedicated LVM volume group where you’d like to create your thinpool
cat <<EOF > /etc/sysconfig/docker-storage-setup
VG=docker-vg
SETUP_LVM_THIN_POOL=yes
EOF
docker-storage-setup
  • A dedicated block device, which will be used to create a volume group and thinpool
cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
SETUP_LVM_THIN_POOL=yes
EOF
docker-storage-setup
Once complete, you should have a thinpool named docker-pool, and docker should be configured to use it in /etc/sysconfig/docker-storage.
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
docker-pool docker-vg twi-a-tz-- 48.95g 0.00 0.44

# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker--vg-docker--pool
If you had previously used docker with loopback storage, you should clean out /var/lib/docker. This is a destructive operation and will delete all images and containers on the host.
systemctl stop docker
rm -rf /var/lib/docker/*
systemctl start docker
This topic is completely taken from Erik Jacobs’ OSEv3 training, so kudos to him.

Solution

As I didn’t have enough free space in my VG, and I couldn’t unmount LogVol00 to reduce its size, what I did was:
  • Add a second drive to the KVM VM (with virt-manager, although virsh should work the same)
  • Add the PV
  • Extend the VG to consume the newly added PV
  • Two options:
    • Resize the docker LV (easier)
    • Delete the docker LV and recreate it.
      • Stop docker
      • Delete /var/lib/docker/*
      • Delete the docker LV
      • Rerun docker-storage-setup to recreate the docker LV with all the added space
      • Start docker

Add a second drive to the KVM VM

With virt-manager, just select the VM where you want to add the drive, open the VM’s console window, press the configuration button (the light bulb), and click on “Add Hardware”. Select the size, and VirtIO as the bus.
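If you prefer the command line, something along these lines should also work with virsh; the image path, size and domain name are just placeholders for my setup, so adjust them to yours:
# create a raw disk image and attach it to the running VM as vdb (example values)
qemu-img create -f raw /var/lib/libvirt/images/ose3-helper-docker.img 8G
virsh attach-disk ose3-helper /var/lib/libvirt/images/ose3-helper-docker.img vdb --driver qemu --subdriver raw --targetbus virtio --persistent --live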

Add the PV

To see the name of the new disk, you can cat /proc/partitions:
[root@ose3-helper ~]# cat /proc/partitions
major minor #blocks name

252 0 11534336 vda
252 1 1024 vda1
252 2 204800 vda2
252 3 10278912 vda3
253 0 1572864 dm-0
253 1 8355840 dm-1
253 2 32768 dm-2
253 3 262144 dm-3
253 4 262144 dm-4
253 5 10485760 dm-5
252 16 8388608 vdb
We can see that the disk I just added is vdb, so I will create a PV for it with pvcreate:
[root@ose3-helper ~]# pvcreate /dev/vdb
Physical volume "/dev/vdb" successfully created
And list it with pvs:
[root@ose3-helper ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/vda3 VolGroup00 lvm2 a-- 9.78g 0
/dev/vdb lvm2 --- 8.00g 8.00g

Extend the VG to consume the newly added PV

Now I need to make the VG span this PV, so I will use vgextend (listing before and after to see the changes):
[root@ose3-helper ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 3 0 wz--n- 9.78g 0

[root@ose3-helper ~]# vgextend VolGroup00 /dev/vdb
Volume group "VolGroup00" successfully extended

[root@ose3-helper ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 2 3 0 wz--n- 17.75g 7.97g
Now I can see the 8 GB that I added as free space.

Resize the docker LV

If you prefer just to extend the volume, this is the command:
[root@ose3-helper ~]# lvextend -l 100%FREE /dev/VolGroup00/docker-pool
Size of logical volume VolGroup00/docker-pool_tdata changed from 480.00 MiB (15 extents) to 7.75 GiB (248 extents).
Logical volume docker-pool successfully resized

[root@ose3-helper ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao---- 7.97g
LogVol01 VolGroup00 -wi-ao---- 1.50g
docker-pool VolGroup00 twi-a-t--- 7.75g 3.23 0.22
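One caveat worth adding: extending the data part of a thin pool does not grow its metadata LV, so if the pool grows a lot it may be worth extending the metadata as well. Something like the following should do it (the 64 MiB value is just an arbitrary example):
# grow the thin pool metadata LV a bit as well (example size)
lvextend --poolmetadatasize +64M VolGroup00/docker-pool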

Delete the docker LV and recreate it

As I need to remove the docker LV so it can be recreated through the docker-storage-setup script, I first need to stop docker and remove what was there:
[root@ose3-helper ~]# systemctl stop docker

[root@ose3-helper ~]# rm -rf /var/lib/docker/*
Now I will remove the docker LV so I can recreate it from scratch:
[root@ose3-helper ~]# lvremove VolGroup00/docker-pool
Do you really want to remove active logical volume docker-pool? [y/n]: y
Logical volume "docker-pool" successfully removed
And now I will recreate it with the script:
[root@ose3-helper ~]# cat <<EOF > /etc/sysconfig/docker-storage-setup
> SETUP_LVM_THIN_POOL=yes
> EOF

[root@ose3-helper ~]# docker-storage-setup
Rounding up size to full physical extent 32.00 MiB
Logical volume "docker-poolmeta" created.
Logical volume "docker-pool" created.
WARNING: Converting logical volume VolGroup00/docker-pool and VolGroup00/docker-poolmeta to pool's data and metadata volumes.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Converted VolGroup00/docker-pool to thin pool.
Logical volume "docker-pool" changed.

[root@ose3-helper ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao---- 7.97g
LogVol01 VolGroup00 -wi-ao---- 1.50g
docker-pool VolGroup00 twi-a-t--- 4.94g 0.00 0.11
It seems that this second option, as it uses thin provisioning, doesn’t assign all of the available space to the docker-pool.
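If I remember correctly, docker-storage-setup also honours a DATA_SIZE variable in /etc/sysconfig/docker-storage-setup that controls how much of the free space in the VG the pool takes, so something like the following should hand it everything that is left; treat this as an assumption and check the docker-storage-setup documentation for your version:
cat <<EOF > /etc/sysconfig/docker-storage-setup
SETUP_LVM_THIN_POOL=yes
DATA_SIZE=100%FREE
EOF
docker-storage-setup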

…How to set up JBoss EAP on RHEL 7 (and other systemd-based Linuxes)

JBoss EAP (or WildFly) has an init.d script that does not play well with systemd-based startup. Configuring it is as easy as following these simple steps.
1- Create a group and user for the JBoss EAP process (username, uid, gid, and home to your preferences)
groupadd -r jboss -g 1000
useradd -u 1000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss
2- Set the ownership of the home folder for the user
chown -R jboss:jboss /opt/jboss
3- Create the configuration directory for the JBoss EAP instance, create the configuration file for the EAP instance (with your own values, of course), and then set appropriate permissions on the log and run folders.
mkdir /etc/jboss-as

cat > /etc/jboss-as/jboss-as.conf <<EOF
JBOSS_USER=jboss
STARTUP_WAIT=30
SHUTDOWN_WAIT=30
JBOSS_CONSOLE_LOG=/var/log/jboss-as/console.log
JBOSS_HOME=/usr/share/jboss-as/jboss-eap-6.X
EOF

mkdir /var/log/jboss-as
mkdir /var/run/jboss-as
chown -R jboss:jboss /var/log/jboss-as
chown -R jboss:jboss /var/run/jboss-as
4- Create the service file
cat > /etc/systemd/system/jboss-as-standalone.service <<EOF
[Unit]
Description=JBoss Application Server
After=syslog.target network.target

[Service]
Type=forking
ExecStart=/usr/share/jboss-as/bin/init.d/jboss-as-standalone.sh start
ExecStop=/usr/share/jboss-as/bin/init.d/jboss-as-standalone.sh stop

[Install]
WantedBy=multi-user.target
EOF
5- Reload the systemd daemon, start the service, verify its status, and enable the service
systemctl daemon-reload
systemctl start jboss-as-standalone.service
systemctl status jboss-as-standalone.service
systemctl enable jboss-as-standalone.service
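If the service does not come up cleanly, these are the two places I would look first; journalctl is standard systemd, and the console log path is the one configured above in /etc/jboss-as/jboss-as.conf:
# systemd's view of the service, and the EAP console log
journalctl -u jboss-as-standalone.service
tail -f /var/log/jboss-as/console.log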
6- Additionally, if you need to create firewalld rules for EAP, do:
cat > /etc/firewalld/services/jboss-as-standalone.xml <<EOF
<?xml version="1.0" encoding="utf-8"?>
<service version="1.0">
  <short>jboss-as-standalone</short>
  <port port="8080" protocol="tcp"/>
  <port port="8443" protocol="tcp"/>
  <port port="8009" protocol="tcp"/>
  <port port="4447" protocol="tcp"/>
  <port port="9990" protocol="tcp"/>
  <port port="9999" protocol="tcp"/>
</service>
EOF

firewall-cmd --zone=public --add-service=jboss-as-standalone
firewall-cmd --permanent --zone=public --add-service=jboss-as-standalone
firewall-cmd --zone=public --list-services
firewall-cmd --permanent --zone=public --list-services
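One last note: firewalld only picks up new service definitions from /etc/firewalld/services when it reloads, so if the --add-service calls above complain about an unknown service, reloading first should fix it:
firewall-cmd --reload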