Storage for Containers using Gluster – Part II



Overview

This is a five part series dedicated to container storage. This article is a collaboration between Daniel Messer (Technical Marketing Manager Storage @RedHat) and Keith Tenzer (Solutions Architect @RedHat).
  • Storage for Containers Overview – Part I
  • Storage for Containers using Gluster – Part II
  • Storage for Containers using Ceph – Part III (coming soon)
  • Storage for Containers using NetApp – Part IV (coming soon)
  • Storage for Containers using Container Native Storage – Part V (coming soon)

Gluster as Container-Ready Storage (CRS)

In this article we will look at one of the first options of storage for containers and how to deploy it. Support for GlusterFS has been in Kubernetes and OpenShift for some time. GlusterFS is a good fit because it is available across all deployment options: bare-metal, virtual, on-premise and public cloud. The recent addition of GlusterFS running in a container will be discussed in Part V of this series.

GlusterFS is a distributed filesystem at heart with a native protocol (GlusterFS) and various other protocols (NFS, SMB, …). For integration with OpenShift, nodes will use the native protocol via FUSE to mount GlusterFS volumes on the node itself and then have them bind-mounted into the target containers. OpenShift/Kubernetes has a native provisioner that implements requesting, releasing and (un-)mounting GlusterFS volumes.
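For reference, outside of OpenShift a GlusterFS volume is consumed through the native FUSE client roughly like this (server and volume names here are hypothetical):
# mount -t glusterfs crs-node1.lab:/myvol /mnt/myvol
OpenShift performs an equivalent mount on the node automatically whenever a pod that uses a GlusterFS-backed volume is scheduled there.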

CRS Overview

On the storage side there is an additional component managing the cluster at the request of OpenShift/Kubernetes called “heketi”. This is effectively a REST API for GlusterFS, and it also ships with a CLI. In the following steps we will deploy heketi on one of 3 GlusterFS nodes, use it to deploy a GlusterFS storage pool, connect it to OpenShift and use it to provision storage for containers via PersistentVolumeClaims. In total we will deploy 4 virtual machines: one for OpenShift (lab setup) and three for GlusterFS.

Note: your system should have at least a quad-core CPU, 16GB RAM and 20GB of free disk space.

Deploying OpenShift

First you will need an OpenShift deployment. An all-in-one deployment in a VM is sufficient; instructions can be found in the “OpenShift Enterprise 3.4 all-in-one Lab Environment” article. Make sure your OpenShift VM can resolve external domain names. Edit /etc/dnsmasq.conf and add the following line to use Google DNS:
server=8.8.8.8
Restart dnsmasq and verify external name resolution:
# systemctl restart dnsmasq
# ping -c1 google.com

Deploying Gluster

For GlusterFS at least 3 VMs are required with the following specs:
  • RHEL 7.3
  • 2 CPUs
  • 2 GB RAM
  • 30 GB disk for OS
  • 10 GB disk for GlusterFS bricks
It is necessary to provide local name resolution for the 3 VMs and the OpenShift VM via a common /etc/hosts file. For example (feel free to adjust the domain and host names to your environment):
# cat /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.99.144  ocp-master.lab ocp-master
172.16.128.7   crs-node1.lab crs-node1
172.16.128.8   crs-node2.lab crs-node2
172.16.128.9   crs-node3.lab crs-node3
Execute the following steps on all 3 GlusterFS VMs:
# subscription-manager repos --disable="*"
# subscription-manager repos --enable=rhel-7-server-rpms
If you have a Red Hat Gluster Storage subscription you can use it and enable the rh-gluster-3-for-rhel-7-server-rpms repository (see the note after the GlusterFS install step below). If you don’t, you can use the unsupported GlusterFS community repositories for testing via EPEL:
# yum -y install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm --import http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
Create a file named glusterfs-3.10.repo in /etc/yum.repos.d/:
[glusterfs-3.10]
name=glusterfs-3.10
description="GlusterFS 3.10 Community Version"
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/
gpgcheck=0
enabled=1
Verify the repository is active:
# yum repolist
You should now be able to install GlusterFS:
# yum -y install glusterfs-server
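As an aside, if you do have a Red Hat Gluster Storage subscription, the community repository steps above are not needed; enabling the supported repository would look roughly like this:
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms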
A couple of basic TCP ports need to be opened for GlusterFS peers to communicate and provide storage to OpenShift:
# firewall-cmd --add-port=24007-24008/tcp --add-port=49152-49664/tcp --add-port=2222/tcp
# firewall-cmd --runtime-to-permanent
Now we are ready to start the GlusterFS daemon:
# systemctl enable glusterd
# systemctl start glusterd
That’s it. GlusterFS is up and running. The rest of the configuration will be done via heketi. Install heketi on one of the GlusterFS VMs:
[root@crs-node1 ~]# yum -y install heketi heketi-client

Update for EPEL users

If you don’t have a Red Hat Gluster Storage subscription you will get heketi from EPEL. At the time of writing this is version 3.0.0-1.el7 from October 2016, which does not work with OpenShift 3.4. You will need to update to a more current version:
[root@crs-node1 ~]# yum -y install wget
[root@crs-node1 ~]# wget https://github.com/heketi/heketi/releases/download/v4.0.0/heketi-v4.0.0.linux.amd64.tar.gz
[root@crs-node1 ~]# tar -xzf heketi-v4.0.0.linux.amd64.tar.gz
[root@crs-node1 ~]# systemctl stop heketi
[root@crs-node1 ~]# cp heketi/heketi* /usr/bin/
[root@crs-node1 ~]# chown heketi:heketi /usr/bin/heketi*
Create a file at /etc/systemd/system/heketi.service for the updated syntax of the v4 heketi binary:
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.json
User=heketi
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target
[root@crs-node1 ~]# systemctl daemon-reload
[root@crs-node1 ~]# systemctl start heketi

Heketi will use SSH to configure GlusterFS on all nodes. Create an SSH key pair and copy the public key to all 3 nodes (including the node you are currently logged on to):
[root@crs-node1 ~]# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
[root@crs-node1 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@crs-node1.lab
[root@crs-node1 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@crs-node2.lab
[root@crs-node1 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@crs-node3.lab
[root@crs-node1 ~]# chown heketi:heketi /etc/heketi/heketi_key*
The only thing left is to configure heketi to use SSH. Edit /etc/heketi/heketi.json to look like the example below; the relevant changes are the executor and the sshexec section:
{
   "_port_comment":"Heketi Server Port Number",
   "port":"8080",
   "_use_auth":"Enable JWT authorization. Please enable for deployment",
   "use_auth":false,
   "_jwt":"Private keys for access",
   "jwt":{
      "_admin":"Admin has access to all APIs",
      "admin":{
         "key":"My Secret"
      },
      "_user":"User only has access to /volumes endpoint",
      "user":{
         "key":"My Secret"
      }
   },
   "_glusterfs_comment":"GlusterFS Configuration",
   "glusterfs":{
      "_executor_comment":[
         "Execute plugin. Possible choices: mock, ssh",
         "mock: This setting is used for testing and development.",
         " It will not send commands to any node.",
         "ssh: This setting will notify Heketi to ssh to the nodes.",
         " It will need the values in sshexec to be configured.",
         "kubernetes: Communicate with GlusterFS containers over",
         " Kubernetes exec api."
      ],
      "executor":"ssh",
      "_sshexec_comment":"SSH username and private key file information",
      "sshexec":{
         "keyfile":"/etc/heketi/heketi_key",
         "user":"root",
         "port":"22",
         "fstab":"/etc/fstab"
      },
      "_kubeexec_comment":"Kubernetes configuration",
      "kubeexec":{
         "host":"https://kubernetes.host:8443",
         "cert":"/path/to/crt.file",
         "insecure":false,
         "user":"kubernetes username",
         "password":"password for kubernetes user",
         "namespace":"OpenShift project or Kubernetes namespace",
         "fstab":"Optional: Specify fstab file on node. Default is /etc/fstab"
      },
      "_db_comment":"Database file name",
      "db":"/var/lib/heketi/heketi.db",
      "_loglevel_comment":[
         "Set log level. Choices are:",
         " none, critical, error, warning, info, debug",
         "Default is warning"
      ],
      "loglevel":"debug"
   }
}
Finished. heketi will listen on port 8080; let’s make sure the firewall allows that:
# firewall-cmd --add-port=8080/tcp
# firewall-cmd --runtime-to-permanent
Now restart heketi:
# systemctl enable heketi
# systemctl restart heketi
Test if it’s running:
# curl http://crs-node1.lab:8080/hello
Hello from Heketi
Good. Time to put heketi to work. We will use it to configure our GlusterFS storage pool. The software is already running on all our VMs but it’s unconfigured. To change that to a functional storage system we will describe our desired GlusterFS storage pool in a topology file, like below:
# vi topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "crs-node1.lab"
              ],
              "storage": [
                "172.16.128.7"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "crs-node2.lab"
              ],
              "storage": [
                "172.16.128.8"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "crs-node3.lab"
              ],
              "storage": [
                "172.16.128.9"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}
Despite the formatting the file is relatively simple. It basically tells heketi to create a 3-node cluster, with each node being known by a FQDN, an IP address and at least one spare block device which will be used as a GlusterFS brick. Now feed this file to heketi:
# export HEKETI_CLI_SERVER=http://crs-node1.lab:8080
# heketi-cli topology load --json=topology.json
Creating cluster ... ID: 78cdb57aa362f5284bc95b2549bc7e7d
 Creating node crs-node1.lab ... ID: ffd7671c0083d88aeda9fd1cb40b339b
 Adding device /dev/sdb ... OK
 Creating node crs-node2.lab ... ID: 8220975c0a4479792e684584153050a9
 Adding device /dev/sdb ... OK
 Creating node crs-node3.lab ... ID: b94f14c4dbd8850f6ac589ac3b39cc8e
 Adding device /dev/sdb ... OK
Now heketi has configured a 3-node GlusterFS storage pool. Easy! You can see that the 3 VMs have successfully formed what’s called a Trusted Storage Pool in GlusterFS:
[root@crs-node1 ~]# gluster peer status
Number of Peers: 2

Hostname: crs-node2.lab
Uuid: 93b34946-9571-46a8-983c-c9f128557c0e
State: Peer in Cluster (Connected)
Other names:
crs-node2.lab

Hostname: 172.16.128.9
Uuid: e3c1f9b0-be97-42e5-beda-f70fc05f47ea
State: Peer in Cluster (Connected)
Now back to OpenShift!

Integrating Gluster with OpenShift

For integration in OpenShift two things are needed: a dynamic Kubernetes Storage Provisioner and a StorageClass. The provisioner ships out of the box with OpenShift. It does the actual heavy lifting of attaching storage to containers. The StorageClass is an entity that users in OpenShift can make PersistentVolumeClaims against, which will in turn trigger a provisioner to implement the actual provisioning and represent the result as a Kubernetes PersistentVolume (PV). Like everything else in OpenShift the StorageClass is simply defined as a YAML file:
# cat crs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: container-ready-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://crs-node1.lab:8080"
  restauthenabled: "false"
Our provisioner is kubernetes.io/glusterfs and we make it point to our heketi instance. We name the class “container-ready-storage” and at the same time make it the default StorageClass for all PersistentVolumeClaims that do not explicitly specify one.
Create the StorageClass for your GlusterFS pool:
# oc create -f crs-storageclass.yaml
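To double-check that the class exists and is flagged as the default, list the StorageClasses (exact output will vary):
# oc get storageclass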

Using Gluster with OpenShift

Let’s look at how we would use GlusterFS in OpenShift. First create a playground project on the OpenShift VM:
# oc new-project crs-storage --display-name="Container-Ready Storage"
To request storage in Kubernetes/OpenShift a PersistentVolumeClaim (PVC) is issued. It’s a simple object describing at the minimum how much capacity we need and in which access mode it should be supplied (non-shared, shared, read-only). It’s usually part of an application template but let’s just create a standalone PVC:
# cat crs-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-crs-storage
  namespace: crs-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Issue the claim:
# oc create -f crs-claim.yaml
Watch the PVC being processed and fulfilled with a dynamically created volume in OpenShift:
# oc get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
my-crs-storage   Bound     pvc-41ad5adb-107c-11e7-afae-000c2949cce7   1Gi        RWO           58s
Great! You now have storage capacity available for use in OpenShift without any direct interaction with the storage system. Let’s look at the volume that got created:
# oc get pv/pvc-41ad5adb-107c-11e7-afae-000c2949cce7
Name:       pvc-41ad5adb-107c-11e7-afae-000c2949cce7
Labels:     
StorageClass:   container-ready-storage
Status:     Bound
Claim:      crs-storage/my-crs-storage
Reclaim Policy: Delete
Access Modes:   RWO
Capacity:   1Gi
Message:
Source:
    Type:       Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  gluster-dynamic-my-crs-storage
    Path:       vol_85e444ee3bc154de084976a9aef16025
    ReadOnly:       false
The volume has been created according to the design specifications in the PVC. In the PVC we did not explicitly specify which StorageClass we wanted to use because the GlusterFS StorageClass using heketi was defined as the system-wide default.

What happened in the background is that when the PVC reached the system, our default StorageClass reached out to the GlusterFS provisioner with the volume specs from the PVC. The provisioner in turn communicates with our heketi instance, which facilitates the creation of the GlusterFS volume, which we can trace in its log messages:
[root@crs-node1 ~]# journalctl -l -u heketi.service
...
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [heketi] DEBUG 2017/03/24 11:25:52 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:298: Volume to be created on cluster e
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [heketi] INFO 2017/03/24 11:25:52 Creating brick 9e791b1daa12af783c9195941fe63103
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [heketi] INFO 2017/03/24 11:25:52 Creating brick 3e06af2f855bef521a95ada91680d14b
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [heketi] INFO 2017/03/24 11:25:52 Creating brick e4daa240f1359071e3f7ea22618cfbab
...
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [sshexec] INFO 2017/03/24 11:25:52 Creating volume vol_85e444ee3bc154de084976a9aef16025 replica 3
...
Mar 24 11:25:53 crs-node1.lab heketi[2598]: Result: volume create: vol_85e444ee3bc154de084976a9aef16025: success: please start the volume to access data
...
Mar 24 11:25:55 crs-node1.lab heketi[2598]: Result: volume start: vol_85e444ee3bc154de084976a9aef16025: success
...
Mar 24 11:25:55 crs-node1.lab heketi[2598]: [asynchttp] INFO 2017/03/24 11:25:55 Completed job c3d6c4f9fc74796f4a5262647dc790fe in 3.176522702s
...
Success! In just about 3 seconds the GlusterFS pool was configured and provisioned a volume. The default as of today is replica 3, which means the data will be replicated across 3 bricks (GlusterFS speak for backend storage) on 3 distinct nodes. The process is orchestrated via heketi on behalf of OpenShift. You can see this information on the volume from the GlusterFS perspective:
[root@crs-node1 ~]# gluster volume list
vol_85e444ee3bc154de084976a9aef16025
[root@crs-node1 ~]# gluster volume info vol_85e444ee3bc154de084976a9aef16025

Volume Name: vol_85e444ee3bc154de084976a9aef16025
Type: Replicate
Volume ID: a32168c8-858e-472a-b145-08c20192082b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.16.128.8:/var/lib/heketi/mounts/vg_147b43f6f6903be8b23209903b7172ae/brick_9e791b1daa12af783c9195941fe63103/brick
Brick2: 172.16.128.9:/var/lib/heketi/mounts/vg_72c0f520b0c57d807be21e9c90312f85/brick_3e06af2f855bef521a95ada91680d14b/brick
Brick3: 172.16.128.7:/var/lib/heketi/mounts/vg_67314f879686de975f9b8936ae43c5c5/brick_e4daa240f1359071e3f7ea22618cfbab/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Notice how the volume name in GlusterFS corresponds to the “path” of the Kubernetes Persistent Volume in OpenShift. Alternatively you can also use the OpenShift UI to provision storage, which allows you to conveniently select among all known StorageClasses in the system.

Let’s make this a little more interesting and run a workload on OpenShift. On our OpenShift VM, still being in the crs-storage project:
# oc get templates -n openshift
You should see a nice list of application and database templates for easy consumption in OpenShift to get your app development project kickstarted. We will use MySQL to demonstrate how to host a stateful application on OpenShift with persistent and elastic storage. The mysql-persistent template includes a PVC of 1G for the MySQL database directory. For demonstration purposes all default values are fine.
# oc process mysql-persistent -n openshift | oc create -f -
Wait for the deployment to finish. You can observe the progress in the UI or via
# oc get pods
NAME            READY     STATUS    RESTARTS   AGE
mysql-1-h4afb   1/1       Running   0          2m
Nice. This template created a service, secrets, a PVC and a pod. Let’s use it (your pod name will differ):
# oc rsh mysql-1-h4afb
You have successfully attached to the MySQL pod. Let’s connect to the database:
sh-4.2$ mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -h $HOSTNAME $MYSQL_DATABASE
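As a side note, the connection parameters used above are injected into the pod as environment variables; you could list them from the pod shell before connecting, for example with:
sh-4.2$ env | grep MYSQL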
Conveniently, all vital configuration like MySQL credentials, database name, etc. is part of environment variables in the pod template and hence available in the pod as shell environment variables too (as the quick check above illustrates). Let’s create some data:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| sampledb           |
+--------------------+
2 rows in set (0.02 sec)

mysql> \u sampledb
Database changed
mysql> CREATE TABLE IF NOT EXISTS equipment (
    ->     equip_id int(5) NOT NULL AUTO_INCREMENT,
    ->     type varchar(50) DEFAULT NULL,
    ->     install_date DATE DEFAULT NULL,
    ->     color varchar(20) DEFAULT NULL,
    ->     working bool DEFAULT NULL,
    ->     location varchar(250) DEFAULT NULL,
    ->     PRIMARY KEY(equip_id)
    ->     );
Query OK, 0 rows affected (0.13 sec)

mysql> INSERT INTO equipment (type, install_date, color, working, location)
    -> VALUES
    -> ("Slide", Now(), "blue", 1, "Southwest Corner");
Query OK, 1 row affected, 1 warning (0.01 sec)

mysql> SELECT * FROM equipment;
+----------+-------+--------------+-------+---------+------------------+
| equip_id | type  | install_date | color | working | location         |
+----------+-------+--------------+-------+---------+------------------+
|        1 | Slide | 2017-03-24   | blue  |       1 | Southwest Corner |
+----------+-------+--------------+-------+---------+------------------+
1 row in set (0.00 sec)
This means the database is functional. Great!

Do you want to see where the data is stored? Easy! Look at the mysql volume that got created as part of the template:
# oc get pvc/mysql
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
mysql     Bound     pvc-a678b583-1082-11e7-afae-000c2949cce7   1Gi        RWO           11m
# oc describe pv/pvc-a678b583-1082-11e7-afae-000c2949cce7
Name:       pvc-a678b583-1082-11e7-afae-000c2949cce7
Labels:     
StorageClass:   container-ready-storage
Status:     Bound
Claim:      crs-storage/mysql
Reclaim Policy: Delete
Access Modes:   RWO
Capacity:   1Gi
Message:
Source:
    Type:       Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  gluster-dynamic-mysql
    Path:       vol_6299fc74eee513119dafd43f8a438db1
    ReadOnly:       false
Note the path to the GlusterFS volume name vol_6299fc74eee513119dafd43f8a438db1. Return to one of your GlusterFS VMs and issue:
# gluster volume info vol_6299fc74eee513119dafd43f8a438db1

Volume Name: vol_6299fc74eee513119dafd43f8a438db1
Type: Replicate
Volume ID: 4115918f-28f7-4d4a-b3f5-4b9afe5b391f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.16.128.7:/var/lib/heketi/mounts/vg_67314f879686de975f9b8936ae43c5c5/brick_f264a47aa32be5d595f83477572becf8/brick
Brick2: 172.16.128.8:/var/lib/heketi/mounts/vg_147b43f6f6903be8b23209903b7172ae/brick_f5731fe7175cbe6e6567e013c2591343/brick
Brick3: 172.16.128.9:/var/lib/heketi/mounts/vg_72c0f520b0c57d807be21e9c90312f85/brick_ac6add804a6a467cd81cd1404841bbf1/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
You can see how the data is replicated across 3 GlusterFS bricks. Let’s pick one of them (ideally the host you are logged on to) and look at the directory contents:
# ll /var/lib/heketi/mounts/vg_67314f879686de975f9b8936ae43c5c5/brick_f264a47aa32be5d595f83477572becf8/brick
total 180300
-rw-r-----. 2 1000070000 2001       56 Mar 24 12:11 auto.cnf
-rw-------. 2 1000070000 2001     1676 Mar 24 12:11 ca-key.pem
-rw-r--r--. 2 1000070000 2001     1075 Mar 24 12:11 ca.pem
-rw-r--r--. 2 1000070000 2001     1079 Mar 24 12:12 client-cert.pem
-rw-------. 2 1000070000 2001     1680 Mar 24 12:12 client-key.pem
-rw-r-----. 2 1000070000 2001      352 Mar 24 12:12 ib_buffer_pool
-rw-r-----. 2 1000070000 2001 12582912 Mar 24 12:20 ibdata1
-rw-r-----. 2 1000070000 2001 79691776 Mar 24 12:20 ib_logfile0
-rw-r-----. 2 1000070000 2001 79691776 Mar 24 12:11 ib_logfile1
-rw-r-----. 2 1000070000 2001 12582912 Mar 24 12:12 ibtmp1
drwxr-s---. 2 1000070000 2001     8192 Mar 24 12:12 mysql
-rw-r-----. 2 1000070000 2001        2 Mar 24 12:12 mysql-1-h4afb.pid
drwxr-s---. 2 1000070000 2001     8192 Mar 24 12:12 performance_schema
-rw-------. 2 1000070000 2001     1676 Mar 24 12:12 private_key.pem
-rw-r--r--. 2 1000070000 2001      452 Mar 24 12:12 public_key.pem
drwxr-s---. 2 1000070000 2001       62 Mar 24 12:20 sampledb
-rw-r--r--. 2 1000070000 2001     1079 Mar 24 12:11 server-cert.pem
-rw-------. 2 1000070000 2001     1676 Mar 24 12:11 server-key.pem
drwxr-s---. 2 1000070000 2001     8192 Mar 24 12:12 sys
You can see the MySQL database directory here. This is how it’s stored in the GlusterFS backend and presented to the MySQL container as a bind-mount. If you check your mount table on the OpenShift VM you will see the GlusterFS mount.
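For example, on the OpenShift VM something along these lines should reveal the FUSE mount backing the pod (the volume name will differ in your environment):
# mount | grep glusterfs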

Summary

What we have done here is create a simple but functional GlusterFS storage pool outside of OpenShift. This pool can grow and shrink independently of the applications. The entire lifecycle of this pool is managed by a simple front-end known as heketi, which only needs manual intervention when the deployment grows. For daily provisioning operations its API is used via OpenShift’s dynamic provisioner, eliminating the need for developers to interact with infrastructure teams directly.
This is how we bring storage into the DevOps world – painless, and available directly via developer tooling of the OpenShift PaaS system.
GlusterFS and OpenShift run across all footprints: bare-metal, virtual, private and public cloud (Azure, Google Cloud, AWS, …), ensuring application portability and avoiding cloud provider lock-in.

Happy Glustering your Containers!

(c) 2017 Keith Tenzer

Deploying CloudForms in the Azure Cloud



Overview

In this article we will deploy the CloudForms appliance in the Azure cloud. CloudForms is a cloud management platform based on the open source project ManageIQ. Red Hat bought ManageIQ a few years back and open sourced the software. Originally it was designed to manage VMware but over the years it has expanded to many additional traditional as well as cloud platforms. You can use this article as a reference for both CloudForms and ManageIQ.

CloudForms can connect to many cloud providers such as RHEV (Red Hat Enterprise Virtualization), VMware, Hyper-V, OpenStack, Amazon Web Services (AWS), Google Compute Engine (GCE) and Azure. Large organizations don’t have one cloud but many, and in addition typically have on-premise, off-premise as well as public platforms. All of these various platforms create a lot of complexity if not managed right. CloudForms can create a bridge between traditional (mode 1) and cloud native (mode 2) workloads, offering applications a path to switch between these modes. In addition, CloudForms allows an IT organization to act as a cloud broker between the various public platforms. Finally, CloudForms can be used to automatically deploy and manage applications across the various cloud platforms. Businesses get choice in platform, flexibility and speed while IT can manage it all centrally, applying common policies or rule-sets regardless of where workloads are running.

Prepare

Red Hat provides CloudForms as an appliance. The appliance comes in various formats, depending on the platform. For Microsoft Hyper-V and Azure, Red Hat provides a Virtual Hard Disk (VHD). The vhd is a dynamic disk. Azure unfortunately does not support dynamic disks, only fixed disks. In order to import the CloudForms appliance into Azure we need to convert the appliance vhd to a fixed disk. The fixed vhd will be around 40GB in size. To avoid having to upload the full 40GB instead of just the actual data, which is closer to 2GB, we will use several tools. You can of course use PowerShell with the Azure cmdlets. If you are a Linux guy like me though, that isn’t an option. Thankfully Microsoft has provided a tool written in Go that works great for uploading disks to Azure. In addition Microsoft provides a CLI, similar in functionality to PowerShell, written in Python.
Convert VHD from dynamic to fixed
The first question you might have is why? Well, Red Hat doesn’t want you to have to download a 40GB image, so they provide a dynamic disk. In the next steps we will take that image, convert it to a fixed disk and upload it to Azure, ignoring the zero’ed blocks.
Convert the image to raw using qemu tools
# qemu-img convert -f vpc -O raw cfme-azure-5.7.0.17-1.x86_64.vhd cfme-azure-5.7.0.17-1.raw
Calculate fixed image size
$ rawdisk="cfme-azure-5.7.0.17-1.raw"
$ vhddisk="cfme-azure-5.7.0.17-1.vhd"
$ MB=$((1024*1024))
$ size=$(qemu-img info -f raw --output json "$rawdisk" | gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
$ rounded_size=$((($size/$MB + 1)*$MB))
Resize image
$ sudo qemu-img resize -f raw "$rawdisk" $rounded_size
$ sudo qemu-img convert -f raw -o subformat=fixed,force_size -O vpc "$rawdisk" "$vhddisk"
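Before uploading it is worth a quick sanity check; qemu-img can report the format and size of the converted image (output omitted here):
$ qemu-img info cfme-azure-5.7.0.17-1.vhd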
Download Azure VHD Tools
As mentioned, the Azure VHD tools are written in Go, so you need to install Go first. I installed version 1.7.4.
$ gunzip go1.7.4.linux-amd64.tar.gz 
$ tar xvf go1.7.4.linux-amd64.tar 
$ cd go
Export Environment Parameters
 $ mkdir $HOME/work
 $ export GOPATH=$HOME/work
 $ export PATH=$PATH:$GOPATH/bin
Install VHD Tools
 $ go get github.com/Microsoft/azure-vhd-utils
Upload the CloudForms Fixed Disk to Azure
$ ./azure-vhd-utils upload --localvhdpath /home/ktenzer/cfme-azure-5.7.0.17-1.vhd --stgaccountname <storage account> --stgaccountkey <storage key> --containername templates --blobname cfme-azure-5.7.0.17-1.vhd --parallelism 8
Once the upload completes you can deploy the CloudForms appliance in Azure. In order to do this we will use the Azure CLI, which is Python based.
Install Python and dependencies
$ sudo yum install python
$ sudo yum install python-pip
$ sudo yum install python-devel
$ sudo yum install openssl-devel
$ sudo yum install npm
Install Azure CLI
$ sudo npm install azure-cli -g
$ sudo pip install --upgrade pip
$ sudo pip install azure==2.0.0rc5
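Assuming the classic Node.js-based Azure CLI installed correctly, a simple version check should confirm it is available on the PATH:
$ azure --version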
Deploy CloudForms Appliance
Using the Azure CLI, deploy the CloudForms appliance:
$ azure vm image create cfme-azure-5.7.0.17-1 --blob-url https://premiumsadpdhlose2disk.blob.core.windows.net/templates/cfme-azure-5.7.0.17-1.vhd --os Linux /home/ktenzer/cfme-azure-5.7.0.17-1.vhd
Note: you can also use the Azure portal UI to create the VM once the image is uploaded.

Configure CloudForms in Azure

Once the CloudForms appliance is deployed you can access it using username/password or an SSH key, depending on what you chose when creating the VM in Azure.
$ ssh ktenzer@<CloudForms Appliance IP>
Run the appliance console
$ sudo appliance_console

Welcome to the CFME Virtual Appliance.

To modify the configuration, use a web browser to access the management page.

Hostname: CFME
IP Address: 10.0.0.9
Netmask: 255.255.255.0
Gateway: 10.0.0.1
Primary DNS: 168.63.129.16
Secondary DNS: 
Search Order: v1tt5gv0hqjudfkkftdccsm4cc.ax.internal.cloudapp.net
MAC Address: 00:0d:3a:28:0c:27
Timezone: Europe/Berlin
Local Database Server: running (primary)
CFME Server: running
CFME Database: localhost
Database/Region: vmdb_production / 1
External Auth: not configured
CFME Version: 5.7.0.17


Press any key to continue.
Advanced Setting

1) Set DHCP Network Configuration
2) Set Static Network Configuration
3) Test Network Configuration
4) Set Hostname
5) Set Timezone
6) Set Date and Time
7) Restore Database From Backup
8) Configure Database
9) Configure Database Replication
10) Configure Database Maintenance
11) Logfile Configuration
12) Configure Application Database Failover Monitor
13) Extend Temporary Storage
14) Configure External Authentication (httpd)
15) Update External Authentication Options
16) Generate Custom Encryption Key
17) Harden Appliance Using SCAP Configuration
18) Stop EVM Server Processes
19) Start EVM Server Processes
20) Restart Appliance
21) Shut Down Appliance
22) Summary Information
23) Quit

Choose the advanced setting:
Verify the network configuration, hostname and timezone.
Configure the database
Configure Database

Database Operation

1) Create Internal Database
2) Create Region in External Database
3) Join Region in External Database
4) Reset Configured Database

Choose the database operation:
Create an internal database, choosing the default options. Once the database has been created and instantiated, start the EVM processes from the main menu. Once that completes you can access the CloudForms appliance UI:
https://<CloudForms Appliance IP>
username: admin
password: smartvm

Summary

In this article we explored how to deploy the CloudForms appliance in the Azure cloud. CloudForms can provide insights into many cloud providers and provide a single pane for administering an organization’s various cloud platforms. This enables an IT organization to take the role of cloud broker, providing the organization flexibility between various platforms without giving up control and oversight. I hope you found this article of use. If you have additional feedback or comments please let me know.

Happy Clouding in Azure!

(c) 2017 Keith Tenzer

The Disruptive Seagull

There was plenty of discussion during a recent Red Hat sales event around both innovation and operation. What happens after the innovation stops? (And yes, innovation continues, but the thing you’ve already innovated needs to be operated.) To get the benefit from innovation you need to operate it, to get the value of the investment back. We’ve seen plenty of ‘seagull’ consulting in the past, where someone flies in, rubbishes lots of existing systems, proposes new ideas and then disappears before they are used and where the problems start. This isn’t the way to innovate.

DevOps and innovation is by its very nature disruptive, and whilst it’s needed in many cases to cut through organisational bureaucracy, there is a danger that innovation and its constant churn may end up being counter-productive, in that some form of steady state is needed to operate the environment that has been built. Too much disruption, no stability, and you are in danger of being seen as a seagull. This has always been a criticism of consultants who come in and build a system or install a product and then leave on Day 1 of operation. Day 2 and beyond can be left to the customer, either at their insistence (cost and time are reasons) or because it’s not seen as the consultant’s job. There is also the danger of only looking at new, greenfield applications rather than the far harder existing estate when thinking of making innovations. I’m not suggesting you need to tackle the really hard stuff head-on, but one eye should be kept on the overall estate even if you are just working with a small part of it.

Red Hat has always seen itself as a disruptive player in the market and to some extent it still is, though it has found the balance between being the cheeky, noisy player on the block and understanding the enterprise market. Disruptive players in IT, whether bloggers, sales people or other individuals, sometimes use ‘shock tactics’ to scare people into making significant changes in thinking (and then organisation, process and technology), and to some extent the current dash for microservices might be partly that. There is no need to break too much and go too far with innovation and disruption; there is a balance. Rather than blowing up the whole organisation, just take the doors off.

Any organisation has its own level of tolerance for change, or particularly for the velocity of change. Most understand the need to change and that too much bureaucracy is now hindering development and operation. However, sensitivities and culture mean that changes may need to be gradual and effective, showing improvement. In particular, talking about change and throwing ideas into the mix needs to be shown to be beneficial when the system is operated and in use. Change therefore needs to take into account medium to long term operation and deal with the difficult areas around people, culture and organisation, not just technology and methodology.

Ready to Innovate provides an indicator of where change is needed, either step-by-step or holistically. Whilst it’s not called Ready to Operate, partly because this would exclude developers by naming convention, it is a measure of how people would use the system, both to develop and to deliver IT solutions to the business. RTI can be used to measure the entire organisation or just a part of it. However, anyone working with an organisation should be mindful of day two (or the day after they leave), when true operation starts.
If the new normal is constant innovation, then the process of that innovation needs to be defined and able to be sustained. How not to be a seagull:
  • think about operation, that is by developers and operations teams, for the medium to long term
  • be realistic, make proposals and suggestions that can be achieved by the organisation
  • have empathy. You need to think like the rest of the organisation and be in tune with what’s acceptable and what’s not in terms of change

Using Ready to Innovate (RTI)

Ready to Innovate is a tool that can be used to assess an organisation’s readiness for digital transformation, DevOps and Platform-as-a-Service (PaaS) in around 30 minutes and then identify the next steps. Originally developed by Red Hat Consulting in Europe as a tool to help account teams understand their customers and challenges, it can be used internally in the organisation to do the same thing. The source code can be found on GitHub at https://github.com/boogiespook/spider and it’s updated and developed on a regular basis.

The aim of RTI is to provide a simple assessment of the current state and challenges that an organisation will face beyond the technology. There are 5 ‘alphas’ [1] that can be measured in any organisation with reference to Digital Transformation:
  • automation : tooling, mechanisms for automation of procedures and processes for IT operation and development
  • methodology : frameworks and approaches (ITIL, Agile etc) being used by IT organisations
  • architecture : high level understanding and future direction of organisation wide architecture
  • strategy : digital transformation strategy and the role of IT in the wider organisation
  • resources : people and organisation within the IT departments, their motivation, skills and aptitude
Each of the ‘alphas’ is measured for both the Development and Operations sides of the organisation (where possible) using a set of criteria rated between 0 and 4. The more mature (and therefore ready to innovate), the higher the rating. Completing the assessment using the online tool, or via a Google sheet, provides:
  • a visual representation of the maturity
  • a high level review of the key points and challenges facing the organisation
  • some recommended next steps
The tool has been developed by Red Hat Consulting’s Emerging Technology Group in Europe as a way for teams working with customers to easily and quickly get a view on the key issues facing them as part of a Digital Transformation process. It’s simplistic, but its visualisation is useful and it’s a way of communicating what the future looks like. The tool itself is only part of any assessment, not the complete solution. The information needs to be understood and acted upon, which needs the experience and insight of the people using the tool.

[1] Alphas is a term used by SEMAT, http://www.semat.org, to describe the things you work with in any project. I’ve used the same term here.

Organisational change and cognitive dissonance

When we discuss the barriers to cultural change within an organisation, we tend to focus on the organisational processes rather than the beliefs, opinions and values of the individuals within the organisation. We all have internalised values and beliefs which are based on many factors including gender, sex, level of education, socio-economic group, age, religion and many others. If we are to provide all-embracing change within an organisation, we need to be aware of these additional factors, which are predominantly formed outside of the workspace but can cause cognitive dissonance for the individuals involved.

Cognitive what? Ok, a bit of psychology 101. If our internal values are compromised or challenged by something, we can feel discomfort or stress as we try to deal with the contradictions or inconsistencies. We need harmony between our internal beliefs and the tasks we carry out in our working life, because our natural reaction is to try and eliminate anything which upsets our psychological status quo and achieve internal consonance. The easiest solution is to just dismiss anything external which causes this dissonance, hence the reluctance to change without understanding the benefits behind the proposed change.

A very simplified example of this in the technical industry could be an engineer who is passionate about open source software but has been forced to use proprietary software as it has been deemed to be the new standard within the organisation. How could this person resolve the conflict? One answer would be to engage more in open source development during their own time, thus re-aligning the balance. This is similar to somebody who knows that eating a big plate of fried food for lunch isn’t particularly healthy, but by spending an extra half an hour at the gym that evening they can even the debt! This is classed as justifying the cognition by adding new cognition. The previous example of engaging more in open source out of work is an example of justifying the behaviour by changing the conflicting cognition.

The next big question is how to reduce people’s cognitive dissonance when trying to introduce a new culture to teams within an organisation. This is a good segue back to my mention of open source software. It’s not only the software itself where open source can bring benefits to an organisation wishing to break down internal silos, but also the methodologies and techniques which lurk behind it. Some of these concepts are:
  • Open exchange of ideas. We can learn more from each other when information is open.
  • Collaborative participation. When we are free to collaborate, we create.
  • Rapid prototyping. Rapid prototypes can lead to rapid failures, but that leads to better solutions being found faster.
  • Transparency. Opening data sources for all. Within an organisation, this could be initiated by giving read-only access to all code and processes across all projects (while keeping security in mind). This would then allow for code reuse and better collaboration among team members.
  • Meritocracy. In a meritocracy, the best ideas win and everyone has access to the same information.
  • Communities. Communities are formed around a common purpose. A collaborative community can create beyond the capabilities of any one individual.
This post is aimed at trying to understand why people can be resistant to change, not at providing all the answers, as every change is different. Organisational change works best when the people who will be making the changes are involved from the outset and are invited to participate in the evolution of change in an iterative way (not a “Big Bang” approach). If decisions are imposed on people by senior managers with no discussion or reasoning about why they should adapt, people will feel left out and irrelevant to the change process – this is not a good starting point for organisational change.

Why not start by seeding ideas within the organisation and then looking for ambassadors/champions internally? You may not need to engage with external (and possibly very expensive) business change consultants if you can find the people already within the company who can craft buy-in from colleagues. If you do like this method then don’t forget to give people time and suitable credit for their work!

There is no one size fits all for organisations, but hopefully this will give you some food for thought.

OpenShift Enterprise 3.4: all-in-one Lab Environment



Overview

In this article we will set up an OpenShift Enterprise 3.4 all-in-one configuration. OpenShift has several different roles: masters, nodes, etcd and load balancers. An all-in-one setup means running all services on a single system. Since we are only using a single system, a load balancer or HAProxy won’t be configured. If you would like to read more about OpenShift I can recommend the following:

Prerequisites

Configure a VM with following:
  • RHEL 7.3
  • 2 CPUs
  • 4096 RAM
  • 30GB disk for OS
  • 25GB disk for docker images
# subscription-manager repos --disable="*"
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.4-rpms"
# yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
# yum update -y
# yum install -y atomic-openshift-utils
# yum install atomic-openshift-excluder atomic-openshift-docker-excluder
# atomic-openshift-excluder unexclude
# yum install -y docker
# vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'
# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdb
VG=docker-vg
EOF
# docker-storage-setup
# systemctl enable docker
# systemctl start docker
# ssh-keygen
# ssh-copy-id -i /root/.ssh/id_rsa.pub ose3-master.lab.com
# vi /etc/hosts

192.168.122.60  ose3-master.lab.com     ose3-master
# systemctl reboot
 

Install OpenShift

Here we are enabling the ovs-subnet SDN and setting authentication to use htpasswd. This is the most basic configuration as we are doing an all-in-one setup. For actual deployments you would want multi-master, dedicated nodes and separate nodes for handling etcd. The Ansible inventory (typically /etc/ansible/hosts) should look as follows:
#Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
ose3-master.lab.com

# host group for nodes, includes region info
[nodes]
ose3-master.lab.com openshift_schedulable=True
Run Ansible playbook to install and configure OpenShift.
# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
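Once the playbook finishes, a quick sanity check on the master should show the all-in-one node registered and in Ready state (output will vary):
# oc get nodes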

Configure OpenShift

Create local admin account and enable permissions.
[root@ose3-master ~]#oc login -u system:admin -n default
[root@ose3-master ~]#htpasswd -c /etc/origin/master/htpasswd admin
[root@ose3-master ~]#oadm policy add-cluster-role-to-user cluster-admin admin
[root@ose3-master ~]#oc login -u admin -n default
Configure the OpenShift image registry. Image streams are stored in the registry. When you build an application, your application image will be added as an image stream. This enables S2I (Source to Image) and allows for fast build times.
[root@ose3-master ~]#oadm registry --service-account=registry \
--config=/etc/origin/master/admin.kubeconfig \
--images='registry.access.redhat.com/openshift3/ose-${component}:${version}'
Configure the OpenShift router. The OpenShift router is basically an HAProxy that sends incoming service requests to the node where the pod is running.
[root@ose3-master ~]#oadm router router --replicas=1 \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
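At this point, listing the pods in the default project should eventually show both the registry and the router in Running state:
# oc get pods -n default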

Summary

In this article we have seen how to configure an OpenShift 3.4 all-in-one lab environment. We have also seen how the installation and configuration can be adapted through the Ansible playbook. This environment is intended to be a lab and as such no best practices are given in regards to OpenShift. If you have any feedback please share.

Happy OpenShifting!

(c) 2016 Keith Tenzer

Meeting of the International OpenShift Community in Berlin

On March 28th, the international OpenShift community will meet in Berlin at the OpenShift Commons Gathering and KubeCon at the Berlin Congress Center. This year, many top-class speakers and customers from all over Europe will again be on site.

Speaker Highlights
  • Alexis Richardson (Weave): The Next Chapter for Cloud Natives & Kubernetes
  • Aparna Sinha (Google): Kubernetes 1.6 and Beyond
  • Brandon Philips (CoreOS): Upstream This! Panel
  • Chris Wright (Red Hat): Challenges of Digital Transformation
  • Clayton Coleman (Red Hat): OpenShift 3.3; Features, Functions, Future
  • Dr. André Baumgart (easiER AG): Healthcare Goes Mobile on OpenShift at EasierAG
  • Eric Mountain (Amadeus): OpenShift at Amadeus
  • Robert Forsstrom (Volvo): OpenShift at Volvo
  • Thomas Weber (T-Systems): Big Data on OpenShift at T-Systems
  • Vincent Batts (Red Hat): The State of the Container Ecosystem
REGISTER NOW…

Do ITIL and Agile play nicely together?

There are many discussions within the industry regarding the integration of ITIL (Ops) and Agile (Dev), and it is possible that they can both coexist within an organisation. One of the main obstacles to this integration is the fact that ITIL is a sequential framework whereas agile development involves more iterative processes, where MVPs are produced and updated with a much shorter release cadence (the “release early and release often” concept). I’m not going to delve too deeply into the “tankers vs speedboats” analogy but you could use it to explain the differences between ITIL and Agile. ITIL travels along at a known speed towards a pre-determined location whereas Agile development shoots around at speed in lots of directions… sometimes in the wrong direction, using up valuable fuel, but this is part of the learning curve of people piloting speedboats. Ok, no more strange analogies!

The core ITIL processes such as incident and problem management and process improvement will continue to be used and will provide value as they are well-understood operational models with a good pedigree, but ITIL was written ~10 years ago, which is before agile became prevalent in enterprise organisations. It is very easy to knock ITIL as an outdated framework, but there are ~2 million ITIL practitioners worldwide so we can’t just get rid of it. The ITIL Service Design volume supports iterative and incremental design, and mentions Agile, so they are not mutually exclusive.

So can we take a best-of-breed approach between the two seemingly different approaches? I believe the answer is yes, but it does involve a lot of change within corporate cultures, which can be a slow process in enterprise/global organisations, and we see that people are instinctively reluctant to change. Where does this reluctance to change come from? Some people believe that increased automation (as required for pure Agile development) will lead to a reduction in staff, but in reality the exact opposite is true. Automating tasks removes the mundane and time-consuming work carried out by technical staff and allows them to innovate with new ideas and thought leadership around the automation process – somebody still needs to write the automation code!

As an example, instead of using a qualified Ops engineer to manually patch servers once a week/month/quarter/blue moon, why not let them design the architecture and processes for automated patching? I’m sure they would find it much more interesting (I know I would!) and they can then own and manage that process. They can still follow the ITIL processes for change management etc., but through automated processes. Jira, Git, Jenkins, Gerrit et al. can all be integrated, which could significantly reduce time to live for fast-paced application development.

Thinking of tools, some believe that tools are the way to attain a good DevOps/Agile environment, but this isn’t the case. The tools are only one part of the troika which needs to be addressed – the other two are people and processes. I agree that you do need an agreed set of tools, but these should be adaptable and flexible so people and processes aren’t locked into the tools used within an organisation.

Just some quick musings for my first post on Ready To Innovate. All comments welcome!

[Short Tip] Call Ansible or Ansible Playbooks without an inventory

Ansible is a great tool to automate almost anything in IT. However, one of the core concepts of Ansible is the inventory, where the to-be-managed nodes are listed. In some situations setting up a dedicated inventory is overkill.

For example, there are many situations where admins just want to ssh to a machine or two to figure something out. Ansible modules can often make such SSH calls in a much more efficient way, making them unnecessary – but creating an inventory first is a waste of time for such short tasks. In such cases it is handy to call Ansible or Ansible playbooks without an inventory. In the case of plain Ansible this can be done by addressing all nodes while at the same time limiting them to an actual host list:
$ ansible all -i jenkins.qxyz.de, -m wait_for -a "host=jenkins.qxyz.de port=8080"
jenkins.qxyz.de | SUCCESS => {
    "changed": false, 
    "elapsed": 0, 
    "path": null, 
    "port": 8080, 
    "search_regex": null, 
    "state": "started"
}
The comma is needed since Ansible expects a list of hosts – and a list of one host still needs the comma. For Ansible playbooks the syntax is slightly different:
$ ansible-playbook -i neon.qxyz.de, my_playbook.yml
Here the “all” is missing since the playbook already contains a hosts directive. But the comma still needs to be there to mark a list of hosts.
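The inline host list is not limited to a single entry either; several hosts can be chained, still terminated by a comma, for example:
$ ansible-playbook -i neon.qxyz.de,jenkins.qxyz.de, my_playbook.yml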