Ansible community modules for Oracle DB & ASM

Besides the almost thousand modules shipped with Ansible, there are many more community modules out there developed independently. A remarkable example is a set of modules to manage Oracle DBs.

The Ansible module system is a great way to improve data center automation: automation tasks do not have to be programmed "manually" in shell code, but can simply be executed by calling the appropriate module with the necessary parameters. Besides the fact that an automation user does not have to remember the shell code, the modules are usually also idempotent: a module can be called multiple times and only changes something when it is needed.

This only works when a module for the given task exists. The list of Ansible modules is huge, but it does not cover all tasks out there. For example, quite a few middleware products are not covered by Ansible modules (yet?). But there are also community modules out there, not part of the Ansible package, but nevertheless of high quality and actively developed.

A good example of such 3rd party modules is the set of Oracle DB & ASM modules developed in a community fashion by oravirt aka Mikael Sandström. Oracle DBs are quite common in daily enterprise IT business. And since automation is not about configuring single servers, but about integrating all parts of a business process, Oracle DBs should also be part of the automation. Here this extensive set of Ansible modules comes in handy. According to the README (shortened):
  • oracle_user
    • Creates & drops a user.
    • Grants privileges only
  • oracle_tablespace
    • Manages normal(permanent), temp & undo tablespaces (create, drop, make read only/read write, offline/online)
    • Tablespaces can be created as bigfile, autoextended
  • oracle_grants
    • Manages privileges for a user
    • Grants/revokes privileges
    • Handles roles/sys privileges properly.
    • The grants can be added as a string (dba,’select any dictionary’,’create any table’) or in a list (e.g. for use with with_items)
  • oracle_role
    • Manages roles in the database
  • oracle_parameter
    • Manages init parameters in the database (i.e alter system set parameter…)
    • Also handles underscore parameters. That will require using mode=sysdba, to be able to read the X$ tables needed to verify the existence of the parameter.
  • oracle_services
    • Manages services in an Oracle database (RAC/Single instance)
  • oracle_pdb
    • Manages pluggable databases in an Oracle container database
    • Creates/deletes/opens/closes the pdb
    • saves the state if you want it to. Default is yes
    • Can place the datafiles in a separate location
  • oracle_sql
    • 2 modes: sql or script
    • Executes arbitrary sql or runs a script
  • oracle_asmdg
    • Manages ASM diskgroup state. (absent/present)
    • Takes a list of disks and makes sure those disks are part of the DG. If a disk is removed from the list, it will be removed from the DG.
  • oracle_asmvol
    • Manages ASM volumes. (absent/present)
  • oracle_ldapuser
    • Synchronises users/role grants from LDAP/Active Directory to the database
  • oracle_privs
    • Manages system and object level grants
    • Object level grants support wildcards, so it is now possible to grant access to all tables in a schema and maintain it automatically!
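The README list above translates directly into play syntax. A hedged sketch of a play using oracle_user follows; the parameter names (hostname, service_name, schema, schema_password, ...) are from my reading of the README and may differ in the current version, so check the module documentation before use:

```yaml
- name: ensure an application schema exists
  hosts: oracle-servers
  tasks:
    # Idempotent: re-running the play changes nothing if the user already exists.
    - name: create schema owner (parameter names illustrative)
      oracle_user:
        hostname: localhost
        service_name: orcl
        user: system            # privileged user the module connects as
        password: manager
        schema: myapp
        schema_password: mysecret
        state: present
```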
I have not yet had the chance to test the modules, but I think they are worth a look. The amount of quality code, the existing documentation and the ongoing development show an active and healthy project developing important and certainly relevant modules.

Please note: these modules are neither part of the Ansible community package nor part of any offering from Oracle or anyone else. So use them at your own risk; they probably will eat your data. And kittens!

So, if you are dealing with Oracle DBs these modules might be worth a look. And I hope they will be pushed upstream soon.
Filed under: Ansible, Business, Cloud, Linux, Shell, Technology

Red Hat Ceph Storage 2.0 Lab + Object Storage Configuration Guide




Ceph has become the de facto standard for software-defined storage. Ceph is 100% open source, built on open standards, and as such is offered by many vendors, not just Red Hat. If you are new to Ceph or software-defined storage, I would recommend the following article before proceeding, to understand some high-level concepts:

Ceph – the future of storage

In this article we will configure a Red Hat Ceph 2.0 cluster and set it up for object storage. We will configure the RADOS Gateway (RGW) and the Red Hat Storage Console (RHSC), and show how to configure the S3 and Swift interfaces of the RGW. Using Python we will access both the S3 and Swift interfaces.

If you are interested in configuring Ceph for OpenStack see the following article:

OpenStack – Integrating Ceph as Storage Backend


Ceph has a few different components to be aware of: monitors (mons), storage or osd nodes (osds), Red Hat Storage Console (RHSC), RHSC agents, Calamari, clients and gateways.

Monitors – maintain maps (crush, pg, osd, etc) and cluster state. Monitors use Paxos to establish consensus.

Storage or OSD Node – Provides one or more OSDs. Each OSD represents a disk and has a running daemon process controlled by systemctl. There are two types of disks in Ceph: data and journal. The journal enables Ceph to commit small writes quickly and guarantees atomic compound operations. Journals can be colocated with data on the same disks or kept separate. Splitting journals out to SSDs provides higher performance for certain use cases such as block.

Red Hat Storage Console (optional) – UI and dashboard that can monitor multiple clusters, and not only Ceph but Gluster as well.

RHSC Agents (optional) – Each monitor and osd node runs an agent that reports to the RHSC.

Calamari (optional) – Runs on one of the monitors to gather statistics on the Ceph cluster and provides a REST endpoint. The RHSC talks to Calamari.

Clients – Ceph provides an RBD (RADOS Block Device) client for block storage, CephFS for file storage and a FUSE client as well. The RADOS GW itself can be viewed as a Ceph client. Each client requires authentication if cephx is enabled. Cephx is similar in behaviour to Kerberos.

Gateways (optional) – Ceph is based on RADOS (Reliable Autonomic Distributed Object Store). The RADOS Gateway is a web server that provides S3 and Swift endpoints and sends those requests to Ceph via RADOS. Similarly, there is an iSCSI gateway that provides an iSCSI target to clients and talks to Ceph via RADOS. Ceph itself is of course an object store that supports not only object but file and block clients as well.

Red Hat recommends a minimum of three monitors and ten storage nodes, all of which should be physical machines, not virtual machines. For the gateways and RHSC, VMs can be used. Since the purpose of this article is building a lab environment, we are doing everything on just three VMs. The VMs should be configured as follows with Red Hat Enterprise Linux (7.2 or 7.3):

  • ceph1: 4096 MB RAM, 2 cores, 30 GB root disk, 100 GB data disk
  • ceph2: 4096 MB RAM, 2 cores, 30 GB root disk, 100 GB data disk
  • ceph3: 4096 MB RAM, 2 cores, 30 GB root disk, 100 GB data disk

Note: this entire environment runs on my 12GB thinkpad laptop. If memory is tight you can cut ceph2 down to 2048MB RAM.

The roles will be divided across the nodes as follows:

  • Ceph1: RHSC, Rados Gateway, Monitor and OSD
  • Ceph2: Calamari, Monitor and OSD
  • Ceph3: Monitor and OSD


Install Ceph Cluster

Register subscription and enable repositories.

# subscription-manager register
# subscription-manager list --available
# subscription-manager attach --pool=8a85f981weuweu63628333293829
# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-2-mon-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-2-osd-rpms
# subscription-manager repos --enable=rhel-7-server-rhscon-2-agent-rpms

Note: If you are using CentOS you will need to install Ansible and get the ceph-ansible playbooks from GitHub.

Disable firewall

Since this is a lab environment we can make life a bit easier. If you are interested in enabling the firewall, follow the official documentation here.

# systemctl stop firewalld
# systemctl disable firewalld

Configure NTP

Time synchronization is absolutely critical for Ceph. Make sure it is reliable.

# yum install -y ntp
# systemctl enable ntpd
# systemctl start ntpd

Test to ensure ntp is working properly.

# ntpq -p

Update hosts file

If DNS is working you can skip this step.

# vi /etc/hosts
 ceph1
 ceph2
 ceph3

Create Ansible User

Ceph 2.0 now uses Ansible to deploy, configure and update. A user with sudo permissions is required.

# useradd ansible
# passwd ansible
# cat << EOF > /etc/sudoers.d/ansible
ansible ALL = (root) NOPASSWD:ALL
Defaults:ansible !requiretty
EOF

Enable repositories for RHSC

# subscription-manager repos --enable=rhel-7-server-rhscon-2-installer-rpms
# subscription-manager repos --enable=rhel-7-server-rhscon-2-main-rpms

Install Ceph-Ansible

# yum install -y ceph-ansible

Setup ssh keys for ansible user

# su - ansible
$ ssh-keygen
$ ssh-copy-id ceph1
$ ssh-copy-id ceph2
$ ssh-copy-id ceph3
$ mkdir ~/ceph-ansible-keys

Update Ansible Hosts file

$ sudo vi /etc/ansible/hosts




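The inventory screenshot did not survive, but for this three-node layout a minimal ceph-ansible inventory would look something like the following (the group names are the ones ceph-ansible expects; the rgws entry matches the role table above):

```ini
[mons]
ceph1
ceph2
ceph3

[osds]
ceph1
ceph2
ceph3

[rgws]
ceph1
```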
Update Ansible Group Vars

The Ceph configuration is maintained through group vars. By default, sample files are provided; we need to copy these and then update them. For this deployment we need to update the group vars for all, mons and osds.

You can find these group var files in github.

$ cd /usr/share/ceph-ansible/group_vars

Update general group vars

$ cp all.sample all
$ vi all
fetch_directory: /home/ansible/ceph-ansible-keys
cluster: ceph
ceph_stable_rh_storage: true
ceph_stable_rh_storage_cdn_install: true
generate_fsid: true
cephx: true
monitor_interface: eth0
journal_size: 1024
cluster_network: "{{ public_network }}"
osd_mkfs_type: xfs
osd_mkfs_options_xfs: -f -i size=2048
radosgw_frontend: civetweb
radosgw_civetweb_port: 8080
radosgw_keystone: false

Update monitor group vars

$ cp mons.sample mons

Update osd group vars

$ cp osds.sample osds
$ vi osds
osd_auto_discovery: true
journal_collocation: true

Update rados gateway group vars

We are just going with the defaults here, so no changes.

$ cp rgws.sample rgws

Run Ansible playbook

$ cd /usr/share/ceph-ansible
$ sudo cp site.yml.sample site.yml
$ ansible-playbook site.yml -vvvv

If everything is successful you should see a message similar to the one below. If something fails, simply fix the problem and re-run the playbook until it succeeds.

PLAY RECAP ********************************************************************
 ceph1 : ok=370 changed=17 unreachable=0 failed=0
 ceph2 : ok=286 changed=14 unreachable=0 failed=0
 ceph3 : ok=286 changed=13 unreachable=0 failed=0

Check Ceph Health

You should see HEALTH_OK. If it is not ok then you can run “ceph health detail” to get more information.

$ sudo ceph -s
cluster 1e0c9c34-901d-4b46-8001-0d1f93ca5f4d
health HEALTH_OK
monmap e1: 3 mons at {ceph1=,ceph2=,ceph3=}
election epoch 6, quorum 0,1,2 ceph1,ceph2,ceph3
osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v26: 104 pgs, 6 pools, 1636 bytes data, 171 objects
103 MB used, 296 GB / 296 GB avail
104 active+clean

Configure erasure coded pool for RADOSGW

By default the RADOSGW data pool is configured for replication, not erasure coding. In order to change to erasure coding you need to delete the pool and re-create it. For object storage we usually recommend erasure coding, as it is much more efficient and brings down costs.

# ceph osd pool delete --yes-i-really-really-mean-it

Ceph supports many different erasure coding schemes.

# ceph osd erasure-code-profile ls

The default profile is 2+1. Since we only have three nodes, this is the only profile that could actually work, so we will use it.
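The space savings are easy to verify: a k+m profile stores (k+m)/k bytes of raw data per byte of user data. A quick sketch (plain arithmetic, not a Ceph call):

```python
def ec_overhead(k, m):
    """Raw bytes stored per byte of user data for a k+m erasure profile."""
    return (k + m) / k

# The default 2+1 profile stores 1.5x the user data and survives one
# failed OSD; 3-way replication stores 3x but survives two failures.
print(ec_overhead(2, 1))  # 1.5
```

Note the trade-off: with only m = 1 coding chunk, losing two OSDs at once loses data, which is why larger clusters tend to use profiles like 4+2 or 8+3.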

# ceph osd erasure-code-profile get default

Create new erasure coded pool using 2+1.

# ceph osd pool create 128 128 erasure default

Here we are creating 128 placement groups for the pool. This was calculated using the pg calculation tool:
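The calculator's rule of thumb can be reproduced in a few lines. This sketch assumes the commonly used target of roughly 100 PGs per OSD and rounds up to a power of two:

```python
import math

def pg_count(num_osds, pool_size, pgs_per_osd=100):
    """(OSDs * target PGs per OSD) / pool size, rounded up to the next
    power of two. pool_size is the replica count, or k+m for EC pools."""
    raw = num_osds * pgs_per_osd / pool_size
    return 2 ** math.ceil(math.log2(raw))

# 3 OSDs with a 2+1 EC pool (pool size 3): 100 rounds up to 128
print(pg_count(3, 3))  # 128
```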

Configure rados gateway s3 user

The RADOS gateway was installed and configured on ceph1 by Ansible; however, a user needs to be created for S3 access. This is not part of the Ansible installation by default.

# radosgw-admin user create --uid="s3user" --display-name="S3user"
{
    "user_id": "s3user",
    "display_name": "S3user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "s3user",
            "access_key": "PYVPOGO2ODDQU24NXPXZ",
            "secret_key": "pM1QULv2YgAEbvzFr9zHRwdQwpQiT9uJ8hG6JUZK"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}

Test S3 Access

In order to test s3 access we will use a basic python script that uses the boto library.

# pip install boto

Create the script and update it with the information from above. The script is located in GitHub.

# cd /root
# vi

import sys

import boto
import boto.s3.connection
from boto.s3.key import Key

access_key = 'PYVPOGO2ODDQU24NXPXZ'
secret_key = 'pM1QULv2YgAEbvzFr9zHRwdQwpQiT9uJ8hG6JUZK'
rgw_hostname = 'ceph1'
rgw_port = 8080
local_testfile = '/tmp/testfile'
bucketname = 'mybucket'

conn = boto.connect_s3(
	aws_access_key_id = access_key,
	aws_secret_access_key = secret_key,
	host = rgw_hostname,
	port = rgw_port,
	is_secure = False,  # civetweb on port 8080 speaks plain HTTP
	calling_format = boto.s3.connection.OrdinaryCallingFormat(),
	)

def printProgressBar (iteration, total, prefix = '', suffix = '', decimals = 1, length = 100, fill = '#'):
    percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total)))
    filledLength = int(length * iteration // total)
    bar = fill * filledLength + '-' * (length - filledLength)
    print('\r%s |%s| %s%% %s' % (prefix, bar, percent, suffix))
    if iteration == total:
        print('')

def percent_cb(complete, total):
    printProgressBar(complete, total)

bucket = conn.create_bucket('mybucket')
for bucket in conn.get_all_buckets():
	print "{name}\t{created}".format( name =, created = bucket.creation_date,)

bucket = conn.get_bucket(bucketname) 

k = Key(bucket)
k.key = 'my test file'
k.set_contents_from_filename(local_testfile, cb=percent_cb, num_cb=20)

Change permissions and run script

# chmod 755

Watch the ‘ceph -s’ command.

# watch ceph -s
Every 2.0s: ceph -s Thu Feb 2 19:09:58 2017

cluster 1e0c9c34-901d-4b46-8001-0d1f93ca5f4d
 health HEALTH_OK
 monmap e1: 3 mons at {ceph1=,ceph2=,ceph3=}
 election epoch 36, quorum 0,1,2 ceph1,ceph2,ceph3
 osdmap e102: 3 osds: 3 up, 3 in
 flags sortbitwise
 pgmap v1543: 272 pgs, 12 pools, 2707 MB data, 871 objects
 9102 MB used, 287 GB / 296 GB avail
 272 active+clean
 client io 14706 kB/s wr, 0 op/s rd, 32 op/s wr

Configure rados gateway swift user

In order to enable access to the object store using Swift, you need to create a sub-user (a nested user) for Swift access. This user is created under an already existing user; we will use the s3user created earlier. From the outside, the Swift user is its own user.

# radosgw-admin subuser create --uid=s3user --subuser=s3user:swift --access=full
{
    "user_id": "s3user",
    "display_name": "S3user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "s3user:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "s3user",
            "access_key": "PYVPOGO2ODDQU24NXPXZ",
            "secret_key": "pM1QULv2YgAEbvzFr9zHRwdQwpQiT9uJ8hG6JUZK"
        }
    ],
    "swift_keys": [
        {
            "user": "s3user:swift",
            "secret_key": "vzo0KErmx5I9zaE3Y7bIOGGbJaECpJmNtNikFEYh"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}

Generate keys

This is done by default, but in case you want to regenerate the keys you can do so at any time.

# radosgw-admin key create --subuser=s3user:swift --key-type=swift --gen-secret

Test swift access

We will use Python again, this time via the Swift CLI, which is written in Python.

# pip install --upgrade setuptools
# pip install python-swiftclient

List buckets using swift

# swift -A -U s3user:swift -K 'DvvYI2uzd9phjHNTa4gag6VkWCrX29M17A0mATRg' list

If things worked you should see a bucket called ‘mybucket’.

Configure Red Hat Storage Console (RHSC)

Next we will configure the storage console and import the existing cluster. Ansible does not take care of setting up RHSC.

Install RHSC

# yum install -y rhscon-core rhscon-ceph rhscon-ui

Configure RHSC

# skyring-setup
Would you like to create one now? (yes/no): yes
 Username (leave blank to use 'root'):
 Email address:
 Password (again):
 Superuser created successfully.
 Installing custom SQL ...
 Installing indexes ...
 Installed 0 object(s) from 0 fixture(s)
 ValueError: Type carbon_var_lib_t is invalid, must be a file or device type
 Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/carbon-cache.service.
 Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
 Please enter the FQDN of server []:
 Skyring uses HTTPS to secure the web interface.
 Do you wish to generate and use a self-signed certificate? (Y/N): y

 Now the skyring setup is ready!
 You can start/stop/restart the server by executing the command
 systemctl start/stop/restart skyring
 Skyring log directory: /var/log/skyring
 URLs to access skyring services

Once installation is complete you can access RHSC via web browser.
user: admin password: admin

Configure RHSC Agent

Each ceph monitor and osd node requires an RHSC agent.

Install agent

Ansible likely took care of this already, but it doesn’t hurt to check.

# yum install -y rhscon-agent
# curl | bash

Setup calamari server

The Calamari server runs on one of the monitor nodes. Its purpose is to collect cluster health, events and statistics. In this case we will run the Calamari server on ceph2.

# yum install calamari-server
# calamari-ctl clear --yes-i-am-sure

Gotta love the --yes-i-am-sure flags.

# calamari-ctl initialize --admin-username admin --admin-password admin --admin-email

Accept nodes in RHSC


Click on tasks icon at top right and accept all hosts.


After accepting the hosts you should see them available.


Import cluster in RHSC

Go to clusters and import a cluster, making sure to select the monitor node running the Calamari server, in this case ceph2.

Note: we could also have deployed Ceph using the RHSC (that is what “New Cluster” does), but that is no fun.


Click import to start the import process.


The cluster should now be visible in the RHSC.


RHSC dashboard shows a high-level glimpse of ceph cluster.



In this article we saw how to deploy a Ceph 2.0 cluster from scratch using VMs that can run on your laptop. We enabled object storage through the radosgw and configured both S3 and Swift access. Finally, we set up the Red Hat Storage Console (RHSC) to provide insight into our Ceph cluster. This article gives you a great starting point for your journey into Ceph and the future of storage, which is software-defined as well as object-based. Unlike other storage systems, Ceph is open source, built on open standards and truly unified (object, block, file). Ceph is supported by many vendors and can run on any x86 hardware, even commodity hardware. What else is there left to say really? Those of you who know me know there is some extra meaning in the phrase: KEEP CALM AND RELEASE THE KRAKEN!

Happy Cephing!

(c) 2017 Keith Tenzer



OpenShift 3.3 and later contain functionality to route pod traffic to the external world via a well-defined IP address. This is useful, for example, if your external services are protected by a firewall and you do not want to open the firewall to all cluster nodes.

The way it works is that an egress pod is created with a macvlan interface inside the pod’s network namespace, connected to the default network. Traffic sent to the pod’s IP address is then forwarded to a specific destination IP (EGRESS_DESTINATION) via the macvlan interface:

You would typically front such an egress pod with a service declaration so that you do not have to hardcode the pod’s IP address (on the internal OpenShift SDN) but can simply reference it via the service name and resolve it using DNS.
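Put together, an egress router deployment consists of the pod plus the fronting service. The sketch below follows the shape documented for OpenShift 3.x; the image reference, annotation and all IP values are illustrative placeholders to adapt to your release, not copy-paste material:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: egress-router
  labels:
    name: egress-router
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"   # request the macvlan interface
spec:
  containers:
  - name: egress-router
    image: openshift3/ose-egress-router   # placeholder image reference
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE       # IP on the node network, example value
      value: 192.0.2.99
    - name: EGRESS_GATEWAY
      value: 192.0.2.1
    - name: EGRESS_DESTINATION  # where the traffic is forwarded to
      value: 203.0.113.25
---
apiVersion: v1
kind: Service
metadata:
  name: egress-1
spec:
  ports:
  - name: http
    port: 80
  selector:
    name: egress-router
```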

An additional feature is the ability to define an egress policy per OpenShift project / Kubernetes namespace. Here you can explicitly state which (external) IPs a pod may access and which it may not.



In the OpenShift world, Services operate at OSI Layer 3 / IP, while Routing is an OSI Layer 7 / HTTP/TLS concept. Once you’ve wrapped your head around this backwards choice of naming, things are fairly easy:

An OpenShift Router is a component which listens on a physical host’s HTTP/S ports for incoming connections and proxies them into the OpenShift SDN. This allows clients which are NOT part of the OpenShift SDN (probably most of them) to access Services provided in OpenShift. The hostname of the HTTP request or the TLS Server Name Indication triggers certain forwarding rules which determine which pod IP addresses are targeted. Note that the router does not address the service IP address; it uses the same information as the service to target the pods directly. Thus the selection logic behind a service is the single source of truth defining which pods are accessed.

Just like the kube-proxy and the SkyDNS service, the Router listens continuously for events which signify OpenShift configuration changes, and adapts the forwarding rules accordingly. The router itself is implemented as an OpenShift pod and uses the “host port” concept already introduced in plain Docker networking to expose ports 80/443 on the physical network. Note that this can be built upon to allow custom load-balancing solutions, integrate with existing load balancers, etc.

This adds the additional flows (single node for simplicity again):

  • From external client to OpenShift Pod: external client → network → OpenShift Node eth0 → (IPTables DNAT) → tun0 → ovs br0 → vethXXXX → Pod A eth0 → (userspace router) → Pod A eth0 → vethXXXX → ovs br0 → vethYYYY → Pod B eth0

But this of course also applies to scenarios spanning multiple nodes:



To allow stable endpoints in an environment of ever-changing, starting and stopping pods (and therefore constantly changing IP addresses), Kubernetes introduces (and OpenShift uses) the concept of services. Services are stable IP addresses (taken per default from the subnet) that remain the same as long as the service exists.

Connection requests to a service are forwarded to a pod which matches the service’s selector. A selector is a predicate that describes which pods to target, based on labels (key/value pairs) applied to a pod. While a discussion of labels and selectors is outside the scope of this paper, the mechanism by which the forwarding takes place is not. Here, a component called “kube-proxy” comes into play. Just like the SkyDNS service on the masters, it listens on the cluster configuration database for changes (e.g. pods matching the selector starting or stopping) and adjusts the forwarding rules.

Again there are two methods for how this forwarding works in detail, and it is a cluster-wide configuration which one is used:

  • User-space mode: Here IPTables rules are used to forward packets destined for the service IP address to the kube-proxy, which in turn initiates connections to the actual destination IP and proxies between the two endpoints. The key advantage of user-space mode is that it can detect non-responding pods and retry the connection to other pods.
  • IPTables mode: Here the kube-proxy continuously updates the host’s IPTables rules to forward packets directly to one of the target pods’ IP addresses. The key advantage of this mode is increased throughput.

Both methods default to round-robin distribution when more than one pod is available, but allow for session affinity based on the client’s IP address. Of course, this works the same no matter whether the target pod is on the same node or a different one.
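The selection behaviour can be illustrated with a toy model. This is a simulation of the behaviour just described, not kube-proxy code:

```python
import itertools

class ServiceProxy:
    """Toy endpoint chooser: round-robin by default, sticky per client IP
    when session affinity is enabled."""

    def __init__(self, endpoints, session_affinity=False):
        self._cycle = itertools.cycle(endpoints)
        self._affinity = {} if session_affinity else None

    def pick(self, client_ip):
        if self._affinity is not None and client_ip in self._affinity:
            return self._affinity[client_ip]    # sticky: reuse the earlier choice
        endpoint = next(self._cycle)            # otherwise round-robin
        if self._affinity is not None:
            self._affinity[client_ip] = endpoint
        return endpoint

rr = ServiceProxy(["10.1.0.2", "10.1.0.3"])
print([rr.pick("10.2.0.9") for _ in range(4)])      # alternates between the two pods

sticky = ServiceProxy(["10.1.0.2", "10.1.0.3"], session_affinity=True)
print({sticky.pick("10.2.0.9") for _ in range(4)})  # always the same pod
```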

OpenShift services add the following flows (using a single node example for simplicity):

  • From a Pod to another Pod (on the same node) via Service / Usermode : PodA eth0 → vethXXXX → (ovs) br0 → tun0 → IPTables NAT → kube-proxy → tun0 → (ovs) br0 → vethYYYY → PodB eth0
  • From a Pod to another Pod (on the same node) via Service / IPTables : PodA eth0 → vethXXXX → (ovs) br0 → tun0 → IPTables NAT → tun0 → (ovs) br0 → vethYYYY → PodB eth0

Additional Benefit: Resolving service names via DNS

Also, the OpenShift DNS configuration resolves a service name into its IP address for all callers from the same namespace. This means that from a pod, you can simply access “http://foo” to connect to the “foo” service in the same namespace. This works because Kubernetes launches the pods so that their DNS configuration points to the master nodes with a default search domain of “.<pod_namespace>.cluster.local”. A SkyDNS service on the masters listens for configuration changes in the OpenShift database and updates its resolution tables whenever the OpenShift service configuration changes.
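The effect of the search domain can be sketched as a pure function. The exact search list varies between versions; this just mirrors the “.<pod_namespace>.cluster.local” convention from the text:

```python
def candidate_names(name, pod_namespace):
    """Names the pod's resolver will try, in order, for a short name."""
    if "." in name:          # already qualified: try it as-is
        return [name]
    search_domains = [pod_namespace + ".cluster.local", "cluster.local"]
    return [name + "." + domain for domain in search_domains]

# A pod in namespace "myproject" asking for "foo" first tries:
print(candidate_names("foo", "myproject")[0])  # foo.myproject.cluster.local
```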



So far, this sounds like a lot of effort to achieve little more than a plain Docker host: containers that can talk to each other and to the host network, potentially segregated based on Kubernetes namespace. However, the OpenShift SDN also allows pods on different nodes to communicate with each other.

To this end, it establishes VXLAN tunnels to the various OpenShift nodes. VXLAN tunnels all layer 2 traffic over IP via UDP port 4789. The vxlan0 device is connected to the br0 OVS bridge and can from there reach all pods and containers on the same node. Where the multitenant SDN plugin used OVS flow keys to segregate network traffic on br0, it uses VXLAN virtual network IDs to separate traffic on the wire.
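For the curious, the encapsulation overhead is small and fixed: per RFC 7348 the VXLAN header is 8 bytes, a flags byte with the I bit set, reserved bits, and the 24-bit virtual network ID. A minimal sketch:

```python
import struct

def vxlan_header(vni):
    """8-byte VXLAN header: flags (I bit = 0x08), 24 reserved bits,
    24-bit VNI, 8 trailing reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!BBHI", 0x08, 0, 0, vni << 8)

# The header travels in a UDP datagram to port 4789, immediately
# followed by the original layer 2 frame.
print(vxlan_header(42).hex())  # 0800000000002a00
```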

This capability does not extend to plain Docker containers, i.e. they cannot communicate with either pods or other plain Docker containers on another node. This means plain Docker containers are limited to communicating with other containers and pods running on the same node, as well as any host connected to the physical network(s).

Inter-Node networking therefore adds the following flow:

Between pods on different nodes: PodA eth0 → vethXXXX → (ovs) br0 → vxlan0 (L3 encapsulation) → (tunnel via host network) → vxlan0 (L3 decapsulation) → br0 → vethYYYY → PodB eth0

See also: OpenShift SDN Networking:

[Short Tip] Retrieve your public IP with Ansible

There are multiple situations where you need to know your public IP: be it that you set up your home IT server behind a NAT, or that your legacy enterprise business solution does not work properly without this information because the original developers 20 years ago never expected to be behind a NAT.

Of course, Ansible can help here as well: there is a tiny, neat module called ipify_facts which does nothing else but retrieve your public IP:
$ ansible localhost -m ipify_facts
localhost | SUCCESS => {
    "ansible_facts": {
        "ipify_public_ip": ""
    },
    "changed": false
}
The return value can be registered as a variable and reused in other tasks:
- name: get public IP
  hosts: all

  tasks:
    - name: get public IP
      ipify_facts:
      register: public_ip
    - name: output
      debug: msg="{{ public_ip }}"
The module by default accesses the ipify API to get the IP address, but the API URL can be changed via a parameter.
Filed under: Ansible, Cloud, Debian & Ubuntu, Fedora & RHEL, Linux, Microsoft, Shell, Short Tip, SUSE, Technology

How to set up wordpress on OpenShift in 10 minutes


What this is about?

A lot of customers would like to give the brave new container world (based on Docker technology) a try with real-life workloads. The WordPress content management system (yes, it has become more than a simple blog) seems to be an application that many customers know and use (and that I’ve been asked about numerous times). From a technical point of view the WordPress use case is rather simple, since we only need a PHP runtime and a database such as MySQL. Therefore it is a perfect candidate for piloting container aspects on OpenShift Container Platform.


Install Container Development Kit

I highly recommend installing the freely available Red Hat Container Development Kit (CDK for short). It will give you a ready-to-use installation of OpenShift Container Platform based on a Vagrant image, so you’re up to speed in absolutely no time:

Please follow the installation instructions here:

Setup resources on OpenShift

Spin up your CDK environment and ssh into the system:

vagrant up
vagrant ssh

Create a new project and import the template for an ephemeral MySQL (since this is not included in the CDK v2.3 distribution by default). If you prefer another database, or even one with persistent storage, you can find additional templates here.

oc new-project wordpress
oc create -f

Now we create one pod for our MySQL database and create our WordPress application from its source code. OpenShift will automatically determine that it is based on PHP and will therefore choose the PHP builder image to create a Docker image from the WordPress source code.

oc new-app mysql-ephemeral
oc new-app
oc expose service wordpress

Now let’s login to the OpenShift management console and see what has happened:

We now have a pod that runs our WordPress application (web server, PHP, source code) and one pod running our ready to use ephemeral (= non-persistent) MySQL database.

Install wordpress

First we need to note down the connection settings for our MySQL database: we look up the cluster IP of our mysql service, then the database name, username & password. Have a look at the following screenshots:

Now it is time to set up and configure WordPress. Simply click on the route that has been created for your WordPress pod (in my case the hostname is “”).

Congratulations for installing WordPress on OpenShift!

What’s next

So far we’ve created all the resources manually, in a not yet reusable fashion. Therefore one of the next steps could be to create a template from our resources, import it into the OpenShift namespace and make it available to our users as a service catalog item. Our users could then provision a fully installed WordPress with the click of a button.

Installing and Configuring Red Hat Satellite 6 via shell script





I would like to have a Satellite 6 server (newest version) to showcase common Satellite use cases. So the idea is to have a quick but flexible way to set up a Satellite 6.x server from scratch.


The idea is to get most parts somehow automated (though still not very flexible). In this blog I’m relying on shell scripting for the main Satellite configuration part, as the well-known “hammer” command is a shell command. The script will stop when it runs into an error. If the error is fixed and you rerun the script, it will skip over any task already fulfilled. Some pre-configuration tasks are done through Ansible, though.

The server we set up “owns” its own subnet, on which DHCP, TFTP and domain name resolution are fully under the control of the Satellite server. This means there will be no conflicts, e.g. with network-wide DHCP servers.


Important Words on name resolution:

Satellite 6 needs to have forward and reverse DNS records for its own hostname (bound to the interface of the deployment network) in place before satellite-install is started, in order to get the certificates set up correctly.

You will want to achieve the following:


  1. ensure DNS forward and reverse lookup works for the hostname of your server
  2. ensure the resolved IP corresponds to the interface which points to the network of the clients
  3. ensure no other IP address resolves to the same name
  4. ensure no other name resolves to the same IP address

If you do not take this seriously enough, you might run into trouble when rolling out servers: they might not register with subscription management (internal server error).
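The four points can be verified with standard resolver tools. A minimal sketch, demonstrated here against localhost so it is safe to run anywhere; replace HOST with the satellite's FQDN on the deployment network:

```shell
# Sketch of the name-resolution checks above. HOST is an assumption --
# use the satellite's FQDN on the private network instead of localhost.
HOST=localhost

# forward lookup: the name must resolve (to the deployment interface's IP)
IP=$(getent ahostsv4 "$HOST" | awk '{print $1; exit}')
echo "forward: $HOST -> $IP"

# reverse lookup: the address must map back to the same name
getent hosts "$IP"
```

`dig` against the satellite's DNS works just as well once it is up; `getent` has the advantage of going through the same resolver path (/etc/hosts included) that subscription-manager will use.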

The demo Satellite server will have its own (very private) network interface (for deployment), delivering DNS, DHCP, tftp and so forth for that network.

The Satellite DNS server will resolve its own name and be master over the whole network configuration. satellite-installer will be given all directives to configure DNS zones and DHCP ranges.

This brings us to a chicken-and-egg issue:

The DNS server will only be in place after satellite-installer has run, and only then can you add the records for the satellite itself; but satellite-installer also needs DNS in place beforehand, otherwise the certificates will be wrong.

The approach which should work:

  1. on the satellite host
    1. put its hostname and the private IP in /etc/hosts
    2. set up resolv.conf to work with an official nameserver (otherwise subscription-manager and yum won’t work)
  2. install the satellite packages and install Satellite (with the satellite-installer script)
    1. this should bring up your local DNS server for your (private) domain and your subnet
    2. the local DNS server should forward requests to official nameservers
  3. add the satellite host to your local DNS server
  4. reconfigure your resolver to only ask localhost
  5. continue with the satellite configuration (mostly via hammer)
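Step 1 could look like the following sketch. The hostname, IP addresses and domain below are pure assumptions, and the files are written to the current directory instead of /etc so the sketch is safe to run as-is:

```shell
# Sketch of step 1 -- all names and addresses below are assumptions.
SATHOST=msisat62.example.lan   # FQDN of the satellite on the private network
SATIP=192.168.101.3            # static IP on the deployment interface (eth1)
EXTDNS=8.8.8.8                 # an official nameserver, used until step 4

# 1.1: hostname and private IP go into /etc/hosts
cp /etc/hosts ./hosts.new
echo "$SATIP $SATHOST ${SATHOST%%.*}" >> ./hosts.new

# 1.2: resolv.conf points at the official nameserver for now;
#      after step 3 this gets switched to "nameserver 127.0.0.1"
cat > ./resolv.conf.new <<EOF
search ${SATHOST#*.}
nameserver $EXTDNS
EOF
```

To apply for real, the generated content goes to /etc/hosts and /etc/resolv.conf on the satellite host.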

System to deploy on

The system we deploy on can be virtual or physical. It needs connectivity to the Red Hat CDN and a “local” subnet.

I installed RHEL 7.2 on a virtual machine.

Memory: 16 GB are recommended for Satellite.


I set up a default-sized VM which brings a 100 GB boot volume. As Satellite is very disk-space hungry, I added a second 400 GB volume meant for /var/lib.

Networking interfaces

I have a private VLAN attached to the VM with a subnet which can be used to deploy hosts and where I am able to run dhcp and the like. The satellite nevertheless needs an interface in the rhevm network to be reachable from the outside world.

I configured the server to use dhcp (address only) on eth0 on the rhevm network.

And I configured a static IP on the private VLAN. Important: on eth1 the “default route” option must be deactivated.

Installation/configuration of the OS

I specified to boot from the RHEL 7.2 DVD ISO. In the grub menu I added the following parameter to the default install item:


which pulls in some definitions on how to install, so that things get kind of unattended (except for providing the parameter).

Hint: You need to retype this, as I found no way to cut & paste into the RHEV-provided VM console…

But it does save quite some manual work and prevents mistakes.

For reference:
I set up filesystems as follows:

/ 50 GB
/home no separate volume, no separate filesystem
/var/lib 250 GB on a separate volume group on the separate volume (leaving 150 GB for growth)

Networking interfaces

I set up eth0 to use dhcp and eth1 to have a static IP on my private VLAN.

I disabled IPv6 on all devices and disabled changes to resolv.conf via dhcp.

The config needs to look like this in the end:

eth0: DHCP (address only), attach automatically
eth1: manual, attach automatically
     IP: (note it is .3 now!)
no DNS configured

Looking at the files (and correcting them if necessary):

[root@msisat62 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0


[root@msisat62 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
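Since the listings got lost, here is roughly what the two files should contain. The device names follow this setup; the IP address, prefix and the exact option set are assumptions:

```shell
# ifcfg-eth0 -- rhevm network, dhcp, address only
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
PEERDNS=no             # do not let dhcp overwrite resolv.conf
IPV6INIT=no

# ifcfg-eth1 -- private VLAN, static IP, no default route
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.101.3   # assumption, matching the ".3" note above
PREFIX=24
DEFROUTE=no            # eth1 must not set the default route
IPV6INIT=no
```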

Preparation – done through Ansible

Disclaimer: With this project I used /etc/ansible as the main project path, which I would not do today and which I would not recommend. As the paths are coded in many places, I did not want to change this without good testing. I therefore leave the unpleasant paths the way they work.

[mschreie@mschreie coe]$ ssh-copy-id root@msisat62

I added to Ansible’s inventory file (/etc/ansible/hosts):

msisat62 ansible_user=root

As I grep the output of some commands, it is essential that it is in the expected language. In /etc/ansible/ansible.cfg I assured this:

#module_lang = C
module_lang = en_US.UTF-8

I stored the secrets on how to reach the CDN in a separate encrypted vault file.

[root@mschreie coe]# ansible-vault create secret.yml
cdnuser: myusername
cdnpass: mypassword

The Playbook itself can be downloaded like this:

[mschreie@mschreie ~ ]$ cd /etc/ansible/coe/
[mschreie@mschreie coe]$ wget
[mschreie@mschreie coe]$ wget
[mschreie@mschreie coe]$ cd /etc/ansible/templates/
[mschreie@mschreie templates]$ wget
[mschreie@mschreie templates]$ cd /etc/ansible/coe/

I then ran:

[mschreie@mschreie coe]$ ansible-playbook -vv satellite_install.yml --ask-vault-pass
Using /etc/ansible/ansible.cfg as config file
Vault password:
Loaded callback default of type stdout, v2.0

Installation – shell script part

I used Adrian Bradshaw’s book “Introduction · Getting Started with Satellite 6 Command Line” to set up my satellite server. I put all commands into a shell script. Up to now this script is very unspectacular: no big intelligence or algorithms inside.

I added some mechanisms for logging and for stopping on error, so that the script won’t mess things up any further. These mechanisms work pretty well but are not thoroughly tested. Feedback welcome 😉

I’ve separated the script and its configuration:

First you find the configuration:

[root@msisat62 ~]# wget

and the script itself:

[root@msisat62 ~]# wget

Please find some explanations:

  1. Commands that really change the setup are wrapped with a doit function (as mentioned above).
  2. This function records each correctly executed command in a donefile. It also exits the script when a command returns an error, which gives you the chance to correct the issue before everything is messed up. When rerunning, the script skips all commands found in the donefile, so you can safely rerun it and it continues exactly where it stopped before.
  3. All output should be seen on the screen and in a logfile called $0.log.
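The doit wrapper described above can be sketched roughly like this. This is a minimal reimplementation for illustration, not the original script; the logfile name is simplified here (the original logs to $0.log):

```shell
#!/bin/bash
# Minimal sketch of the "doit" mechanism: log every command, record
# successes in a donefile, skip recorded commands on rerun, and abort
# the script on the first failure.
DONEFILE=./donefile
LOGFILE=./install.log    # the original script logs to $0.log
touch "$DONEFILE"

doit() {
    local cmd="$*"
    # rerun safety: skip anything already recorded in the donefile
    if grep -qxF "$cmd" "$DONEFILE"; then
        echo "skipping (already done): $cmd" | tee -a "$LOGFILE"
        return 0
    fi
    echo "running: $cmd" | tee -a "$LOGFILE"
    if eval "$cmd" >>"$LOGFILE" 2>&1; then
        # record success so a rerun continues right after this command
        echo "$cmd" >>"$DONEFILE"
    else
        echo "ERROR: '$cmd' failed - fix the issue and rerun" | tee -a "$LOGFILE"
        exit 1
    fi
}

# example usage -- in the real script these would be hammer commands
doit "echo configuring organization"
doit "echo configuring location"
```

Running the script a second time prints “skipping (already done)” for both example commands, which is exactly the rerun behaviour described above.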

To run the script simply call:

[root@msisat62 ~]# vi
[root@msisat62 ~]# bash

Explanation on the satellite-installer cmd:

satellite-installer --scenario satellite \
   --foreman-proxy-dhcp true \
   --foreman-proxy-dhcp-interface eth1 \
   --foreman-proxy-dhcp-range "$RANGEFROM $RANGETO" \
   --foreman-proxy-dhcp-nameservers "$DNSSERVER" \
   --foreman-proxy-dns true \
   --foreman-proxy-dns-forwarders "$DNSFORWARDERS" \
   --foreman-proxy-dns-interface $SATINTERFACE \
   --foreman-proxy-dns-zone "$DNSDOMAIN" \
   --foreman-proxy-dns-reverse "$DNSREVERSDOM" \
   --foreman-proxy-tftp true \
   --katello-proxy-url= \
   --katello-proxy-port=3128 \
   --enable-foreman-plugin-openscap

I chose the satellite-installer command line as seen in the script.

It is wise to analyze the logs in /var/log/katello-installer/…log. Unfortunately I did not catch the admin credentials, therefore I ran (also part of the script):

[mschreie@mschreie coe]$ foreman-rake permissions:reset
Reset to user: admin, password: FYdURRwgxAqbYD5N

and put the new credentials into /root/.hammer/cli_config.yml to use hammer without passing any credentials (also part of the script). To log in on the Web UI you need to look up the current password in /root/.hammer/cli_config.yml.

Note: I wanted to set the timezone, but the timezone module from Ansible was not on my notebook… It is now, but I did not update the script yet.

Checking what I did

I need to check that dhcp/dns are somehow what I expected:

[mschreie@mschreie coe]$ cat /etc/named.conf
[mschreie@mschreie coe]$ less /etc/named/options.conf
[mschreie@mschreie coe]$ cat /etc/zones.conf
[mschreie@mschreie coe]$ less /etc/dhcp/dhcpd.conf
[mschreie@mschreie coe]$ dig @localhost
[mschreie@mschreie coe]$ dig @localhost AXFR
[mschreie@mschreie coe]$ dig @localhost
[mschreie@mschreie coe]$ dig @localhost

Manual tweaking

DNS records

I did the DNS changes inside the script already. Nothing to do here anymore.

DNS resolv.conf

Same here: this is corrected through the script.

Enable content in activation keys

As you know, the activation key contains a content view. All repositories of the CV are available through the AK, but some repositories default to “not enabled”. You can then enable them on the server with the “subscription-manager repos” command. I prefer having them enabled by default.

I did not manage to get the right hammer command in place yet.

The direction might be:

[root@msi-sat62 ~]# hammer activation-key list
[root@msi-sat62 ~]# hammer activation-key info --id 1  --organization "$ORG"  << you do not see the repositories here :-(
[root@msi-sat62 ~]# hammer activation-key product-content --id 1 --organization "$ORG"
ID   | NAME                                                   | TYPE | URL | GPG KEY | LABEL                                  | ENABLED?
4831 | Red Hat Satellite Tools 6.2 (for RHEL 7 Server) (RPMs) |      |     |         | rhel-7-server-satellite-tools-6.2-rpms | 1      
2455 | Red Hat Enterprise Linux 7 Server (Kickstart)          |      |     |         | rhel-7-server-kickstart                | default
2472 | Red Hat Enterprise Linux 7 Server - RH Common (RPMs)   |      |     |         | rhel-7-server-rh-common-rpms           | default
2456 | Red Hat Enterprise Linux 7 Server (RPMs)               |      |     |         | rhel-7-server-rpms                     | default

[root@msi-sat62 ~]# hammer activation-key content-override --content-label rhel-7-server-rh-common-rpms --value 1 --id 1 --organization "$ORG"
Updated content override

FixMe: Needs to be automated and added in the script.
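A possible direction for that automation is a simple loop over the repo labels. This is a hypothetical sketch: AK_ID, ORG and the LABELS list are assumptions, and the echo/tee keeps it a dry run that only collects the generated commands in a file for review:

```shell
# Hypothetical sketch: loop "hammer activation-key content-override" over
# the repo labels that should be enabled. AK_ID, ORG and LABELS are
# assumptions; drop the echo (and tee) to execute for real.
AK_ID=1
ORG="MyOrg"
LABELS="rhel-7-server-rpms rhel-7-server-rh-common-rpms rhel-7-server-kickstart"

for label in $LABELS; do
    echo hammer activation-key content-override \
        --content-label "$label" --value 1 \
        --id "$AK_ID" --organization "$ORG"
done | tee override-cmds.sh
```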

First Deployment

I created a VM via RHEVM-WebUI:

  • small
  • nic VLAN 101
  • 50 GB thin provisioned disk, bootable
  • Boot sequence: PXE, Hard Disk

I noted the MAC address: 00:1a:4a:7f:6a:38

And ran:

[root@msi-sat62 ~]# hammer host create --hostgroup "$HG1" \
   --name "msi-provisiontest1" --mac "00:1a:4a:7f:6a:38" \
   --root-password "redhat00" \
   --organization "${ORG}" --location "${LOC}"

Host created

This host deployed charmingly well.


We are now able to set up a Satellite server in a roughly 80% automated way. The result is a Satellite 6.2 up and running and able to provision an existing server.

There are still quite some pitfalls, and I believe all this needs quite some tweaking.

Update: I’m now working on an “Ansible only” solution to set up a Satellite server, but thought at least the script might help someone.

My personal look at the German eID system (“Neuer Personalausweis”)


Business Problem

Many business processes in Germany involve paper (or rather, TONS OF PAPER!) and surely many manual steps: think of opening a bank account or registering a car at your local “Zulassungsstelle”. In my opinion, one of the main reasons for that is that the identity of a user cannot be properly verified online. You could now argue that things like video identification or Deutsche Post PostIdent came up to address this problem. However, these only solve part of the problem, since the signature still needs to be done manually.

In Germany the so-called nPA (neuer Personalausweis) is able to solve this problem by providing a qualified signature, so you will be able to digitally sign contracts online. Therein lies the potential to completely transform tons of paper-based processes. And huge amounts of time and money could be saved as well!


Use cases of the eID system

The nPA has two main functions, “Identification with Online-Ausweisfunktion” and “Electronic Signature”, which allow many exciting use cases to be implemented. These range from simple verifications (like age checks or address validation) to login mechanisms for websites (the nPA can be considered a single-sign-on system in this context). Moreover, the nPA also allows applying a qualified digital signature to documents, which is equal to a genuine signature (according to German law).

Since its launch in 2010, a couple of federal institutions and enterprises have made their services ready for the nPA:

  • ElsterOnline (German tax)
  • Rentenkonto online (German pension fund)
  • Punkteauskunft aus dem Verkehrszentralregister (VZR)
  • UrkundenService
  • Allianz Kundenportal

A complete list of applications can be found here. However, from my perception the adoption still leaves a lot of room for improvement.

Architectural overview

There is extensive documentation available which describes the technical architecture behind the eID system (personally I recommend the information from the BSI found here). That is why I do not want to go into the nitty-gritty details.

However, to give you a rough understanding, have a look at the following illustration, which looks similar to what is available in token-based authentication systems (think of SAML and/or OpenID Connect concepts). There is something like a service provider (“WebServer”) who wants to protect a service; then an authority that is able to validate the identity (“eID-Server”); and a login component (“AusweisApp”) that allows the end user to enter login information like a PIN. Last but not least, the user must have a card reader connected to his local system, which talks to the login component (“AusweisApp”).


It is important to understand that the login component (“AusweisApp”) is implemented as a standalone application, which must run on the user’s computer (and of course be installed beforehand). For 2017 it is planned to release mobile versions of the app (see Google Play Store) in order to use a mobile device as a card reader. In my opinion this will help to reduce the overall complexity from an end user’s perspective.

When looking at the system from a service provider’s point of view (e.g. I am an online shop provider who wants to enable users to log in with their nPA), you have to consider a lot of things. Since there is neither a public instance of the “eID-Server” nor source code available, you have two options: create your own implementation based on the BSI spec or buy the service from a provider. Additionally, you will have to think about how to integrate the token into your application: since there is no “reference implementation” of the “eID-Server” spec, there is little to no documentation available. Overall the process feels rather complex and intransparent to me.

A detailed description of the application process can be found here: “Become Service Provider”.


The opportunity behind the German eID system is really huge and could speed up lots of processes and make all of our lives easier. But in my opinion there are a lot of things hindering the adoption and success of the system:

  1. There is no public eID-Server instance that can be used by public and private institutions. This makes the adoption unnecessarily complicated because all service providers have to find a solution for themselves.
  2. Little documentation is available for service providers. Instead there are only tons of specs that leave a lot of work to the service provider.
  3. Many services require that you map your eID to the identity in their system (at least once). This makes the process very uncomfortable for the end user.
  4. Currently an external card reader is needed. Firstly it has to be bought by the end user and secondly this does not work on the go. Fortunately this caveat has already been addressed with the mobile app version.

My final thoughts: the adoption cannot be forced by laws. Instead, I think that the eID system should be developed in a more transparent and community based manner. Moreover the integration by service providers should be as easy as putting a social login on my personal website.