jBPM 6.5.0.Final available

While we have been working on jBPM v7 for quite a while now, we still wanted to deliver a few more features that were requested by users.

You can find all information here:

Ready to give it a try but not sure how to start?  Take a look at the jbpm-installer chapter.

So on top of a bunch of bug fixes, you can expect the following new features:

Core Engine

Process instance migration

Process instance migration allows you to upgrade an already active process instance to a newer version of the process definition than the one it was started with. Optionally, it lets you map active node instances within the process instance to accommodate use cases where the currently active nodes have changed. The jBPM services have been extended with a new, more powerful API, and the same functionality is available remotely through the kie-server API.
JMS interaction patterns
When using the remote API of our kie-server, the JMS version now also supports different interaction patterns (on top of the request-response already supported):
  • fire and forget
  • asynchronous with callback
Task variables support in listeners

Added operations to easily get access to task variables from within task listeners.

Remote API improvements for deployments

Additional operations have been added to the remote API to simplify integration: operations to get deployment information of your projects based on their group, artifact and/or version (GAV).

Process Designer

Improved automation importing service tasks in Designer

You can import custom service tasks from a service repository into Designer so they can be used in your process, like for example Twitter, FTP, etc. The workbench now automates a lot of the additional configuration as well:
  •     Installs the service configuration (wid) into the user's Workbench project
  •     Installs the service icon (defined in the service configuration)
  •     Installs the service maven dependencies into the project POM
  •     Installs the service default handler into the project Deployment Descriptor
Using start-up parameters, you can also register default service repositories and even install service tasks by default for new projects. More details are available in the documentation.


You can now also perform copy/paste operations across different processes.


Using workbench and kie-server together

Various small improvements allow you to use the workbench together with (one or more) kie-server execution servers to manage your process instances and tasks (sharing the same underlying datasource). As a result, processes and tasks created on one of the execution servers can now be managed in the workbench UI as well.

The jbpm-installer is now configured out-of-the-box to have a managed kie-server deployed next to it where you can deploy your processes to as well.

Support for enums in data modeler
The data modeler now supports selecting enums as the type when defining the parameters of a data object.


Various components have been added / upgraded:
  • Upgraded to WildFly 10
  • Added support for EAP 7
  • Upgraded to Spring 4
The jbpm-installer now uses WildFly 10.0.0.Final as the default. 
Enjoy !

London JBug: v7 Roadmap (November 22nd)

On November 22nd, we will be doing a JBug in London where we will be showing (live) what's on our roadmap for v7 for Drools, jBPM, Optaplanner etc.

This will include for example details on some of our key initiatives:
  • Case Management
  • The new Process Designer 
  • Our new Rich Client Platform
  • Improved Forms and Page building 
  • Improved Advanced Decision Tables and new Decision Model Notation
  • Fully integrated DashBuilder reporting 
  • New OptaPlanner features & performance improvements 

If you're interested, please register here.
Apparently there will be beer and pizza as well :)

Other team members will try to set up similar JBugs in a location near you, so stay tuned for more info if you would like to have a sneak peek as well!

Enterprise Container Platform in the Cloud: OpenShift on Azure secured by Azure AD




This article is a collaboration from Rolf Masuch (Microsoft) and Keith Tenzer (Red Hat). It is based on our work together in the field with enterprise customers.

In this article we will explore how to deploy a production-ready OpenShift enterprise container platform on the Microsoft Azure cloud. The entire deployment is completely automated using Ansible and ARM (Azure Resource Manager). Everything is template driven using APIs. The benefit of this approach is the ability to build up and tear down a complete OpenShift environment in the Azure cloud before your coffee gets cold.

Since OpenShift already uses Ansible as its installation and configuration management tool, it made sense to stick with Ansible as opposed to using other tools such as PowerShell. A Red Hat colleague, Ivan McKinley, created an Ansible playbook that builds out all the required Azure infrastructure components and integrates the existing OpenShift installation playbook. The result is an optimally configured OpenShift environment on the Azure cloud. We have used this recipe to deploy real production environments for customers, and it leverages both Microsoft and Red Hat best practices.

You can access and contribute improvements to the Ansible playbook under Ivan’s Github repository:


The following related articles might also be of interest in case you want a basic understanding of OpenShift.

The pre-requisites for deploying OpenShift on Azure are a valid OpenShift subscription and a valid Azure subscription.

  • If you don’t already have an OpenShift subscription you can purchase one or get an eval by talking to your partner or Red Hat account manager.
  • If you don’t already have a Microsoft Azure Subscription you can start one here.

Deploying to Azure

Install a Fedora 24 workstation for use as the deployment workstation. You need a very recent version of Python 2.7 (2.7.12), which unfortunately wasn’t available in RHEL or CentOS at the time of writing, so we used Fedora.

Install Python and Dependencies

# sudo yum install python
# sudo yum install python-pip
# sudo dnf install python-devel
# sudo dnf install redhat-rpm-config
# sudo dnf install openssl-devel

Install Azure CLI

# sudo dnf install npm
# sudo npm install azure-cli -g
# sudo pip install --upgrade pip
# sudo pip install azure==2.0.0rc5

Authenticate Azure CLI

[ktenzer@ktenzer ansible-azure]$ azure login
 info: Executing command login
 info: To sign in, use a web browser to open the page https://aka.ms/devicelogin. Enter the code CB8P5ZCKP to authenticate.
 info: Added subscription Pay-As-You-Go
 info: Added subscription ITS - RedHat Openshift
 info: Setting subscription "Pay-As-You-Go" as default
 info: login command OK

List Azure Resource Groups

In order to list resource groups you need your Azure subscription id. You can view this by logging into Azure portal with your user.

[ktenzer@ktenzer ansible-azure]$ azure group list --subscription <subscription id>
 info: Executing command group list
 + Listing resource groups
 data: Name Location Provisioning State Tags:
 data: ------------- ---------- ------------------ -----
 data: OpenShift_POC westeurope Succeeded null
 data: Shared westeurope Succeeded null
 info: group list command OK
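If you prefer to stay on the command line, you can also list your subscriptions (and their ids) with the same Azure CLI and make one the default; a quick sketch:

[ktenzer@ktenzer ansible-azure]$ azure account list
[ktenzer@ktenzer ansible-azure]$ azure account set "Pay-As-You-Go"

The subscription name "Pay-As-You-Go" is just the one from the login output above; use whichever subscription you want to deploy into.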

Install Ansible Core

# sudo dnf install ansible

Clone OpenShift Azure Ansible Playbooks

# git clone https://github.com/ivanthelad/ansible-azure.git

Update Playbook parameters

# cd ansible-azure
# cp group_vars/all_example group_vars/all
# vi group_vars/all
resource_group_name: <new resource group name>
## Azure AD user.
ad_username: <Azure user e.g. keith.tenzer@domain.onmicrosoft.com>
### Azure AD password
ad_password: <Azure Password>
#### Azure Subscription ID
subscriptionID: "<subscription id from Azure>"
## user to login to the jump host. this user will only be created on the jumphost
adminUsername: <username e.g. ktenzer>
## user pwd for jump host
## Password for the jump host
adminPassword: <password>
##### Public key for jump host
### Access to environment only allowed through jumphost
sshkey: <ssh key e.g. cat /home/ktenzer/.ssh/id_rsa.pub>

# see https://azure.microsoft.com/en-us/documentation/articles/cloud-services-sizes-specs/
### Size for the master
master_vmSize: Standard_DS3_v2
#master_vmSize: Standard_D2_v2
#master_vmSize: Standard_D1_v2

### Size for the nodes
node_vmSize: Standard_DS3_v2
#node_vmSize: Standard_D2_v2
#node_vmSize: Standard_D1_v2

#### Region to deploy in
region: westeurope

## docker info
docker_storage_device: /dev/sdc
create_vgname: docker_vg
filesystem: 'xfs'
create_lvsize: '80%FREE'
#create_lvsize: '2g'

#### subscription information
rh_subscription_user: <Red Hat Subscription User>
rh_subscription_pass: <Red Hat Subscription Password>
openshift_pool_id: <Red Hat Subscription Pool Id>

########### list of node ###########
### Warning, you currently cannot create more infra nodes ####
### this will change in the future
### You can add as many nodes as you want
 - name: jumphost1
   region: westeurope
   zone: jumphost
   stage: jumphost

 - name: master1
   region: westeurope
   zone: infra
   stage: none
 - name: master2
   region: westeurope
   zone: infra
   stage: none
 - name: master3
   region: westeurope
   zone: infra
   stage: none

 - name: infranode1
   region: westeurope
   zone: infra
   stage: dev
 - name: node1
   region: westeurope
   zone: app
   stage: dev
 - name: node2
   region: westeurope
   zone: app
   stage: dev

Run Ansible Playbook

# ansible-playbook --forks=50 -i inventory.azure playbooks/setup_multimaster.new.yml
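Before kicking off a full deployment it can be worth letting Ansible validate the playbook and show which hosts it would touch; a small sketch using standard ansible-playbook flags:

# ansible-playbook -i inventory.azure playbooks/setup_multimaster.new.yml --syntax-check
# ansible-playbook -i inventory.azure playbooks/setup_multimaster.new.yml --list-hosts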

Connect to OpenShift environment

In order to connect to the OpenShift environment you need to go through the jump host. The public IP of the jump host is set during the playbook run; simply look at the playbook output to get it.
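If you missed it in the playbook output, the public IP can usually also be looked up through the Azure CLI (a quick sketch, assuming the resource group name you configured in group_vars/all):

[ktenzer@ktenzer ansible-azure]$ azure network public-ip list <resource group name>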

Connect to jumphost

# ssh -i /home/ktenzer/.ssh/id_rsa ktenzer@<jumphost public IP>

Connect to master1

There are three masters and you can connect to and manage the environment from any of them.

[ktenzer@jumphost1 ~]$ ssh master1

Login as built-in system:admin user

[ktenzer@master1 ~]$ oc login -u system:admin

List OpenShift nodes

[ktenzer@master1 ~]$ oc get nodes
NAME                                                       STATUS AGE
infranode1.KgsZ98734738nshjdsj2.ax.internal.cloudapp.net   Ready  34d
master1.KgsZ98734738nshjdsj2.ax.internal.cloudapp.net      Ready 34d
master2.KgsZ98734738nshjdsj2.ax.internal.cloudapp.net      Ready 34d
master3.KgsZ98734738nshjdsj2.ax.internal.cloudapp.net      Ready 34d
node1.KgsZ98734738nshjdsj2.ax.internal.cloudapp.net        Ready 34d
node2.KgsZ98734738nshjdsj2.ax.internal.cloudapp.net        Ready 34d

The deployment defaults to a highly available, three-master OpenShift cluster. It also deploys three nodes: one for infrastructure and the other two for applications. Ideally you would want a second infrastructure node so infrastructure services such as routing, image registry, logging and metrics are also highly available. Adding additional nodes, or changing the OpenShift configuration, is simply a matter of updating the playbook and re-running it.

In order to access the UI get the public URL from the master configuration file. When accessing the public URL traffic is balanced across all three OpenShift master servers.

[root@master1 ~]# cat /etc/origin/master/master-config.yaml |grep masterPublicURL 
masterPublicURL: https://master-<Resource Group>.westeurope.cloudapp.azure.com:8443

Azure AD Authentication

OpenShift, like most platforms, has two layers that govern access to the environment: authentication grants a user access to the platform, and authorization gives the user privileges within the environment. OpenShift supports many authentication providers. In the case of Azure AD, OpenID is used for authentication and OpenShift offloads authentication entirely to Azure AD. Any user permitted to access the OpenShift authentication application in Azure AD is allowed to log in to the OpenShift environment. Once an Azure AD user authenticates to OpenShift, privileges can be granted at the project level to allow the user access within the environment. We will first configure Azure AD by setting up an authentication application and allowing access within Azure. Once that is configured, we will go through the steps to make OpenShift use the Azure AD authentication application. Finally, we will show how user management works within OpenShift when using Azure AD as the authentication provider.

Configure Azure AD Application for OpenShift

The following steps describe how to create an Azure WebApp as your authentication barrier, before accessing the OpenShift Admin portal.

Since you have already installed OpenShift into Azure, the normal disclaimers about getting a subscription do not apply here. In case you are just reading along and want to do more in Azure, this is the link: http://www.Azure.com

Create Application

To create the application, log in to the Azure portal https://portal.azure.com and click on the plus symbol (1), then on Web + mobile (2) and finally on Web App (3).


This will open another part of the screen that needs to be filled in with further details about the new Web App. These parts are called blades and are used in the Azure portal to present details and configuration options. The first box takes (1) the application name, which will become part of the DNS name of the application. It needs to be unique within the namespace of azurewebsites.net. Your subscription in the following dropdown should be pre-filled. The next option, Resource Group, should be set to “Use existing” in the dropdown (2) and should point to the resource group where your OpenShift installation resides. Special attention is needed for the App Service plan and its location (3). The default location may need adjustment. Click on the tile to configure your service plan. After that click on Create at the bottom of the blade to create your Web App. You may want to tick the checkbox “Pin to dashboard” for easier access later.


Enable Authentication for Application

When your application has been deployed click on it to access its blade and continue with the configuration by clicking on Authentication/Authorization in the settings.


The right side of the blade will change and you can click on the button for App Service Authentication (1) to turn it on. The dropdown for the Authentication providers should show “Log in with Azure Active Directory” (2). Click on the tile below (3) to configure the Azure Active Directory details.


In the Azure Active Directory Settings dialog select the Management mode option (1) “Express”. The Current Active Directory (2) should read your own Azure Active Directory name. In the lower part of the dialog the button “Create New AD App” is pre-selected and you can provide an additional application name under “Create App”. By default it can be the same as the Web App name. After that click on OK at the bottom of the blade.



Don’t forget to click on Save at the top of the blade.


Now that the Web App part is finished you can close all the blades and return to the dashboard. The next steps are done in the new Azure Active Directory blade.


Configure Azure AD for Application

When you click on it, the new blade should open with your existing Azure Active Directory details and the App registrations tile should read “1”. Click on that tile to continue.


Click on Endpoints in order to retrieve the Azure Tenant Id. It is the number after the https://login.windows.net part. Take note of the tenant id as it is needed in the OpenShift configuration. Click on the Application you configured in the step before to access its details and configuration options.


In the settings blade of your app click on Properties (1) to access the application's Application ID. You need this information as well to configure OpenShift to make use of Azure AD authentication.



Close the Properties blade and click on Keys (2) in the Settings blade to create an individual access key. This key is also needed for the OpenShift configuration. Add a key description (1), select a duration of one or two years or even an unlimited lifetime under (2) and click on Save. Pay special attention to the warning displayed about the key that is now shown in the value field (4). Copy that key and verify that it is stored properly somewhere else before leaving that blade!



In general, when you add the gathered information into OpenShift, all access to the URL of the installation will now be verified through Azure Active Directory authentication. In case you want to limit the usage to a certain group of users or even individual users you can configure this also but currently only in the Classic Azure portal under https://manage.windowsazure.com.

You may have to sign out and sign in again to access the portal. Once you have signed in, scroll to the Azure Active directory icon and click on the name of your Azure AD to access the dashboard.


Enable User Access to Applications Azure AD

Click on APPLICATIONS (1), search (2) for your application in case there are more than one and click on your applications name (3) to go to its properties.


In the properties click on “CONFIGURE” and toggle the button “USER ASSIGNMENT REQUIRED TO ACCESS APP” to “YES” and click on “SAVE” at the bottom of the page.



Now that the application is toggled, users and/or groups need to be assigned to it. You configure that under “USERS AND GROUPS” on the same page.


Select one of your users, click on assign at the bottom of the page and confirm the dialog. Verify the access by opening a new browser window in private mode and navigate to the OpenShift URL. You should be prompted with an authentication dialog from your Azure AD.




Configure OpenShift for Azure AD Authentication

OpenShift supports many identity providers. In this case we are using OpenId to authenticate users against Azure AD. In order to configure Azure AD authentication in OpenShift the following information from Azure AD is required:

  • Tenant Id – The tenant id from any of the endpoint URLs
  • Client Id – The Application Id from Azure AD Application
  • Client Secret Key – The key that was created for the Application in Azure AD

Configure OpenId Provider

This information should have been captured in the steps above. On all three OpenShift masters, edit the master configuration and add the openId provider under identityProviders in the oauthConfig section.

[root@master1 ~]# vi /etc/origin/master/master-config.yaml
identityProviders:
- name: openId
  challenge: false
  login: true
  mappingMethod: claim
  provider:
    apiVersion: v1
    kind: OpenIDIdentityProvider
    clientID: <client id>
    clientSecret: <client secret key>
    claims:
      id:
      - sub
      preferredUsername:
      - preferred_username
      name:
      - name
      email:
      - email
    urls:
      authorize: https://login.microsoftonline.com/<tenant id>/oauth2/authorize
      token: https://login.microsoftonline.com/<tenant id>/oauth2/token
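Since master-config.yaml is indentation sensitive, it can be worth validating that the file still parses before restarting the masters. A quick check, assuming PyYAML is available on the master (it normally is on OpenShift 3.x hosts):

[root@master1 ~]# python -c 'import yaml; yaml.safe_load(open("/etc/origin/master/master-config.yaml")); print "OK"'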

Restart OpenShift on all masters.

[root@master1 ~]# systemctl restart atomic-openshift-master-api
[root@master1 ~]# systemctl restart atomic-openshift-master-controllers
[root@master2 ~]# systemctl restart atomic-openshift-master-api
[root@master2 ~]# systemctl restart atomic-openshift-master-controllers
[root@master3 ~]# systemctl restart atomic-openshift-master-api
[root@master3 ~]# systemctl restart atomic-openshift-master-controllers

Login as Azure AD user

Using the GUI

Verify the access by opening a new browser window in private mode and navigate to the OpenShift URL https://master-<Resource Group>.westeurope.cloudapp.azure.com:8443. You should be prompted with an authentication dialog of your Azure AD.

Using the CLI

[root@master1 ~]# oc login -u Keith_Tenzer@mydomain.onmicrosoft.com -n default
Login failed (401 Unauthorized)
You must obtain an API token by visiting https://master-<resource group>.westeurope.cloudapp.azure.com:8443/oauth/token/request


[root@master1 ~]# oc login --token=0h9tKLlTicyAr5dYIgW5xiejiMVHIvltEuX6LLVW8CY --server=https://master-<resource group>.westeurope.cloudapp.azure.com:8443
 Logged into "https://master-<resource group>.westeurope.cloudapp.azure.com:8443" as "EYnyqEyOzGFzNCI45feUyvGdEoNapPAD03tg5MWkFXM" using the token provided.

Notice that the user actually known to OpenShift is “FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ”, not Keith_Tenzer@mydomain.onmicrosoft.com.

List OpenShift users

Once Azure AD users log in, they are automatically added to OpenShift.

[root@master1 ~]# oc get users
 NAME                                        UID                                  FULL NAME    IDENTITIES
 FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ  08bfec9f-79ff-11e6-81bb-000d3a256909 Keith Tenzer openId:FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ

List OpenShift identities

[root@master1 ~]# oc get identity
NAME                                               IDP NAME IDP USER NAME                               USER NAME                                   USER UID
openId:FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ  openId   FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ  FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ  08bfec9f-79ff-11e6-81bb-000d3a256909

Give Azure AD user cluster-admin permission

[root@master1 ~]# oadm policy add-cluster-role-to-user cluster-admin "FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ"

Give user permission to project

[root@master1 ~]# oadm policy add-role-to-user admin "FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ" -n test

Remove Azure AD user from OpenShift

Delete user

[root@master1 ~]# oc delete user FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ

Delete identity

[root@master1 ~]# oc delete identity openId:FGnyqZlyGFzNCI99feUypSdEoNapBAJ03tg5MWlDSZ


In this article we have detailed how to stand up a production-ready OpenShift environment in the Azure cloud automatically, before your coffee gets cold. We have also seen how to integrate OpenShift with Azure AD for authentication. There is no doubt, container platforms are the future for applications. Container technology enables applications to be developed and deployed at much faster speeds, greatly improving an organization's ability to innovate and connect to its market segment. Many enterprises are interested in getting their hands on these technologies and evaluating them rather quickly. There is no faster way than standing up OpenShift in the Azure cloud! We hope you found this article interesting and look forward to your feedback.

Happy OpenShifting in the Azure Cloud!

(c) 2016 Keith Tenzer and Rolf Masuch

Consciously Uncoupling from your Cloud Provider with Ansible


For those that don’t pay attention to Hollywood news – Conscious uncoupling – a five-step process to “end your romantic union in honorable, respectful, and gracious ways” was made popular by Chris Martin and Gwyneth Paltrow. I submit that conscious uncoupling might be something companies should consider when it comes to their cloud provider. After all, it’s not that you aren’t grateful and don’t respect your cloud provider, but why commit to a single cloud provider when there are so many fish in the sea!

All Hollywood references aside, uncoupling from a proprietary platform is something the Global Partner Technical Enablement (GPTE) team at Red Hat is undertaking. In 2013, the GPTE team began building a learning platform to allow sales engineers, consultants, and select partners to perform hands-on technical training in order to understand how to demonstrate and implement Red Hat's growing portfolio. The GPTE team began using Ravello Systems in order to deploy virtual environments for trainees. Ravello Systems provided capabilities such as nested virtualization, overlay networking and storage that were needed for much of the Red Hat portfolio (particularly the infrastructure technologies) to function properly. The team used Red Hat CloudForms to provide self-service with automatic retirement of applications on the Ravello System.

Fast forward to 2016 and the GPTE team’s Red Hat Product Demo System (RHPDS) runs several hundred applications that are comprised of thousands of virtual machines concurrently. It has been used to teach thousands of Red Hatters and partners in the field how to demonstrate and implement Red Hat’s technologies successfully. However, there have been two key challenges with this system.

First, the Ravello System uses a concept called a blueprint to describe an environment. The blueprint is a concept native to the Ravello System and not something that is supported on any cloud provider. Any logic put into the blueprint is by definition not portable or usable by other teams at Red Hat that don’t use the Ravello system. This runs counter to the culture of open source at Red Hat and does not allow contribution to the demonstrations and training environments to flow back from participants. The team needed to turn participants into contributors.

Second, demonstrations developed on the Ravello system were limited to running only on the Ravello system and could not be deployed in other labs across Red Hat, or even customer environments. This severely limited buy-in from other groups that had their own labs or otherwise felt more comfortable learning in their own way. Many field engineers at Red Hat and partners run OpenStack, oVirt (RHEV), VMware, or other virtualization platform in their labs. These users should be able to deploy demonstration and training environments on the provider of their choice. The team needed to allow re-use of demonstrations and training environments across cloud providers.

The GPTE team wanted to address these two issues in order to increase reuse and spur contribution to the demonstrations themselves from the community of sales engineers. They found an answer in Ansible. By including Ansible in the GPTE platform it will be possible to separate the configuration of the blueprint in the Ravello system from the configuration of the demonstration or training environment. It will also allow automated update of the environments in Ravello any time a code change is made. The result – field engineers can re-use any of the demonstrations and training environments created on the GPTE platform in their own labs and can even share fixes or improvements back. This small change will lead to a greater amount of user acceptance and lower the burden of building and maintaining the technical enablement platform at Red Hat.

If you are interested in learning more about how the GPTE team is consciously uncoupling themselves from a proprietary description and automated the process or if you are interested in deploying the demonstrations in your own lab please check out the Red Hat Demos repository on GitHub where our work is in progress. Contributions welcome!

OpenStack: Integrating Ceph as Storage Backend




In this article we will discuss why Ceph is a perfect fit for OpenStack. We will see how to integrate three prominent OpenStack use cases with Ceph: Cinder (block storage), Glance (images) and Nova (VM virtual disks).

Ceph provides unified scale-out storage, using commodity x86 hardware, that is self-healing and intelligently anticipates failures. It has become the de facto standard for software-defined storage. Ceph, being an open source project, has enabled many vendors to provide Ceph-based software-defined storage systems. Ceph is not limited to companies like Red Hat, SUSE, Mirantis, Ubuntu, etc.; integrated solutions from SanDisk, Fujitsu, HP, Dell, Samsung and many more exist today. There are even large-scale community-built environments (CERN comes to mind) that provide storage services for tens of thousands of VMs.

Ceph is by no means limited to OpenStack, however this is where Ceph started gaining traction. Looking at the latest OpenStack user survey, Ceph is by a large margin the clear leader for OpenStack storage. Page 42 of the OpenStack April 2016 user survey reveals Ceph is 57% of OpenStack storage. Next is LVM (local storage) with 28%, followed by NetApp with 9%. If we remove LVM, Ceph leads any other storage company by 48%. That is incredible. Why is that?

There are several reasons but I will give you my top three:

  • Ceph is a scale-out unified storage platform. OpenStack needs two things from storage: ability to scale with OpenStack itself and do so regardless of block (Cinder), File (Manila) or Object (Swift). Traditional storage vendors need to provide two or three different storage systems to achieve this. They don’t scale the same and in most cases only scale-up in never-ending migration cycles. Their management capabilities are never truly integrated across broad spectrum of storage use-cases.
  • Ceph is cost-effective. Ceph leverages Linux as an operating system instead of something proprietary. You can choose not only whom you purchase Ceph from, but also where you get your hardware. It can be same vendor or different. You can purchase commodity hardware, or even buy integrated solution of Ceph + Hardware from single vendor. There are even hyper-converged options for Ceph that are emerging (running Ceph services on compute nodes).
  • Ceph is OpenSource project just like OpenStack. This allows for a much tighter integration and cross-project development. Proprietary vendors are always playing catch-up since they have secrets to protect and their influence is usually limited in Opensource communities, especially in OpenStack context.

Below is an architecture diagram that shows all the different OpenStack components that need storage. It shows how they integrate with Ceph and how Ceph provides a unified storage system that scales to fill all these use cases.


source: Red Hat Summit 2016

If you are interested in more topics relating to Ceph and OpenStack I recommend the following: http://ceph.com/category/ceph-and-openstack/

Ok, enough talking about why Ceph and OpenStack are so great, let's get our hands dirty and see how to hook it up!

If you don’t have a Ceph environment you can follow this article on how to set one up quickly.

Glance Integration

Glance is the image service within OpenStack. By default images are stored locally on controllers and then copied to compute hosts when requested. The compute hosts cache the images but they need to be copied again, every time an image is updated.

Ceph provides a backend for Glance, allowing images to be stored in Ceph instead of locally on controller and compute nodes. This greatly reduces network traffic for pulling images and increases performance, since Ceph can clone images instead of copying them. In addition, it makes migrating between OpenStack deployments, or concepts like multi-site OpenStack, much simpler.

Install ceph client used by Glance.

[root@osp9 ~]# yum install -y python-rbd

Create Ceph user and set home directory to /etc/ceph.

[root@osp9 ~]# mkdir /etc/ceph
[root@osp9 ~]# useradd ceph
[root@osp9 ~]# passwd ceph

Add ceph user to sudoers.

cat << EOF >/etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
Defaults:ceph !requiretty
EOF

On Ceph admin node.

Create Ceph RBD Pool for Glance images.

[ceph@ceph1 ~]$ sudo ceph osd pool create images 128

Create keyring that will allow Glance access to pool.

[ceph@ceph1 ~]$ sudo ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.images.keyring

Copy the keyring to /etc/ceph on OpenStack controller.

[ceph@ceph1 ~]$ scp /etc/ceph/ceph.client.images.keyring root@osp9.lab:/etc/ceph

Copy /etc/ceph/ceph.conf configuration to OpenStack controller.

[ceph@ceph1 ~]$ scp /etc/ceph/ceph.conf root@osp9.lab:/etc/ceph

Set permissions so Glance can access Ceph keyring.

[root@osp9 ~(keystone_admin)]# chgrp glance /etc/ceph/ceph.client.images.keyring
[root@osp9 ~(keystone_admin)]#chmod 0640 /etc/ceph/ceph.client.images.keyring

Add keyring file to Ceph configuration.

[root@osp9 ~(keystone_admin)]# vi /etc/ceph/ceph.conf
keyring = /etc/ceph/ceph.client.images.keyring

Create backup of original Glance configuration.

[root@osp9 ~(keystone_admin)]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig

Update Glance configuration.

[root@osp9 ~]# vi /etc/glance/glance-api.conf
stores = glance.store.rbd.Store
default_store = rbd
rbd_store_pool = images
rbd_store_user = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf

Restart Glance.

[root@osp9 ~(keystone_admin)]# systemctl restart openstack-glance-api

Download the Cirros image and add it to Glance.

[root@osp9 ~(keystone_admin)]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Convert QCOW2 to RAW. It is recommended for Ceph to always use RAW format.

[root@osp9 ~(keystone_admin)]#qemu-img convert cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64-disk.raw

Add image to Glance.

[root@osp9 ~(keystone_admin)]#glance image-create --name "Cirros 0.3.4" --disk-format raw --container-format bare --visibility public --file cirros-0.3.4-x86_64-disk.raw
| Property | Value |
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2016-09-07T12:29:23Z |
| disk_format | qcow2 |
| id | a55e9417-67af-43c5-a342-85d2c4c483f7 |
| min_disk | 0 |
| min_ram | 0 |
| name | Cirros 0.3.4 |
| owner | dd6a4ed994d04255a451da66b68a8057 |
| protected | False |
| size | 13287936 |
| status | active |
| tags | [] |
| updated_at | 2016-09-07T12:29:27Z |
| virtual_size | None |
| visibility | public |

Check that glance image exists in Ceph.

[ceph@ceph1 ceph-config]$ sudo rbd ls images
[ceph@ceph1 ceph-config]$ sudo rbd info images/a55e9417-67af-43c5-a342-85d2c4c483f7
rbd image 'a55e9417-67af-43c5-a342-85d2c4c483f7':
 size 12976 kB in 2 objects
 order 23 (8192 kB objects)
 block_name_prefix: rbd_data.183e54fd29b46
 format: 2
 features: layering, striping
 stripe unit: 8192 kB
 stripe count: 1

Cinder Integration

Cinder is the block storage service in OpenStack. Cinder provides an abstraction around block storage and allows vendors to integrate by providing a driver. With Ceph, each storage pool can be mapped to a different Cinder backend. This allows for creating storage services such as gold, silver or bronze. You can decide, for example, that gold should be fast SSD disks that are replicated three times, while silver is only replicated twice and bronze uses slower disks with erasure coding.
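As a rough sketch of what such tiers could look like (the backend names and pools here are illustrative, not part of the steps that follow), each Ceph pool gets its own backend section in cinder.conf and is selected through a volume type:

[DEFAULT]
enabled_backends = ceph-gold,ceph-bronze

[ceph-gold]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-gold
rbd_pool = volumes-gold
rbd_user = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <secret uuid>

[ceph-bronze]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-bronze
rbd_pool = volumes-bronze
rbd_user = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <secret uuid>

A volume type then maps to a backend, for example cinder type-create gold followed by cinder type-key gold set volume_backend_name=ceph-gold. The remainder of this section configures a single volumes pool as the default backend.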

Create Ceph pool for cinder volumes.

[ceph@ceph1 ~]$ sudo ceph osd pool create volumes 128

Create keyring to grant cinder access.

[ceph@ceph1 ~]$ sudo ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' -o /etc/ceph/ceph.client.volumes.keyring

Copy keyring to OpenStack controllers.

[ceph@ceph1 ~]$ scp /etc/ceph/ceph.client.volumes.keyring root@osp9.lab:/etc/ceph

Create file that contains just the authentication key on OpenStack controllers.

[ceph@ceph1 ~]$ sudo ceph auth get-key client.volumes |ssh osp9.lab tee client.volumes.key

Set permissions on keyring file so it can be accessed by Cinder.

[root@osp9 ~(keystone_admin)]# chgrp cinder /etc/ceph/ceph.client.volumes.keyring
[root@osp9 ~(keystone_admin)]# chmod 0640 /etc/ceph/ceph.client.volumes.keyring

Add keyring to Ceph configuration file on OpenStack controllers.

[root@osp9 ~(keystone_admin)]#vi /etc/ceph/ceph.conf

keyring = /etc/ceph/ceph.client.volumes.keyring

Give KVM Hypervisor access to Ceph.

[root@osp9 ~(keystone_admin)]# uuidgen |tee /etc/ceph/cinder.uuid.txt

Create a secret in virsh so KVM can access Ceph pool for cinder volumes.

[root@osp9 ~(keystone_admin)]#vi /etc/ceph/cinder.xml

<secret ephemeral="no" private="no">
  <uuid>ce6d1549-4d63-476b-afb6-88f0b196414f</uuid>
  <usage type="ceph">
    <name>client.volumes secret</name>
  </usage>
</secret>
[root@osp9 ceph]# virsh secret-define --file /etc/ceph/cinder.xml
[root@osp9 ~(keystone_admin)]# virsh secret-set-value --secret ce6d1549-4d63-476b-afb6-88f0b196414f --base64 $(cat /etc/ceph/client.volumes.key)

Add Ceph backend for Cinder.

[root@osp9 ~(keystone_admin)]#vi /etc/cinder/cinder.conf

volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = volumes
rbd_secret_uuid = ce6d1549-4d63-476b-afb6-88f0b196414f

Restart Cinder service on all controllers.

[root@osp9 ~(keystone_admin)]# openstack-service restart cinder

Create a cinder volume.

[root@osp9 ~(keystone_admin)]# cinder create --display-name="test" 1
| Property | Value |
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-09-08T10:58:17.000000 |
| description | None |
| encrypted | False |
| id | d251bb74-5c5c-4c40-a15b-2a4a17bbed8b |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | test |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | dd6a4ed994d04255a451da66b68a8057 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | None |
| user_id | 783d6e51e611400c80458de5d735897e |
| volume_type | None |

List new cinder volume

[root@osp9 ~(keystone_admin)]# cinder list
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
| d251bb74-5c5c-4c40-a15b-2a4a17bbed8b | available | test | 1 | - | false | |

List cinder volume in ceph.

[ceph@ceph1 ~]$ sudo rbd ls volumes
[ceph@ceph1 ~]$ sudo rbd info volumes/volume-d251bb74-5c5c-4c40-a15b-2a4a17bbed8b
rbd image 'volume-d251bb74-5c5c-4c40-a15b-2a4a17bbed8b':
 size 1024 MB in 256 objects
 order 22 (4096 kB objects)
 block_name_prefix: rbd_data.2033b50c26d41
 format: 2
 features: layering, striping
 stripe unit: 4096 kB
 stripe count: 1

Integrating Ceph with Nova Compute

Nova is the compute service within OpenStack. By default, Nova stores the virtual disk images associated with running VMs locally on the hypervisor, under /var/lib/nova/instances. There are a few drawbacks to using local storage on compute nodes for virtual disk images:

  • Images are stored under the root filesystem. Large images can cause the filesystem to fill up, thus crashing compute nodes.
  • A disk crash on a compute node could cause the loss of the virtual disks, making recovery of the VMs impossible.

Ceph is one of the storage backends that can integrate directly with Nova. In this section we will see how to configure that. First, create a Ceph pool for Nova virtual machine disks (vms).

[ceph@ceph1 ~]$ sudo ceph osd pool create vms 128

Create authentication keyring for Nova.

[ceph@ceph1 ~]$ sudo ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.nova.keyring

Copy keyring to OpenStack controllers.

[ceph@ceph1 ~]$ scp /etc/ceph/ceph.client.nova.keyring root@osp9.lab:/etc/ceph

Create key file on OpenStack controllers.

[ceph@ceph1 ~]$ sudo ceph auth get-key client.nova |ssh osp9.lab tee client.nova.key

Set permissions on keyring file so it can be accessed by Nova service.

[root@osp9 ~(keystone_admin)]# chgrp nova /etc/ceph/ceph.client.nova.keyring
[root@osp9 ~(keystone_admin)]# chmod 0640 /etc/ceph/ceph.client.nova.keyring

Ensure the required rpm packages are installed.

[root@osp9 ~(keystone_admin)]# yum list installed python-rbd ceph-common
Loaded plugins: product-id, search-disabled-repos, subscription-manager
Installed Packages
ceph-common.x86_64 1:0.94.5-15.el7cp @rhel-7-server-rhceph-1.3-mon-rpms
python-rbd.x86_64 1:0.94.5-15.el7cp @rhel-7-server-rhceph-1.3-mon-rpms

Update Ceph configuration.

[root@osp9 ~(keystone_admin)]#vi /etc/ceph/ceph.conf

keyring = /etc/ceph/ceph.client.nova.keyring

Give KVM access to Ceph.

[root@osp9 ~(keystone_admin)]# uuidgen |tee /etc/ceph/nova.uuid.txt

Create a secret in virsh so KVM can access the Ceph vms pool used for Nova virtual machine disks.

[root@osp9 ~(keystone_admin)]#vi /etc/ceph/nova.xml

<secret ephemeral="no" private="no">
  <uuid>c89c0a90-9648-49eb-b443-c97adb538f23</uuid>
  <usage type="ceph">
    <name>client.nova secret</name>
  </usage>
</secret>
[root@osp9 ~(keystone_admin)]# virsh secret-define --file /etc/ceph/nova.xml
[root@osp9 ~(keystone_admin)]# virsh secret-set-value --secret c89c0a90-9648-49eb-b443-c97adb538f23 --base64 $(cat /etc/ceph/client.nova.key)

Make backup of Nova configuration.

[root@osp9 ~(keystone_admin)]# cp /etc/nova/nova.conf /etc/nova/nova.conf.orig

Update Nova configuration to use Ceph backend.

[root@osp9 ~(keystone_admin)]#vi /etc/nova/nova.conf
force_raw_images = True
disk_cachemodes = writeback

images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = c89c0a90-9648-49eb-b443-c97adb538f23

Restart Nova services.

[root@osp9 ~(keystone_admin)]# systemctl restart openstack-nova-compute

List Neutron networks.

[root@osp9 ~(keystone_admin)]# neutron net-list
| id | name | subnets |
| 4683d03d-30fc-4dd1-9b5f-eccd87340e70 | private | ef909061-34ee-4b67-80a9-829ea0a862d0 |
| 8d35a938-5e4f-46a2-8948-b8c4d752e81e | public | bb2b65e7-ab41-4792-8591-91507784b8d8 |

Start an ephemeral VM instance using the Cirros image that was added in the Glance steps above.

[root@osp9 ~(keystone_admin)]# nova boot --flavor m1.small --nic net-id=4683d03d-30fc-4dd1-9b5f-eccd87340e70 --image='Cirros 0.3.4' cephvm
| Property | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | wzKrvK3miVJ3 |
| config_drive | |
| created | 2016-09-08T11:41:29Z |
| flavor | m1.small (2) |
| hostId | |
| id | 85c66004-e8c6-497e-b5d3-b949a1666c90 |
| image | Cirros 0.3.4 (a55e9417-67af-43c5-a342-85d2c4c483f7) |
| key_name | - |
| metadata | {} |
| name | cephvm |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | dd6a4ed994d04255a451da66b68a8057 |
| updated | 2016-09-08T11:41:33Z |
| user_id | 783d6e51e611400c80458de5d735897e |

Wait until the VM is active.

[root@osp9 ceph(keystone_admin)]# nova list
| ID | Name | Status | Task State | Power State | Networks |
| 8ca3e74e-cd52-42a6-acec-13a5b8bda53c | cephvm | ACTIVE | - | Running | private= |

List images in Ceph vms pool. We should now see the image is stored in Ceph.

[ceph@ceph1 ~]$ sudo rbd -p vms ls


Unable to delete Glance images stored in Ceph RBD

When Glance stores an image in Ceph, it creates a protected snapshot of the RBD image. If Glance cannot delete the image, unprotect and remove that snapshot first, then delete the image:

[root@osp9 ceph(keystone_admin)]# nova image-list
| ID | Name | Status | Server |
| a55e9417-67af-43c5-a342-85d2c4c483f7 | Cirros 0.3.4 | ACTIVE | |
| 34510bb3-da95-4cb1-8a66-59f572ec0a5d | test123 | ACTIVE | |
| cf56345e-1454-4775-84f6-781912ce242b | test456 | ACTIVE | |
[root@osp9 ceph(keystone_admin)]# rbd -p images snap unprotect cf56345e-1454-4775-84f6-781912ce242b@snap
[root@osp9 ceph(keystone_admin)]# rbd -p images snap rm cf56345e-1454-4775-84f6-781912ce242b@snap
[root@osp9 ceph(keystone_admin)]# glance image-delete cf56345e-1454-4775-84f6-781912ce242b
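If you are unsure whether an image still has snapshots, you can inspect it in the images pool before running the unprotect/remove steps above (using the image id from the listing):

[root@osp9 ceph(keystone_admin)]# rbd -p images info cf56345e-1454-4775-84f6-781912ce242b
[root@osp9 ceph(keystone_admin)]# rbd -p images snap ls cf56345e-1454-4775-84f6-781912ce242b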


In this article we discussed how OpenStack and Ceph fit perfectly together. We discussed some of the use cases around glance, cinder and nova. Finally we went through steps to integrate Ceph with those use cases. I hope you enjoyed the article and found the information useful. Please share your feedback.

Happy Cephing!

(c) 2016 Keith Tenzer


Ceph 1.3 Lab Installation and Configuration Guide




In this article we will set up a Ceph 1.3 cluster for learning purposes or a lab environment.


Ceph Lab Environment

For this environment you will need three VMs (ceph1, ceph2 and ceph3). Each should have a 20GB root disk and a 100GB data disk. Ceph has three main components: the admin console, monitors and OSDs.

Admin console – UI and CLI used for managing Ceph cluster. In this environment we will install on ceph1.

Monitors – Monitor the health of the Ceph cluster. One or more monitors form a Paxos part-time parliament, providing extreme reliability and durability of cluster membership. Monitors maintain the various maps: monitor, osd, placement group (pg) and crush. Monitors will be installed on ceph1, ceph2 and ceph3.

OSDs – The object storage daemon handles storing data, recovery, backfilling, rebalancing and replication. OSDs sit on top of a disk / filesystem. BlueStore enables OSDs to bypass the filesystem, but it is not an option in Ceph 1.3. An OSD will be installed on ceph1, ceph2 and ceph3.

On all Ceph nodes.

#subscription-manager repos --disable=*
#subscription-manager repos --enable=rhel-7-server-rpms
#subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhceph-1.3-calamari-rpms --enable=rhel-7-server-rhceph-1.3-installer-rpms --enable=rhel-7-server-rhceph-1.3-tools-rpms

Configure firewalld.

sudo systemctl start firewalld
sudo systemctl enable firewalld
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=2003/tcp --permanent
sudo firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload
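To verify the ports were actually added to the public zone:

sudo firewall-cmd --zone=public --list-ports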

Configure NTP.

yum -y install ntp
systemctl enable ntpd.service
systemctl start ntpd

Ensure NTP is synchronizing.

ntpq -p
remote refid st t when poll reach delay offset jitter
 +privatewolke.co 2 u 12 64 1 26.380 -2.334 2.374
 +ridcully.episod 3 u 11 64 1 26.626 -2.425 0.534
 *s1.kelker.info 2 u 12 64 1 26.433 -6.116 1.030
 sircabirus.von- .STEP. 16 u - 64 0 0.000 0.000 0.000

Create a ceph user for deployment.

#useradd ceph
#passwd ceph
#cat << EOF >/etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
Defaults:ceph !requiretty
EOF
#chmod 0440 /etc/sudoers.d/ceph
#su - ceph
#ssh-copy-id ceph@ceph1
#ssh-copy-id ceph@ceph2
#ssh-copy-id ceph@ceph3
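Note that the ssh-copy-id steps above assume the ceph user on the admin node already has an SSH key pair; if it does not, generate one first as the ceph user:

#su - ceph
#ssh-keygen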

Set SELinux to permissive. Ceph 2.0 now supports SELinux, but for 1.3 it was not possible out of the box.

#vi /etc/selinux/config
SELINUX=permissive


Create ceph-config dir.

mkdir ~/ceph-config
cd ~/ceph-config

On Monitors.

#subscription-manager repos --enable=rhel-7-server-rhceph-1.3-mon-rpms
#yum update -y

On OSD Nodes.

#subscription-manager repos --enable=rhel-7-server-rhceph-1.3-osd-rpms
#yum update -y

On admin node (ceph1).

[ceph@ceph1 ~]$ vi .ssh/config

Host node1
  Hostname ceph1
  User ceph
Host node2
  Hostname ceph2
  User ceph
Host node3
  Hostname ceph3
  User ceph
chmod 600 ~/.ssh/config

On admin node (ceph1).

Setup Admin Console and Calamari.

#sudo yum -y install ceph-deploy calamari-server calamari-clients
#sudo calamari-ctl initialize
#su - ceph
[ceph@ceph1 ceph-config]$cd ~/ceph-config

Create Ceph Cluster.

#ceph-deploy new ceph1 ceph2 ceph3

Deploy Ceph monitors and OSDs.

[ceph@ceph1 ceph-config]$ceph-deploy install --mon ceph1 ceph2 ceph3
[ceph@ceph1 ceph-config]$ceph-deploy install --osd ceph1 ceph2 ceph3
[ceph@ceph1 ceph-config]$ceph-deploy --overwrite-conf mon create-initial

Connect Ceph monitors to Calamari.

[ceph@ceph1 ceph-config]$ceph-deploy calamari connect --master ceph1.lab ceph1 ceph2 ceph3
[ceph@ceph1 ceph-config]$ceph-deploy install --cli ceph1
[ceph@ceph1 ceph-config]$ceph-deploy admin ceph1

Check Ceph quorum status.

[ceph@ceph1 ceph-config]$ sudo ceph quorum_status --format json-pretty

 "election_epoch": 6,
 "quorum": [
 "quorum_names": [
 "quorum_leader_name": "ceph1",
 "monmap": {
 "epoch": 1,
 "fsid": "188aff9b-7da5-46f3-8eb8-465e014a472e",
 "modified": "0.000000",
 "created": "0.000000",
 "mons": [
 "rank": 0,
 "name": "ceph1",
 "addr": "\/0"
 "rank": 1,
 "name": "ceph2",
 "addr": "\/0"
 "rank": 2,
 "name": "ceph3",
 "addr": "\/0"

Set crush tables to optimal.

[ceph@ceph1 ceph-config]$sudo ceph osd crush tunables optimal

Configure OSDs.

[ceph@ceph1 ceph-config]$ceph-deploy disk zap ceph1:vdb ceph2:vdb ceph3:vdb
[ceph@ceph1 ceph-config]$ceph-deploy osd prepare ceph1:vdb ceph2:vdb ceph3:vdb
[ceph@ceph1 ceph-config]$ceph-deploy osd activate ceph1:vdb1 ceph2:vdb1 ceph3:vdb1
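At this point it is worth checking the overall cluster health and the OSD tree:

[ceph@ceph1 ceph-config]$ sudo ceph -s
[ceph@ceph1 ceph-config]$ sudo ceph osd tree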

Connect Calamari to Ceph nodes.

[ceph@ceph1 ceph-config]$ceph-deploy calamari connect --master ceph1.lab ceph1 ceph2 ceph3

Tips and Tricks

Remove OSD from Ceph

[ceph@ceph1 ~]$sudo ceph osd out osd.0
[ceph@ceph1 ~]$sudo ceph osd crush remove osd.0
[ceph@ceph1 ~]$sudo ceph auth del osd.0
[ceph@ceph1 ~]$sudo ceph osd down 0
[ceph@ceph1 ~]$sudo ceph osd rm 0

Ceph Placement Group Calculation for Pool

  • OSDs * 100 / Replicas
  • PGs should always be a power of two: 64, 128, 256, etc.
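As a worked example for the three-OSD lab above with the default of 3 replicas: 3 * 100 / 3 = 100, which rounds up to the next power of two, giving 128 placement groups for the pool.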

Restart a specific OSD

[ceph@ceph1 ~]$ sudo service ceph restart osd.3

Re-deploy Ceph

In case you want to start over at any time, you can run the command below to uninstall Ceph. This of course deletes any data, so be careful.

ceph-deploy purge <ceph-node> [<ceph-node>]


In this article we installed a Ceph cluster on virtual machines. We deployed the cluster, set up monitors and configured OSDs. This environment should provide the basis for a journey into software-defined storage and Ceph. The economics of scale have brought down barriers and paved the way for a software-defined world. Storage is only the next logical boundary. Ceph, being an open source project, is already the de facto software-defined standard and is in position to become the key beneficiary of software-defined storage. I hope you found the information in this article of use; please share your experiences.

Happy Cephing!

(c) 2016 Keith Tenzer



Ceph: the future of Storage




Since joining Red Hat in 2015, I have intentionally stayed away from the topic of storage. My background is storage but I wanted to do something else as storage became completely mundane and frankly boring. Why?

Storage hasn’t changed much in 20 years. I started my career as a Linux and Storage engineer in 2000 and everything that existed then, exists today. Only things became bigger, faster, cheaper due to technologies such as flash.

I realized in late 2015 that the storage industry was starting a challenging period for all vendors, but I didn't really have a feeling for when that could lead to real change. I did know that the monolithic storage array built on proprietary Linux/Unix with proprietary x86 hardware we all know and love was a thing of the past. If you think about it, storage is a scam today: you get open source software running on x86 hardware packaged as a proprietary solution that doesn't interoperate with anything else. So you get none of the value of open source and pay extra for it. I like to think that economics, like gravity, eventually always wins.


There are many challenges facing our storage industry but I will focus on three: cost, scale and agility.


Linux has become an equalizer, and storage companies not building on Linux are forced to be operating system companies in addition to storage companies, a distinct disadvantage. Their R&D costs to maintain their proprietary storage operating systems reduce overall value and increase the costs of their products. This is why we saw a large number of storage startups over the last 3-5 years: because of Linux and open source. In addition, most storage platforms don't allow you to choose hardware and therefore you are often paying a premium for standard x86. Disks are a great example; typically they cost twice what you would pay through Amazon. Oh, but our disks are tested and have a mean time between failure of xxxxxxx. That is total ********. What if disk failures weren't a big deal, didn't cause impact and the storage system automatically adjusted?

Some vendors allow you to choose different hardware platforms, but they are usually limited, and you certainly can't get the storage software itself from multiple vendors. While these points may be interesting, at the end of the day cost comes down to one thing: everyone is measured against Amazon S3. You are either cheaper than S3 (3 cents per GB per month) or you have some explaining to do. If we consider small or medium environments this may be doable with traditional storage, but as soon as scale and complexity come into play (multiple use-cases with block, file and object), those costs explode.


Scalability is a very complex problem to solve. Sure everything scales until it doesn’t but jokes aside, we have reached a point where scalability needs to be applied more generally, especially in storage. Storage arrays today are use-case driven. Each customer has many use-cases and the larger the customer the more use-cases. This means many types of dissimilar storage systems. While a single storage system may scale to some degree, many storage systems together don’t. Customers are typically stuck in a 3-year cycle of forklift upgrades because storage systems don’t truly scale-out they only scale-up.


The result of many storage appliances and arrays is technology sprawl, which in turn creates technology debt. Storage vendors don't even have a vision for data management within their own ecosystem, let alone interoperating with other vendors. Everything is fragmented to the point where a few use-cases equal a new storage system. Storage systems today require a fairly low entry cost for management overhead, but as the environment grows and scales those costs increase, further reducing agility. They don't stay consistent; how could they when you don't have a unified data management strategy? Storage systems are designed to prevent failure at all costs; they certainly don't anticipate failure. At scale we have more failures, and this in turn correlates to more time spent keeping the lights on. The goal of every organization is to reduce that and maximize time spent on innovation. Finally, automation suffers and it is increasingly harder to build blueprints around storage. There is just too much variation.

I could certainly go on and there are other challenges to discuss but I think you get the picture. We have reached the dead-end of storage.

Why Software-defined?

As mentioned, the main problem I see today is that for every use-case there is a storage array or appliance. The startups are all one-trick ponies solving only a few use cases. Traditional storage vendors throw a different storage system at each use-case and then call it a solution. You end up with no real data management strategy. I laugh when I hear vendors talking about data lakes. If you buy into the storage array mindset, you end up in the same place: a completely unmanageable environment at scale where operational costs are not linear but constantly going up. Welcome to most storage environments today.

As complexity increases you reach a logical point where abstractions are needed. Today storage not only needs to provide file, block and object but also needs to interoperate with a large ecosystem of vendors, cloud providers and applications. Decoupling the data management software from the hardware is the logical next step. This is the same thing we have already observed with server virtualization and are observing in networking with NFV. The economics of cost and the advantages of decoupling hardware and software simply make sense. Organizations have been burned over and over making technology decisions that are later replaced or reverted, because newer, better technologies become available on other platforms. Software-defined storage allows easy introduction of new technologies without having to purchase a new storage system just because your old storage was designed before that technology was invented. Finally, storage migrations. Aren't we tired of always migrating data when changing storage systems? A common data management platform using common x86 hardware and Linux could do away with storage migrations.

Why Ceph?


Ceph has become the de facto standard for software-defined storage. Currently the storage industry is at the beginning of a major disruption period where software-defined storage will drive out traditional proprietary storage systems. Ceph is of course open source, which enables a rich ecosystem of vendors to provide storage systems based on Ceph. The software-defined world is not possible without open source and doing things the open source way.

Ceph delivers exactly what is needed to disrupt the storage industry. Ceph provides a unified scale-out storage system based on common x86 hardware, is self-healing and not only anticipates failures but expects them. Ceph does away with storage migrations and, since hardware is decoupled, gives you the choice of when to deploy new hardware technologies.

You can buy Ceph separately from hardware, so you have a choice not only of whom you buy Ceph from (Red Hat, SUSE, Mirantis, Ubuntu, etc.) but also of whom you purchase hardware from (HP, Dell, Fujitsu, IBM, etc.). In addition you can even buy Ceph together with hardware as an integrated appliance (SanDisk, Fujitsu, etc.). You have choice and are free from vendor lock-in.

Ceph is extremely cost efficient. Even the most expensive all-flash, integrated solutions cost less than S3 (3 cents per GB per month). If you really want to go cheap, you can buy off-the-shelf commodity hardware from companies like Supermicro and still get enterprise Ceph from Red Hat, SUSE, Ubuntu, etc., while being a lot cheaper than S3.

Ceph scales; one example is the CERN 30 PB test. Ceph can be configured to optimize different workloads such as block, file and object. You can create storage pools and decide whether to co-locate journals or put journals on SSDs for optimal performance. Ceph allows you to tune your storage to specific use cases while maintaining a unified approach. In tech preview is a new feature called BlueStore, which allows Ceph to completely bypass the file-system layer and store data directly on raw devices. This will greatly increase performance, and there are a ton of optimizations planned after that; this is just the beginning!
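To give a feel for how pools are used to tune storage to a workload, here is a minimal sketch using the standard Ceph CLI; the pool name, placement-group count and replica size are illustrative values, not recommendations from this article.

# ceph osd pool create rbd-pool 128 128
# ceph osd pool set rbd-pool size 3
# ceph osd pool ls detail

Different pools can then be mapped to different CRUSH rules or hardware (for example SSD-backed OSDs) while clients keep consuming the same cluster in the same way.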

Ceph enables agility by providing a unified storage system that supports file, block and object. Ceph can run on VMs or physical hardware, so you can easily bridge private and public clouds. Ceph provides a storage management layer for anything you can present as a disk device. Finally, Ceph simplifies management: it is the same level of effort to manage a 10-node Ceph cluster as a 100-node Ceph cluster, so running costs stay linear.

Below is a diagram showing how Ceph addresses file, block and object storage using a unified architecture built around RADOS.


source: http://docs.ceph.com/docs/hammer/architecture


In this article we spent time discussing current storage challenges and the value of not just software-defined storage but also Ceph. We can't keep doing what we have been doing for the last 20+ years in the storage industry. The economics of scale have brought down barriers and paved the way for a software-defined world. Storage is simply the next logical boundary. Ceph, being an open source project, is already the de facto software-defined standard and is positioned to become the key beneficiary as software-defined storage becomes more mainstream. If you are interested in Ceph I will be producing some how-to guides soon, so stay tuned. Please feel free to debate whether you agree or disagree with my views. I welcome all feedback.

(c) 2016 Keith Tenzer

[Short Tip] Fix mount problems in RHV during GlusterFS mounts



When using Red Hat Virtualization or oVirt together with GlusterFS, there might be a strange error during the first creation of a storage domain:

Failed to add Storage Domain xyz.

One of the easier causes to fix is a permission problem: an initially exported Gluster file system belongs to the user root. However, the virtualization manager (oVirt engine or RHV-M) does not have root rights and thus needs a different ownership.

In such cases, the fix is to mount the exported volume and change the ownership to UID/GID 36, which is what RHV-M expects:

$ sudo mount -t glusterfs /mnt
# cd /mnt/
# chown -R 36.36 .

Afterwards, the volume can be mounted properly. Some more general details can be found at RH KB 78503.


OpenStack Mitaka Lab Installation and Configuration Guide




In this article we will focus on installing and configuring OpenStack Mitaka using RDO and the packstack installer. RDO is a community platform around Red Hat's OpenStack Platform. It allows you to test the latest OpenStack capabilities on a stable platform such as Red Hat Enterprise Linux (RHEL) or CentOS. This guide will take you through installing the OpenStack Mitaka release and configuring networking, security groups, flavors, images and other OpenStack-related services. The outcome is a working OpenStack environment based on the Mitaka release that you can use as a baseline for testing your applications with OpenStack capabilities.

Install and Configure OpenStack Mitaka

  • Install RHEL or CentOS 7.1.
  • Ensure name resolution is working.
# vi /etc/hosts
 osp9.lab osp9
  • Set hostname.
# hostnamectl set-hostname osp9.lab
  • Disable firewalld since this is for a lab environment.
# systemctl disable firewalld
# systemctl stop firewalld
  • Disable NetworkManager; it is still not recommended for Mitaka (at least with RDO).
# systemctl stop NetworkManager
# systemctl disable NetworkManager
  • For RHEL systems register with subscription manager.
# subscription-manager register
# subscription-manager subscribe --auto
# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms 
# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
# subscription-manager repos --enable=rhel-7-server-openstack-9-rpms
  • Install yum-utils and update the system.
# yum install -y yum-utils
# yum update -y
  • Reboot.
# systemctl reboot
  • Install packstack packages.
# yum install -y openstack-packstack
You can run packstack either by providing command-line options or by using an answers file.

Option 1: Install using command-line options

 # packstack --allinone --os-neutron-ovs-bridge-mappings=extnet:br-ex \
 --os-neutron-ovs-bridge-interfaces=br-ex:eth0 \
 --os-neutron-ml2-type-drivers=vxlan,flat \
 --os-heat-install=y --os-heat-cfn-install=y \
 --os-sahara-install=y --os-trove-install=y \
 --os-neutron-lbaas-install=y

Option 2: Install using answers file

  • Create packstack answers file for customizing the installer.
# packstack --gen-answer-file /root/answers.txt
  • Update the packstack answers file to enable additional OpenStack services (a few typical keys are shown after the command below). Note: as of the writing of this guide, SSL is not working in combination with Horizon, so don't enable SSL.
# vi /root/answers.txt
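For orientation, these are the kinds of keys you would typically change in the generated answers file. The key names below come from the standard packstack answer file; verify them against the file packstack generated for your release before relying on them.

CONFIG_HEAT_INSTALL=y
CONFIG_HEAT_CFN_INSTALL=y
CONFIG_SAHARA_INSTALL=y
CONFIG_TROVE_INSTALL=y
CONFIG_LBAAS_INSTALL=y
CONFIG_HORIZON_SSL=n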
  • Install OpenStack Mitaka using packstack.
# packstack --answer-file /root/answers.txt
  • Source the keystone admin profile.
# . /root/keystonerc_admin
  • Check status of openstack services.
# openstack-status
  • Back up the ifcfg-eth0 script.
# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/
  • Configure the external bridge for floating IP networks (example interface files are shown below).
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
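As a reference, this is roughly what the two interface files end up looking like for the standard RDO br-ex setup. The IP addressing here is purely illustrative and must be replaced with your own; double-check the syntax against the RDO documentation for your release.

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.122.30
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
DNS1=192.168.122.1
ONBOOT=yes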
  • Add the eth0 physical interface to the br-ex bridge in Open vSwitch for floating IP networks.
# ovs-vsctl add-port br-ex eth0 ; systemctl restart network.service

Configure OpenStack

  • Create private network.
# neutron net-create private
# neutron subnet-create private --name private_subnet --allocation-pool start=,end=
  • Create public network. Note: these steps assume the physical network connected to eth0 is
# neutron net-create public --router:external
# neutron subnet-create public --name public_subnet --allocation-pool start=,end= --disable-dhcp --gateway
  • Add a new router and configure router interfaces.
# neutron router-create router1 --ha False
# neutron router-gateway-set router1 public
# neutron router-interface-add router1 private_subnet
  • Upload a glance image. In this case we will use a Cirros image because it is small and thus good for testing OpenStack.
# yum install -y wget
# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
# glance image-create --name "Cirros 0.3.4" --disk-format qcow2 --container-format bare --visibility public --file /root/cirros-0.3.4-x86_64-disk.img
  • Create a new m1.nano flavor for running Cirros image.
# nova flavor-create m1.nano 42 64 0 1
  • Create security group and allow all TCP ports.
# nova secgroup-create all "Allow all tcp ports"
# nova secgroup-add-rule all TCP 1 65535
  • Create security group for base access
# nova secgroup-create base "Allow Base Access"
# nova secgroup-add-rule base TCP 22 22
# nova secgroup-add-rule base TCP 80 80
# nova secgroup-add-rule base ICMP -1 -1
  • Create a private ssh key for connecting to instances remotely.
# nova keypair-add admin
  • Create admin.pem file and add private key from output of keypair-add command.
# vi /root/admin.pem
# chmod 400 /root/admin.pem
  • List the network IDs.
# neutron net-list
 | id | name | subnets |
 | d4f3ed19-8be4-4d56-9f95-cfbac9fdf670 | private | 92d82f53-6e0b-4eef-b8b9-cae32cf40457     |
 | 37c024d6-8108-468c-bc25-1748db7f5e8f | public  | 22f2e901-186f-4041-ad93-f7b5ccc30a81 |
  • Start an instance, making sure to replace the network ID with the ID of the private network from the above output.
# nova boot --flavor m1.nano --image "Cirros 0.3.4" --nic net-id=d4f3ed19-8be4-4d56-9f95-cfbac9fdf670 --key-name admin --security-groups all mycirros
  • Create a floating IP and assign it to the mycirros instance.
# nova floating-ip-create
# nova floating-ip-associate mycirros <FLOATING IP>
  • Connect to mycirros instance using the private ssh key stored in the admin.pem file. Note: The first floating IP in the range
# ssh -i admin.pem cirros@

Nova Nested Virtualization

Most OpenStack lab or test environments will install OpenStack on a hypervisor platform inside virtual machines. I would strongly recommend KVM. If you are running OpenStack on KVM (Nova nested virtualization), make sure to follow the usual tips and tricks to get the best performance; enabling nested KVM itself is sketched below.
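As a minimal sketch, assuming an Intel host (on AMD the module is kvm_amd with the same nested parameter), nested virtualization can be enabled on the KVM hypervisor like this:

# cat /sys/module/kvm_intel/parameters/nested
# echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm_intel.conf
# modprobe -r kvm_intel && modprobe kvm_intel
# cat /sys/module/kvm_intel/parameters/nested

The first cat shows whether nesting is already enabled (Y/N); reloading the module only works when no VMs are currently running on the hypervisor.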


This article was intended as a hands-on guide for standing up an OpenStack Mitaka lab environment using RDO. As mentioned, RDO is a stable community platform built around Red Hat's OpenStack Platform. It provides the ability to test the latest OpenStack features against either an enterprise platform (RHEL) or a community platform (CentOS). Hopefully you found the information in this article useful. If you have anything to add or feedback, feel free to leave your comments.

Happy OpenStacking!

(c) 2016 Keith Tenzer

Cloud Systems Management: Satellite 6.2 Getting Started Guide




In this article we will look at how to install Satellite 6.2 and configure a base environment. This article builds on a similar article I published for Satellite 6.1. In addition to installing and configuring Satellite, we will also look at one of the long-awaited new features: remote command execution.

If you are coming from the Satellite 5 world, then you will want to familiarize yourself with the concepts and how they apply in Satellite 6. The biggest change is around how you manage content through stages (life-cycle environments), but there is also a lot more.


source: https://www.windowspro.de/sites/windowspro.de/files/imagepicker/6/Red_Hat_Satellite_6_Life_Cycle.png


Satellite 6.2 continues to build on the 6.1 release and provides the following features:

  • Automated Workflows — This includes remote execution, scheduling for remote execution jobs and expanded bootstrap and provisioning options.

  • Air-gapped security and federation — Inter-Satellite sync is now available to export RPM content from one Satellite to import into another

  • Software Management Improvements — Simplified Smart Variable management is now available.

  • Capsule improvements — Users now have deeper insight into Capsule health and overall performance; Capsules are lighter-weight and can be configured to store only the content that has been requested by its clients; and a new Reference Architecture including deploying a Highly Available Satellite Capsule is now available.

  • Atomic OSTree and containers — Mirror, provision and manage RHEL Atomic hosts and content with Satellite; mirror container repositories such as Red Hat Registry, DockerHub™ and other 3rd-party sources; and Satellite provides a secure, curated point of entry for container content

  • Enhanced documentation — Both new and improved documentation is available. (https://access.redhat.com/documentation/red-hat-satellite/)

    • New Guides

      • Virtual Instance Guide (How to configure virt-who)

      • Hammer CLI Guide (How to use Satellite’s CLI)

      • Content Management Guide (How to easily manage Satellite’s content )

      • Quickstart Guide (How to get up and running quickly)

    • Improved/more user-friendly documentation

  • User Guide split to make more topical and easier to follow:

    • Server Administration Guide

    • Host Configuration Guide

    • “Cheat Sheets” available for specific topics (Hammer)

    • Updated Feature Overviews


In order to install Satellite we need a subscription and of course RHEL 6 or 7.

subscription-manager register
subscription-manager list --available
subscription-manager attach --pool=934893843989289
subscription-manager repos --disable "*"


subscription-manager repos --enable=rhel-6-server-rpms \
--enable=rhel-server-rhscl-6-rpms \


subscription-manager repos --enable=rhel-7-server-rpms \
--enable=rhel-server-rhscl-7-rpms \

Update all packages.

# yum update -y

Add Firewall rules.


# iptables -A INPUT -m state --state NEW -p udp --dport 53 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 53 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p udp --dport 67 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p udp --dport 69 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 5647 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 8140 -j ACCEPT \
&& iptables-save > /etc/sysconfig/iptables


# firewall-cmd --add-service=RH-Satellite-6
# firewall-cmd --permanent --add-service=RH-Satellite-6


# yum install chrony

# systemctl start chronyd

# systemctl enable chronyd


Setting up SOS is a good idea to get faster responses from Red Hat support.

#yum install -y sos

DNS Configuration

This is only required if you want to setup an external DNS server. If you use the integrated DNS provided by Satellite you can skip this step.

[root@ipa ]# vi /etc/named.conf
 // named.conf
 // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
 // server as a caching only nameserver (as a localhost DNS resolver only).
 // See /usr/share/doc/bind*/sample/ for example named configuration files.

include "/etc/rndc.key";

controls {
 inet port 953 allow {;; } keys { "capsule"; };
};

options {
 listen-on port 53 {; };
 listen-on-v6 port 53 { ::1; };
 directory "/var/named";
 dump-file "/var/named/data/cache_dump.db";
 statistics-file "/var/named/data/named_stats.txt";
 memstatistics-file "/var/named/data/named_mem_stats.txt";
 //allow-query { localhost; };
 //forwarders {
 forwarders {;; };

 /*
  - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
  - If you are building a RECURSIVE (caching) DNS server, you need to enable
    recursion.
  - If your recursive DNS server has a public IP address, you MUST enable access
    control to limit queries to your legitimate users. Failing to do so will
    cause your server to become part of large scale DNS amplification
    attacks. Implementing BCP38 within your network would greatly
    reduce such attack surface.
 */
 recursion yes;

 dnssec-enable yes;
 dnssec-validation yes;

 /* Path to ISC DLV key */
 bindkeys-file "/etc/named.iscdlv.key";

 managed-keys-directory "/var/named/dynamic";

 pid-file "/run/named/named.pid";
 session-keyfile "/run/named/session.key";
};

logging {
 channel default_debug {
  file "data/named.run";
  severity dynamic;
 };
};

zone "0.168.192.in-addr.arpa" IN {
 type master;
 file "/var/named/dynamic/0.168.192-rev";
 //allow-update {; };
 update-policy {
  grant capsule zonesub ANY;
 };
};

zone "lab" IN {
 type master;
 file "/var/named/dynamic/lab.zone";
 //allow-update {; };
 update-policy {
  grant capsule zonesub ANY;
 };
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";


[root@ipa ]# ls /var/named/dynamic/
 0.168.192-rev 0.168.192-rev.old lab.zone lab.zone.jnl managed-keys.bind testdns.sh
 [root@ipa dynamic]# cat /var/named/dynamic/lab.zone
 $ORIGIN lab.
 $TTL 86400
 @ IN SOA dns1.lab. hostmaster.lab. (
 2001062501 ; serial
 21600 ; refresh after 6 hours
 3600 ; retry after 1 hour
 604800 ; expire after 1 week
 86400 ) ; minimum TTL of 1 day
 IN NS dns1.lab.
 rhevm IN A
 rhevh01 IN A
 osp8 IN A
 cf IN A
 ipa IN A
 sat6 IN A
 IN AAAA aaaa:bbbb::1
 ose-master IN A
 * 300 IN A
[root@ipa ]# cat /var/named/dynamic/0.168.192-rev
 $TTL 86400 ; 24 hours, could have been written as 24h or 1d
 $ORIGIN 0.168.192.IN-ADDR.ARPA.

@ IN SOA dns1.lab. hostmaster.lab. (
 2001062501 ; serial
 21600 ; refresh after 6 hours
 3600 ; retry after 1 hour
 604800 ; expire after 1 week
 86400 ) ; minimum TTL of 1 day

; Name servers for the zone - both out-of-zone - no A RRs required
 IN NS dns1.lab.
 ; server host definitions
 20 IN PTR rhevm.lab.
 21 IN PTR rhevh01.lab.
 22 IN PTR osp8.lab.
 24 IN PTR cf.lab.
 25 IN PTR ose3-master.lab.
 26 IN PTR ipa.lab.
 27 IN PTR sat6.lab.
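Before restarting named, it is worth validating the configuration and zone files. named-checkconf and named-checkzone ship with BIND; the zone names and file paths below are the ones used above.

# named-checkconf /etc/named.conf
# named-checkzone lab /var/named/dynamic/lab.zone
# named-checkzone 0.168.192.in-addr.arpa /var/named/dynamic/0.168.192-rev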


Before we start with the manual install: a Solution Architect and colleague of mine, Sebastian Hetze, created an automated setup script that can also integrate IdM. I strongly recommend using this script and contributing to further enhancements.


If you are doing this to learn, then it is of course worth walking through the manual steps so you understand the concepts and what is involved.

#yum install -y satellite
With Integrated DNS
# satellite-installer --scenario satellite --foreman-admin-username admin --foreman-admin-password redhat01 --foreman-proxy-dns true --foreman-proxy-dns-interface eth0 --foreman-proxy-dns-zone example.com --foreman-proxy-dns-forwarders --foreman-proxy-dns-reverse 0.168.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-interface eth0 --foreman-proxy-dhcp-range "" --foreman-proxy-dhcp-gateway --foreman-proxy-dhcp-nameservers --foreman-proxy-tftp true --foreman-proxy-tftp-servername $(hostname) --capsule-puppet true --foreman-proxy-puppetca true
Without Integrated DNS
# satellite-installer --scenario satellite --foreman-admin-username admin --foreman-admin-password redhat01 --foreman-proxy-dns true --foreman-proxy-dns-interface eth0 --foreman-proxy-dns-zone sat.lab --foreman-proxy-dns-forwarders --foreman-proxy-dns-reverse 0.168.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-interface eth0 --foreman-proxy-dhcp-range "" --foreman-proxy-dhcp-gateway --foreman-proxy-dhcp-nameservers --foreman-proxy-tftp true --foreman-proxy-tftp-servername $(hostname) --capsule-puppet true --foreman-proxy-puppetca true


At this point you should be able to reach the web UI using HTTPS. In this environment the URL is https://sat6.lab.com. Next we need to set up the hammer CLI. Configure hammer so that authentication credentials are passed automatically.

mkdir ~/.hammer
cat > ~/.hammer/cli_config.yml <<EOF
:foreman:
    :host: 'https://sat6.lab/'
    :username: 'admin'
    :password: 'redhat01'
EOF


External DNS Configuration

If you setup external DNS then you need to allow Satellite server to update DNS records on your external DNS server.

# vi /etc/foreman-proxy/settings.d/dns.yml
:enabled: true
:dns_provider: nsupdate
:dns_key: /etc/rndc.key
:dns_ttl: 86400
# systemctl restart foreman-proxy
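To verify that dynamic updates actually work with the rndc key before letting Satellite use it, you can test with nsupdate (part of bind-utils). The record name and address below are purely illustrative placeholders; substitute your DNS server's IP.

# nsupdate -k /etc/rndc.key
> server <DNS server IP>
> update add testrecord.lab. 3600 IN A 192.168.0.250
> send
> quit
# host testrecord.lab <DNS server IP>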

Register Satellite Server in Red Hat Network (RHN).


Assign subscriptions to the Satellite server and download manifest from RHN.


Note: In the next section we will be using the hammer CLI to configure Satellite. In this environment we are using the organization "Default Organization"; you would probably change this to a more specific organization name. If so, you need to first create a new organization (see the example below).
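A minimal sketch of creating your own organization with hammer; the name and label here are placeholders.

#hammer organization create --name "ACME" --label acme --description "ACME example organization"

You would then substitute your organization name for "Default Organization" in the commands that follow.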

Upload manifest file to Satellite server.

#hammer subscription upload --organization "Default Organization" --file /root/manifest_d586388b-f556-4623-b6a0-9f76857bedbc.zip

Create a subnet in Satellite 6 under Infrastructure->Subnets. In this environment the subnet is the 192.168.0.0/24 lab network and we are using external DNS. This can also be done with hammer, as sketched below.

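For reference, roughly the equivalent with hammer. The subnet name matches the VLAN_0 subnet referenced later in the host group command, the addressing is taken from the 192.168.0.0/24 network implied by the reverse DNS zone above, and the gateway is illustrative; check the exact option names against hammer subnet create --help for your Satellite version.

#hammer subnet create --name VLAN_0 --network 192.168.0.0 --mask 255.255.255.0 --gateway 192.168.0.1 --dns-primary 192.168.0.26 --domains lab --organizations "Default Organization" --locations "Default Location"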

Enable basic repositories.

At a minimum you need the RHEL Server, Satellite Tools and RH Common repositories.

#hammer repository-set enable --organization "Default Organization" --product 'Red Hat Enterprise Linux Server' --basearch='x86_64' --releasever='7Server' --name 'Red Hat Enterprise Linux 7 Server (RPMs)'
#hammer repository-set enable --organization "Default Organization" --product 'Red Hat Enterprise Linux Server' --basearch='x86_64' --releasever='7Server' --name 'Red Hat Enterprise Linux 7 Server (Kickstart)'
#hammer repository-set enable --organization "Default Organization" --product 'Red Hat Enterprise Linux Server' --basearch='x86_64' --name 'Red Hat Satellite Tools 6.2 (for RHEL 7 Server) (RPMs)'

#hammer repository-set enable --organization "Default Organization" --product 'Red Hat Enterprise Linux Server' --basearch='x86_64' --name 'Red Hat Enterprise Linux 7 Server - RH Common RPMs x86_64 7Server'

Enable EPEL repository for 3rd party packages.

#wget -q https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7  -O /root/RPM-GPG-KEY-EPEL-7
#hammer gpg create --key /root/RPM-GPG-KEY-EPEL-7  --name 'GPG-EPEL-7' --organization "Default Organization"

Create a new product for the EPEL repository. In Satellite 6, products are groupings of content from outside of RHN. Products can contain RPM repositories, Puppet modules or container images.

#hammer product create --name='EPEL 3rd Party Packages' --organization "Default Organization" --description 'EPEL 3rd Party Packages'
#hammer repository create --name='EPEL 7 - x86_64' --organization "Default Organization" --product='EPEL 3rd Party Packages' --content-type='yum' --publish-via-http=true --url=http://dl.fedoraproject.org/pub/epel/7/x86_64/ --checksum-type=sha256 --gpg-key=GPG-EPEL-7

Synchronize the repositories. This will take a while as all of the RPM packages will be downloaded. Note: you can also use the --async option to run tasks in parallel.

#hammer repository synchronize --async --organization "Default Organization" --product 'Red Hat Enterprise Linux Server'  --name 'Red Hat Enterprise Linux 7 Server Kickstart x86_64 7Server'
#hammer repository synchronize --async --organization "Default Organization" --product 'Red Hat Enterprise Linux Server'  --name 'Red Hat Satellite Tools 6.2 for RHEL 7 Server RPMs x86_64'
#hammer repository synchronize --async --organization "Default Organization" --product 'Red Hat Enterprise Linux Server'  --name 'Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server'

#hammer repository synchronize --async --organization "Default Organization" --product 'Red Hat Enterprise Linux Server'  --name 'Red Hat Enterprise Linux 7 Server - RH Common RPMs x86_64 7Server'

#hammer repository synchronize --async --organization "Default Organization" --product 'EPEL 3rd Party Packages' --name 'EPEL 7 - x86_64'

Create life cycles for development and production.

#hammer lifecycle-environment create --organization "Default Organization" --description 'Development' --name 'DEV' --label development --prior Library
#hammer lifecycle-environment create --organization "Default Organization" --description 'Production' --name 'PROD' --label production --prior 'DEV'

Create content view for RHEL 7 base.

#hammer content-view create --organization "Default Organization" --name 'RHEL7_base' --label rhel7_base --description 'Core Build for RHEL 7'

#hammer content-view add-repository --organization "Default Organization" --name 'RHEL7_base' --product 'Red Hat Enterprise Linux Server' --repository 'Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server'
#hammer content-view add-repository --organization "Default Organization" --name 'RHEL7_base' --product 'Red Hat Enterprise Linux Server' --repository 'Red Hat Satellite Tools 6.2 for RHEL 7 Server RPMs x86_64'

#hammer content-view add-repository --organization "Default Organization" --name 'RHEL7_base' --product 'Red Hat Enterprise Linux Server' --repository 'Red Hat Enterprise Linux 7 Server - RH Common RPMs x86_64 7Server'

#hammer content-view add-repository --organization "Default Organization" --name 'RHEL7_base' --product 'EPEL 3rd Party Packages'  --repository  'EPEL 7 - x86_64'

Publish and promote content view to the environments.

#hammer content-view publish --organization "Default Organization" --name RHEL7_base --description 'Initial Publishing'
#hammer content-view version promote --organization "Default Organization" --content-view RHEL7_base --to-lifecycle-environment DEV
#hammer content-view version promote --organization "Default Organization" --content-view RHEL7_base --to-lifecycle-environment PROD

Add activation keys for both stage environments.

#hammer activation-key create --organization "Default Organization" --description 'RHEL7 Key for DEV' --content-view 'RHEL7_base' --unlimited-hosts --name ak-Reg_To_DEV --lifecycle-environment 'DEV'
#hammer activation-key create --organization "Default Organization" --description 'RHEL7 Key for PROD' --content-view 'RHEL7_base' --unlimited-hosts --name ak-Reg_To_PROD --lifecycle-environment 'PROD'

Add subscriptions to the activation keys. This can be done in the web UI on the activation key's Subscriptions tab, or with hammer as sketched below.

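A minimal sketch of doing the same with hammer; the subscription ID has to be looked up first, and the ID used here is just a placeholder.

#hammer subscription list --organization "Default Organization"
#hammer activation-key add-subscription --organization "Default Organization" --name ak-Reg_To_DEV --subscription-id 1
#hammer activation-key add-subscription --organization "Default Organization" --name ak-Reg_To_PROD --subscription-id 1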

Get the medium ID needed for host group creation.

#hammer medium list
1 | CentOS mirror | http://mirror.centos.org/centos/$version/os/$arch 
8 | CoreOS mirror | http://$release.release.core-os.net 
2 | Debian mirror | http://ftp.debian.org/debian 
9 | Default_Organization/Library/Red_Hat_Server/Red_Hat_Enterprise_Linux_7_Server... | http://sat6.lab/pulp/repos/Default_Organization/Library/content/dist/rhel/ser...
4 | Fedora Atomic mirror | http://dl.fedoraproject.org/pub/alt/atomic/stable/Cloud_Atomic/$arch/os/ 
3 | Fedora mirror | http://dl.fedoraproject.org/pub/fedora/linux/releases/$major/Server/$arch/os/ 
5 | FreeBSD mirror | http://ftp.freebsd.org/pub/FreeBSD/releases/$arch/$version-RELEASE/ 
6 | OpenSUSE mirror | http://download.opensuse.org/distribution/$version/repo/oss 
7 | Ubuntu mirror | http://archive.ubuntu.com/ubuntu 

Create a host group. A host group is a Foreman construct and is used for automating provisioning parameters. A host is provisioned based on its host group. The host group contains kickstart/provisioning templates, OS information, network information, activation keys, parameters, the Puppet environment and, if the host is virtual, a compute profile. Note: you will need to change the hostname (sat6.lab) to your own Satellite hostname.

#hammer hostgroup create --architecture x86_64 --content-source-id 1 --content-view RHEL7_base --domain lab --lifecycle-environment DEV --locations 'Default Location' --name RHEL7_DEV_Servers --organizations "Default Organization" --puppet-ca-proxy sat6.lab --puppet-proxy sat6.lab --subnet VLAN_0 --partition-table 'Kickstart default' --operatingsystem 'RedHat 7.2' --medium-id 9

Add compute resource for RHEV.

# hammer compute-resource create --provider Ovirt --name RHEV --url https://rhevm.lab/api --organizations "Default Organization" --locations 'Default Location' --user admin@internal --password redhat01

Satellite 6 Bootstrapping

Satellite bootstrapping is the process for taking an already provisioned system and attaching it to the Satellite server. The minimum process is outlined below:

Install Katello package from Satellite server.

#rpm -Uvh http://sat6.lab.com/pub/katello-ca-consumer-latest.noarch.rpm

Subscribe using activation key.

#subscription-manager register --org="Default_Organization" --activationkey="DEV_CORE"

Enable Satellite tools repository and install katello agent.

#yum -y install --enablerepo rhel-7-server-satellite-tools-6.2-rpms katello-agent

In addition, to get full functionality you also need to install and configure Puppet (a minimal sketch follows below). Many customers are also looking for a solution that gracefully moves a system attached to Satellite 5 into Satellite 6. For that, there is a bootstrapping script I can recommend from Evgeni Golov (one of our top Satellite consultants at Red Hat):


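Separate from that bootstrap script, here is a minimal sketch of the basic Puppet agent setup mentioned above, assuming the Satellite server (sat6.lab) is the Puppet master and the Satellite Tools repository from the previous step is available; the agent certificate then still has to be signed on the Satellite side.

#yum -y install --enablerepo rhel-7-server-satellite-tools-6.2-rpms puppet
#vi /etc/puppet/puppet.conf
[agent]
server = sat6.lab
#puppet agent --test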
Remote Command Execution

A new feature many Satellite 5 customers have been waiting for is remote command execution. This feature allows you to run and schedule commands on clients connected to the Satellite 6 server. You can think of this as a poor man's Ansible.

Ensure remote command execution is configured for the Satellite capsule (you can check this in the capsule's feature list, as shown below).

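One way to check, sketched here with hammer (the capsule ID is assumed to be 1, as it usually is for the integrated capsule): the feature list in the output should include SSH when the remote execution plugin is active.

# hammer capsule list
# hammer capsule info --id 1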

Note: If you aren’t using provisioning template “Satellite Kickstart Default” and you upgraded from Satellite 6.1, you will need to re-clone the “Satellite Kickstart Default” template and apply your changes. A snippet was added to “Satellite Kickstart Default” in order to automatically configure foreman-proxy ssh keys.


For systems that are already provisioned, you need to copy the foreman-proxy SSH key.

# ssh-copy-id -i /usr/share/foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@

Run “ls” command for a given client using remote-cmd execution.

# hammer job-invocation create --async --inputs "command=ls -l /root" --search-query name=client1.lab.com --job-template "Run Command - SSH Default"

Run command using input file or script.

# hammer job-invocation create --async --input-files command=/root/script.sh --search-query name=client1.lab.com --job-template "Run Command - SSH Default"

Run command on multiple hosts.

# hammer job-invocation create --async --inputs "command=ls -l /root" --search-query "name ~ client1.lab.com|client2.lab.com" --job-template "Run Command - SSH Default"

List jobs that are running, completed or failed.

# hammer job-invocation list
4 | Run ls -l /root | succeeded | 1 | 0 | 0 | 1 | 2016-08-23 11:10:50 UTC
3 | Run puppet agent -t | succeeded | 1 | 0 | 0 | 1 | 2016-08-23 10:43:20 UTC
2 | Run puppet agent -t | failed | 0 | 1 | 0 | 1 | 2016-08-23 10:37:18 UTC
1 | Run puppet agent -t | failed | 0 | 1 | 0 | 1 | 2016-08-23 10:19:41 UTC

Show details of a completed job.

# hammer job-invocation info --id 4
ID: 4
Description: Run ls -l /root
Status: succeeded
Success: 2
Failed: 0
Pending: 0
Total: 2
Start: 2016-08-23 11:34:05 UTC
Job Category: Commands
Cron line: 
Recurring logic ID: 
 - client1.lab.com
 - client2.lab.com

Show output from a completed command for a given host. Satellite will show stdout and return code from command.

# hammer job-invocation output --host client1.lab.com --id 4
total 36
-rw-------. 1 root root 4256 Aug 23 10:34 anaconda-ks.cfg
-rw-r--r--. 1 root root 21054 Aug 23 10:34 install.post.log
-rw-r--r--. 1 root root 54 Aug 23 10:33 install.postnochroot.log
Exit status: 0


In this article we learned how to deploy a Satellite 6.2 environment. We looked at some different options regarding DNS and provided a guideline for getting a basic Satellite 6.2 environment up and running. Finally, we looked a bit more into the new remote command execution feature. I hope you found this article useful. If you have anything to share or other feedback, please don't be shy.

Happy Satelliting!

(c) 2016 Keith Tenzer