Cloud Systems Management: Satellite 6.2 Getting Started Guide



Overview

In this article we will look at how to install Satellite 6.2 and configure a base environment. It builds on a similar article I published for Satellite 6.1. In addition to installing and configuring Satellite, we will also look at one of the long-awaited new features: remote command execution.

If you are coming from the Satellite 5 world then you will want to familiarize yourself with the concepts and how they apply in Satellite 6. The biggest change is around how you manage content through stages (lifecycle environments), but there is a lot more as well.

[Image: Red Hat Satellite 6 life cycle]

source: https://www.windowspro.de/sites/windowspro.de/files/imagepicker/6/Red_Hat_Satellite_6_Life_Cycle.png

Features

Satellite 6.2 continues to build on the 6.1 release and provides the following features:

  • Automated Workflows — This includes remote execution, scheduling for remote execution jobs and expanded bootstrap and provisioning options.

  • Air-gapped security and federation — Inter-Satellite sync is now available to export RPM content from one Satellite to import into another

  • Software Management Improvements — Simplified Smart Variable management is now available.

  • Capsule improvements — Users now have deeper insight into Capsule health and overall performance; Capsules are lighter-weight and can be configured to store only the content that has been requested by its clients; and a new Reference Architecture including deploying a Highly Available Satellite Capsule is now available.

  • Atomic OSTree and containers — Mirror, provision and manage RHEL Atomic hosts and content with Satellite; mirror container repositories such as Red Hat Registry, DockerHub™ and other 3rd-party sources; and Satellite provides a secure, curated point of entry for container content

  • Enhanced documentation — Both new and improved documentation is available. (https://access.redhat.com/documentation/red-hat-satellite/)

    • New Guides

      • Virtual Instance Guide (How to configure virt-who)

      • Hammer CLI Guide (How to use Satellite’s CLI)

      • Content Management Guide (How to easily manage Satellite’s content )

      • Quickstart Guide (How to get up and running quickly)

    • Improved/more user-friendly documentation

  • User Guide split to make it more topical and easier to follow:

    • Server Administration Guide

    • Host Configuration Guide

    • “Cheat Sheets” available for specific topics (Hammer)

    • Updated Feature Overviews

Prerequisites

In order to install Satellite we need a Satellite subscription and, of course, RHEL 6 or 7. Register the system, attach the subscription and disable all repositories before enabling the required ones:

subscription-manager register
subscription-manager list --available
subscription-manager attach --pool=934893843989289
subscription-manager repos --disable "*"

RHEL 6

subscription-manager repos --enable=rhel-6-server-rpms \
--enable=rhel-server-rhscl-6-rpms \
--enable=rhel-6-server-satellite-6.2-rpms

RHEL 7

subscription-manager repos --enable=rhel-7-server-rpms \
--enable=rhel-server-rhscl-7-rpms \
--enable=rhel-7-server-satellite-6.2-rpms

Update all packages.

# yum update -y

Add Firewall rules.

RHEL 6

# iptables -A INPUT -m state --state NEW -p udp --dport 53 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 53 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p udp --dport 67 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p udp --dport 69 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 5647 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 8140 -j ACCEPT \
&& iptables-save > /etc/sysconfig/iptables

RHEL 7

# firewall-cmd --add-service=RH-Satellite-6
# firewall-cmd --permanent --add-service=RH-Satellite-6

NTP

# yum install chrony

# systemctl start chronyd

# systemctl enable chronyd

SOS

Setting up SOS is a good idea to get faster responses from Red Hat support.

#yum install -y sos

DNS Configuration

This is only required if you want to use an external DNS server. If you use the integrated DNS provided by Satellite you can skip this step.

[root@ipa ]# vi /etc/named.conf
 //
 // named.conf
 //
 // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
 // server as a caching only nameserver (as a localhost DNS resolver only).
 //
 // See /usr/share/doc/bind*/sample/ for example named configuration files.
 //

include "/etc/rndc.key";

controls {
 inet 192.168.0.26 port 953 allow { 192.168.0.27; 192.168.0.26; } keys { "capsule"; };
 };

options {
 listen-on port 53 { 192.168.0.26; };
 listen-on-v6 port 53 { ::1; };
 directory "/var/named";
 dump-file "/var/named/data/cache_dump.db";
 statistics-file "/var/named/data/named_stats.txt";
 memstatistics-file "/var/named/data/named_mem_stats.txt";
 //allow-query { localhost; };
 //forwarders {
 //8.8.8.8;
 //};
 forwarders { 8.8.8.8; 8.8.4.4; };

/*
 - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
 - If you are building a RECURSIVE (caching) DNS server, you need to enable
 recursion.
 - If your recursive DNS server has a public IP address, you MUST enable access
 control to limit queries to your legitimate users. Failing to do so will
 cause your server to become part of large scale DNS amplification
 attacks. Implementing BCP38 within your network would greatly
 reduce such attack surface
 */
 recursion yes;

dnssec-enable yes;
 dnssec-validation yes;

/* Path to ISC DLV key */
 bindkeys-file "/etc/named.iscdlv.key";

managed-keys-directory "/var/named/dynamic";

pid-file "/run/named/named.pid";
 session-keyfile "/run/named/session.key";
 };

logging {
 channel default_debug {
 file "data/named.run";
 severity dynamic;
 };
 };

zone "0.168.192.in-addr.arpa" IN {
 type master;
 file "/var/named/dynamic/0.168.192-rev";
 //allow-update { 192.168.0.27; };
 update-policy {
 grant capsule zonesub ANY;
 };
 };

zone "lab" IN {
 type master;
 file "/var/named/dynamic/lab.zone";
 //allow-update { 192.168.0.27; };
 update-policy {
 grant capsule zonesub ANY;
 };

};

include "/etc/named.rfc1912.zones";
 include "/etc/named.root.key";

DNS ZONES

[root@ipa ]# ls /var/named/dynamic/
 0.168.192-rev 0.168.192-rev.old lab.zone lab.zone.jnl managed-keys.bind testdns.sh
 [root@ipa dynamic]# cat /var/named/dynamic/lab.zone
 $ORIGIN lab.
 $TTL 86400
 @ IN SOA dns1.lab. hostmaster.lab. (
 2001062501 ; serial
 21600 ; refresh after 6 hours
 3600 ; retry after 1 hour
 604800 ; expire after 1 week
 86400 ) ; minimum TTL of 1 day
 ;
 ;
 IN NS dns1.lab.
 rhevm IN A 192.168.0.20
 rhevh01 IN A 192.168.0.21
 osp8 IN A 192.168.0.22
 cf IN A 192.168.0.24
 ipa IN A 192.168.0.26
 sat6 IN A 192.168.0.27
 IN AAAA aaaa:bbbb::1
 ose-master IN A 192.168.0.25
 * 300 IN A 192.168.0.25
 ;
 ;
[root@ipa ]# cat /var/named/dynamic/0.168.192-rev
 $TTL 86400 ; 24 hours, could have been written as 24h or 1d
 $ORIGIN 0.168.192.IN-ADDR.ARPA.

@ IN SOA dns1.lab. hostmaster.lab. (
 2001062501 ; serial
 21600 ; refresh after 6 hours
 3600 ; retry after 1 hour
 604800 ; expire after 1 week
 86400 ) ; minimum TTL of 1 day

; Name servers for the zone - both out-of-zone - no A RRs required
 IN NS dns1.lab.
 ; server host definitions
 20 IN PTR rhevm.lab.
 21 IN PTR rhevh01.lab.
 22 IN PTR osp8.lab.
 24 IN PTR cf.lab.
 25 IN PTR ose3-master.lab.
 26 IN PTR ipa.lab.
 27 IN PTR sat6.lab.
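
If you want to verify that dynamic updates against these zones work, a quick test can be run with nsupdate and dig. This is a hedged sketch: it assumes /etc/rndc.key contains the "capsule" key that the update-policy above grants access to, and it adds a throwaway record (testrecord) that you can delete afterwards.

# nsupdate -k /etc/rndc.key <<EOF
server 192.168.0.26
zone lab
update add testrecord.lab. 300 IN A 192.168.0.99
send
EOF
# dig @192.168.0.26 testrecord.lab +short

If the update was accepted, the dig should return 192.168.0.99.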

Installation

Before we start with the manual install: a Solution Architect and colleague of mine, Sebastian Hetze, created an automated setup script that can also integrate IdM. I strongly recommend using this script and contributing further enhancements.

https://github.com/shetze/hammer-scripts/blob/master/sat62-setup.sh

If you are doing this to learn, of course, it is definitely worth walking through the manual steps so you understand the concepts and what is involved.

# yum install -y satellite

With Integrated DNS

# satellite-installer --scenario satellite \
  --foreman-admin-username admin --foreman-admin-password redhat01 \
  --foreman-proxy-dns true --foreman-proxy-dns-interface eth0 \
  --foreman-proxy-dns-zone example.com --foreman-proxy-dns-forwarders 8.8.8.8 \
  --foreman-proxy-dns-reverse 0.168.192.in-addr.arpa \
  --foreman-proxy-dhcp true --foreman-proxy-dhcp-interface eth0 \
  --foreman-proxy-dhcp-range "192.168.0.100 192.168.0.199" \
  --foreman-proxy-dhcp-gateway 192.168.0.1 --foreman-proxy-dhcp-nameservers 192.168.0.27 \
  --foreman-proxy-tftp true --foreman-proxy-tftp-servername $(hostname) \
  --capsule-puppet true --foreman-proxy-puppetca true

Without Integrated DNS

# satellite-installer --scenario satellite \
  --foreman-admin-username admin --foreman-admin-password redhat01 \
  --foreman-proxy-dns true --foreman-proxy-dns-interface eth0 \
  --foreman-proxy-dns-zone sat.lab --foreman-proxy-dns-forwarders 192.168.0.26 \
  --foreman-proxy-dns-reverse 0.168.192.in-addr.arpa \
  --foreman-proxy-dhcp true --foreman-proxy-dhcp-interface eth0 \
  --foreman-proxy-dhcp-range "192.168.0.225 192.168.0.250" \
  --foreman-proxy-dhcp-gateway 192.168.0.1 --foreman-proxy-dhcp-nameservers 192.168.0.27 \
  --foreman-proxy-tftp true --foreman-proxy-tftp-servername $(hostname) \
  --capsule-puppet true --foreman-proxy-puppetca true
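
The installer prints the web UI URL and admin credentials when it finishes. Before continuing it is worth checking that all Satellite services came up cleanly; a quick check using the service wrapper shipped with Satellite 6.2:

# katello-service status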

Configuration

At this point you should be able to reach the web UI over HTTPS; in this environment the URL is https://sat6.lab. Next we need to set up the hammer CLI. Configure hammer so that it automatically passes our authentication credentials.

mkdir ~/.hammer
cat > ~/.hammer/cli_config.yml <<EOF
:foreman:
    :host: 'https://sat6.lab/'
    :username: 'admin'
    :password: 'redhat01'

EOF
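
With the credentials stored, hammer no longer prompts for a password. A quick smoke test confirms the CLI can reach Satellite and its backend services:

# hammer ping
# hammer organization list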

External DNS Configuration

If you set up external DNS then you need to allow the Satellite server to update DNS records on your external DNS server.

# vi /etc/foreman-proxy/settings.d/dns.yml
---
:enabled: true
:dns_provider: nsupdate
:dns_key: /etc/rndc.key
:dns_server: 192.168.0.26
:dns_ttl: 86400
# systemctl restart foreman-proxy

Register Satellite Server in Red Hat Network (RHN).


Assign subscriptions to the Satellite server and download manifest from RHN.


Note: In the next section we will be using the hammer CLI to configure Satellite. In this environment we are using the organization “Default Organization”; you would probably change this to a more specific organization name, in which case you need to create the new organization first.
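
If you do want a dedicated organization, it can be created with hammer before uploading the manifest; the name and label below are just placeholders:

#hammer organization create --name "ACME" --label acme --description 'ACME lab organization'
#hammer organization list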

Upload manifest file to Satellite server.

#hammer subscription upload --organization "Default Organization" --file /root/manifest_d586388b-f556-4623-b6a0-9f76857bedbc.zip

Create a subnet in Satellite 6 under Infrastructure->Subnet. In this environment the subnet is 192.168.122.0/24 and we are using external DNS.


Enable basic repositories.

At minimum you need RHEL Server, Satellite Tools and RH Common.

#hammer repository-set enable --organization "Default Organization" --product 'Red Hat Enterprise Linux Server' --basearch='x86_64' --releasever='7Server' --name 'Red Hat Enterprise Linux 7 Server (RPMs)'
#hammer repository-set enable --organization "Default Organization" --product 'Red Hat Enterprise Linux Server' --basearch='x86_64' --releasever='7Server' --name 'Red Hat Enterprise Linux 7 Server (Kickstart)'
#hammer repository-set enable --organization "Default Organization" --product 'Red Hat Enterprise Linux Server' --basearch='x86_64' --name 'Red Hat Satellite Tools 6.2 (for RHEL 7 Server) (RPMs)'

#hammer repository-set enable --organization "Default Organization" --product 'Red Hat Enterprise Linux Server' --basearch='x86_64' --name 'Red Hat Enterprise Linux 7 Server - RH Common RPMs x86_64 7Server'

Enable EPEL repository for 3rd party packages.

#wget -q https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7  -O /root/RPM-GPG-KEY-EPEL-7
#hammer gpg create --key /root/RPM-GPG-KEY-EPEL-7  --name 'GPG-EPEL-7' --organization "Default Organization"

Create a new product for the EPEL repository. In Satellite 6, products are groupings of external content from outside RHN. Products can contain RPM repositories, Puppet modules or container images.

#hammer product create --name='EPEL 3rd Party Packages' --organization "Default Organization" --description 'EPEL 3rd Party Packages'
#hammer repository create --name='EPEL 7 - x86_64' --organization "Default Organization" --product='EPEL 3rd Party Packages' --content-type='yum' --publish-via-http=true --url=http://dl.fedoraproject.org/pub/epel/7/x86_64/ --checksum-type=sha256 --gpg-key=GPG-EPEL-7

Synchronize the repositories. This will take a while as all of the RPM packages will be downloaded. Note: you can use the --async option to run the sync tasks in parallel.

#hammer repository synchronize --async --organization "Default Organization" --product 'Red Hat Enterprise Linux Server'  --name 'Red Hat Enterprise Linux 7 Server Kickstart x86_64 7Server'
#hammer repository synchronize --async --organization "Default Organization" --product 'Red Hat Enterprise Linux Server'  --name 'Red Hat Satellite Tools 6.2 for RHEL 7 Server RPMs x86_64'
#hammer repository synchronize --async --organization "Default Organization" --product 'Red Hat Enterprise Linux Server'  --name 'Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server'

#hammer repository synchronize --async --organization "Default Organization" --product 'Red Hat Enterprise Linux Server'  --name 'Red Hat Enterprise Linux 7 Server - RH Common RPMs x86_64 7Server'

#hammer repository synchronize --async --organization "$ORG" --product 'EPEL 3rd Party Packages  --name  'EPEL 7 - x86_64'

Create lifecycle environments for development and production.

#hammer lifecycle-environment create --organization "Default Organization" --description 'Development' --name 'DEV' --label development --prior Library
#hammer lifecycle-environment create --organization "Default Organization" --description 'Production' --name 'PROD' --label production --prior 'DEV'

Create content view for RHEL 7 base.

#hammer content-view create --organization "Default Organization" --name 'RHEL7_base' --label rhel7_base --description 'Core Build for RHEL 7'

#hammer content-view add-repository --organization "Default Organization" --name 'RHEL7_base' --product 'Red Hat Enterprise Linux Server' --repository 'Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server'
#hammer content-view add-repository --organization "Default Organization" --name 'RHEL7_base' --product 'Red Hat Enterprise Linux Server' --repository 'Red Hat Satellite Tools 6.2 for RHEL 7 Server RPMs x86_64'

#hammer content-view add-repository --organization "Default Organization" --name 'RHEL7_base' --product 'Red Hat Enterprise Linux Server' --repository 'Red Hat Enterprise Linux 7 Server - RH Common RPMs x86_64 7Server'

#hammer content-view add-repository --organization "Default Organization" --name 'RHEL7_base' --product 'EPEL 3rd Party Packages'  --repository  'EPEL 7 - x86_64'

Publish and promote content view to the environments.

#hammer content-view publish --organization "Default Organization" --name RHEL7_base --description 'Initial Publishing'
#hammer content-view version promote --organization "Default Organization" --content-view RHEL7_base --to-lifecycle-environment DEV
#hammer content-view version promote --organization "Default Organization" --content-view RHEL7_base --to-lifecycle-environment PROD

Add activation keys for both stage environments.

#hammer activation-key create --organization "Default Organization" --description 'RHEL7 Key for DEV' --content-view 'RHEL7_base' --unlimited-hosts --name ak-Reg_To_DEV --lifecycle-environment 'DEV'
#hammer activation-key create --organization "Default Organization" --description 'RHEL7 Key for PROD' --content-view 'RHEL7_base' --unlimited-hosts --name ak-Reg_To_PROD --lifecycle-environment 'PROD'

Add subscriptions to activation keys.

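
This can also be done from the CLI instead of the web UI. A hedged sketch: look up the subscription ID first, then attach it to the key (the ID 1 below is a placeholder, use the value from the list output):

#hammer subscription list --organization "Default Organization"
#hammer activation-key add-subscription --organization "Default Organization" --name ak-Reg_To_DEV --subscription-id 1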

Get medium id needed for hostgroup creation.

#hammer medium list
---|----------------------------------------------------------------------------------|---------------------------------------------------------------------------------
ID | NAME | PATH 
---|----------------------------------------------------------------------------------|---------------------------------------------------------------------------------
1 | CentOS mirror | http://mirror.centos.org/centos/$version/os/$arch 
8 | CoreOS mirror | http://$release.release.core-os.net 
2 | Debian mirror | http://ftp.debian.org/debian 
9 | Default_Organization/Library/Red_Hat_Server/Red_Hat_Enterprise_Linux_7_Server... | http://sat6.lab/pulp/repos/Default_Organization/Library/content/dist/rhel/ser...
4 | Fedora Atomic mirror | http://dl.fedoraproject.org/pub/alt/atomic/stable/Cloud_Atomic/$arch/os/ 
3 | Fedora mirror | http://dl.fedoraproject.org/pub/fedora/linux/releases/$major/Server/$arch/os/ 
5 | FreeBSD mirror | http://ftp.freebsd.org/pub/FreeBSD/releases/$arch/$version-RELEASE/ 
6 | OpenSUSE mirror | http://download.opensuse.org/distribution/$version/repo/oss 
7 | Ubuntu mirror | http://archive.ubuntu.com/ubuntu 
---|----------------------------------------------------------------------------------|---------------------------------------------------------------------------------

Create a host group. A host group is a Foreman construct used to automate provisioning parameters; a host is provisioned based on its host group. The host group contains kickstart/provisioning templates, OS information, network information, activation keys, parameters, the Puppet environment and, if virtual, a compute profile. Note: you will need to change the hostname sat6.lab to match your environment.

#hammer hostgroup create --architecture x86_64 --content-source-id 1 --content-view RHEL7_base --domain lab --lifecycle-environment DEV --locations 'Default Location' --name RHEL7_DEV_Servers --organizations "Default Organization" --puppet-ca-proxy sat6.lab --puppet-proxy sat6.lab --subnet VLAN_0 --partition-table 'Kickstart default' --operatingsystem 'RedHat 7.2' --medium-id 9

Add compute resource for RHEV.

# hammer compute-resource create --provider Ovirt --name RHEV --url https://rhevm.lab/api --organizations "Default Organization" --locations 'Default Location' --user admin@internal --password redhat01

Satellite 6 Bootstrapping

Satellite bootstrapping is the process for taking an already provisioned system and attaching it to the Satellite server. The minimum process is outlined below:

Install the Katello CA consumer package from the Satellite server.

#rpm -Uvh http://sat6.lab.com/pub/katello-ca-consumer-latest.noarch.rpm

Subscribe using activation key.

#subscription-manager register --org="Default_Organization" --activationkey="ak-Reg_To_DEV"

Enable Satellite tools repository and install katello agent.

#yum -y install --enablerepo rhel-7-server-satellite-tools-6.2-rpms katello-agent

In addition, to get full functionality you would also need to install and configure Puppet. Many customers are also looking for a solution that gracefully moves a system attached to Satellite 5 into Satellite 6. For that there is a bootstrapping script I can recommend from Evgeni Golov (one of our top Satellite consultants at Red Hat):

https://github.com/Katello/katello-client-bootstrap
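
For completeness, the Puppet side of bootstrapping an already provisioned client looks roughly like the following. This is a minimal sketch, assuming the Puppet master is the Satellite server (sat6.lab here) and the Satellite Tools repository is reachable as shown above; after the first agent run the client certificate still needs to be signed on the Satellite (under the Capsule's Puppet CA certificates view).

#yum -y install --enablerepo rhel-7-server-satellite-tools-6.2-rpms puppet
#puppet config set server sat6.lab --section agent
#puppet agent --test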

Remote Command Execution

A new feature many Satellite 5 customers have been waiting for is remote command execution. This feature allows you to run and schedule commands on clients connected to the Satellite 6 server. You can think of it as a poor man’s Ansible.

Ensure remote-cmd execution is configured for Satellite capsule.


Note: If you aren’t using the provisioning template “Satellite Kickstart Default” and you upgraded from Satellite 6.1, you will need to re-clone the “Satellite Kickstart Default” template and re-apply your changes. A snippet was added to “Satellite Kickstart Default” that automatically configures the foreman-proxy SSH keys.


For systems that are already provisioned you need to copy the foreman-proxy SSH key manually:

# ssh-copy-id -i /usr/share/foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@192.168.122.210

Run “ls” command for a given client using remote-cmd execution.

# hammer job-invocation create --async --inputs "command=ls -l /root" --search-query name=client1.lab.com --job-template "Run Command - SSH Default"

Run command using input file or script.

# hammer job-invocation create --async --input-files command=/root/script.sh --search-query name=client1.lab.com --job-template "Run Command - SSH Default"

Run command on multiple hosts.

# hammer job-invocation create --async --inputs "command=ls -l /root" --search-query "name ~ client1.lab.com|client2.lab.com" --job-template "Run Command - SSH Default"

List jobs that are running, completed or failed.

# hammer job-invocation list
---|---------------------|-----------|---------|--------|---------|-------|------------------------
ID | DESCRIPTION | STATUS | SUCCESS | FAILED | PENDING | TOTAL | START 
---|---------------------|-----------|---------|--------|---------|-------|------------------------
4 | Run ls -l /root | succeeded | 1 | 0 | 0 | 1 | 2016-08-23 11:10:50 UTC
3 | Run puppet agent -t | succeeded | 1 | 0 | 0 | 1 | 2016-08-23 10:43:20 UTC
2 | Run puppet agent -t | failed | 0 | 1 | 0 | 1 | 2016-08-23 10:37:18 UTC
1 | Run puppet agent -t | failed | 0 | 1 | 0 | 1 | 2016-08-23 10:19:41 UTC
---|---------------------|-----------|---------|--------|---------|-------|------------------------

Show details of a completed job.

# hammer job-invocation info --id 4
ID: 4
Description: Run ls -l /root
Status: succeeded
Success: 2
Failed: 0
Pending: 0
Total: 2
Start: 2016-08-23 11:34:05 UTC
Job Category: Commands
Mode: 
Cron line: 
Recurring logic ID: 
Hosts: 
 - client1.lab.com
 - client2.lab.com

Show the output of a completed command for a given host. Satellite will show the stdout and return code of the command.

# hammer job-invocation output --host client1.lab.com --id 4
total 36
-rw-------. 1 root root 4256 Aug 23 10:34 anaconda-ks.cfg
-rw-r--r--. 1 root root 21054 Aug 23 10:34 install.post.log
-rw-r--r--. 1 root root 54 Aug 23 10:33 install.postnochroot.log
Exit status: 0

Summary

In this article we learned how to deploy a Satellite 6.2 environment. We looked at some different options regarding DNS and provided a guideline for getting a basic Satellite 6.2 environment up and running. Finally we looked a bit more closely at the new remote command execution feature. I hope you found this article useful. If you have anything to share, or other feedback, please don’t be shy.

Happy Satelliting!

(c) 2016 Keith Tenzer

 

 

 


Using github to version some files


Overview

I want to store some config files, scripts or Ansible playbooks on a central web service to have

  • some versioning
  • know where the newest truth resides

I’m not a developer and don’t do a lot of collaborative coding in a big, distributed team. So I mainly need to add files, change files and consume them.

So far I know git is “very easy”, but git has many different commands to achieve similar things: “pull, fetch, merge, clone” all download content to a local repository, or do part of what you need. As I want to manage real brain work output (written scripts, playbooks or config files), I prefer not to accidentally lose any content.

I found answer 12 in [1] very useful.

Solution

I keep all files belonging to the same project or subject together in one GitHub repository.

Creating a new repository

I have an account on https://github.com/ and logged in through the web frontend. Clicking “+” in the upper right corner gets me a new repository. I gave the repository a name, e.g. “myrepo”.

Note: I try to find names representing the project idea or the environment the content is for. “myrepo” is only good as a placeholder inside this documentation.

I fill readme.md with some reasonable content and may also add a license.md file.

That’s all.

Editing and creating new Files – first time

To edit, change or add files in your repository you need to clone the repository to your local filesystem, change the files there, commit them and push them back to the original repository.

I define some global user information:

$ git config --global user.name "mschreie"
$ git config --global user.email mschreie@redhat.com
$ ### creating a starting directory where my (cloned) repositories reside
$ mkdir git
$ cd $_
$ git clone https://github.com/mschreie/myrepo
$ cd myrepo/

If you want to add a new file

… copy file from some other place or create a new one in this directory and add all files not added yet, e.g.

$ cp /etc/ansible/hosts.cfg .
$ git add .

If you want to edit a file

just  do so, e.g.

$ vi <file>

 In both cases

you need to commit your changes and push them to the central/original repository (git commit -a also picks up changes to files that are already tracked):

$ git commit -a -m "some relevant comment"
$ git status -s
$ git push origin master

Editing and creating new Files – second time

If you have edited files on this machine before, you already have a git directory and a local clone of the repository.

You would then need to update your repo, do the changes, commit and push.

To update your local clone repository with the newest content of your github repository:

$ git pull
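
Putting it together, a typical "second time" session looks like this (the directory, file name and commit message are placeholders):

$ cd ~/git/myrepo
$ git pull
$ vi somefile.cfg
$ git commit -a -m "describe what changed"
$ git push origin master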

Consuming files

You could clone the whole repository and make use of all files, but you can also download distinct files through

wget https://github.com/mschreie/myrepo/raw/master/file

Please note the “raw” in the URL. If you navigate through the web frontend to the file you are looking for, you will get essentially the same URL, but as “blob” with a more fancy layout around it.

Conclusion

We are now able to have one authoritative place where the self-created files reside. You are able to add new files, change files and consume what you created. You are also able to organize the files in different repositories.

git can do much more, but this is all I need in the first place from a non-developer perspective.

 

[1] http://stackoverflow.com/questions/9577968/how-to-upload-files-on-github

 


OpenShift v3: Basic Release Deployment Scenarios



Overview

One of the hardest things companies struggle with today is release management. Of course many methodologies and even more tools or technologies exist, but how do we bring everything together and work across functional boundaries of an organization? A product release involves everyone in the company not just a single team. Many companies struggle with this and the result is a much slower innovation cycle. In the past this used to be something that at least wasn’t a deal breaker. Unfortunately that is no longer the case. Today companies live and die by their ability to not only innovate but release innovation. I would say innovating is the easy part, the ability to provide those innovations in a controlled fashion through products and services is the real challenge.

Moving to microservices architectures and container-based technologies such as Docker has simplified or streamlined many technological aspects, certainly at least providing light at the end of the tunnel. But until OpenShift there wasn’t a platform to bring it all together, allowing development and operations teams to work together while still maintaining their areas of focus or control. In this article we will look at three scenarios for handling application deployments within OpenShift that involve both operations and development. Each scenario builds on the previous one and should give you a good idea of the new possibilities with OpenShift. Also keep in mind these are basic scenarios and we are just scratching the surface, so this should be viewed as a starting point for doing application deployments or release management in the new containerized world.

Scenario 1: Development leveraging customized image from Operations

Typically operations teams will want to control the application runtime environment: ensure that it meets all security policies, provides the needed capabilities and is updated on a regular basis.

Development teams want to focus on innovation through application functionality, stability and capabilities.

OpenShift allows both teams to focus on their core responsibility while also providing a means to integrate the inputs and outputs of the various teams into an end-to-end release.

There are many ways to integrate DevOps teams in OpenShift. One simple way is to separate development and operations into different projects and allow development to pull their application runtime environment from operations. In this scenario we will see how to do that using a basic Ruby hello-world application as an example.

Create Projects

Create operations and development projects for our ruby application.

# oc login -u admin
# oc new-project ruby-ops
# oc new-project ruby-dev

Setup Users

Create a user for development and operations.

# htpasswd /etc/origin/master/htpasswd dev
# htpasswd /etc/origin/master/htpasswd ops


Enable permissions.

Create three groups that allow operations to edit the ruby-ops project, allow development to view the ruby-ops project and also edit the ruby-dev project. In addition the ruby-dev project needs permission to pull images from the ruby-ops project.

Create groups and add users to correct groups.

# oadm groups new ops-edit && oadm groups new dev-view && oadm groups new dev-edit
# oadm groups add-users ops-edit ops && oadm groups add-users dev-view dev && \
oadm groups add-users dev-edit dev

Associate groups to projects and setup pull permissions to allow ruby-dev to pull images from ruby-ops.

# oadm policy add-role-to-group edit ops-edit -n ruby-ops && \
oadm policy add-role-to-group view dev-view -n ruby-ops && \
oadm policy add-role-to-group edit dev-edit -n ruby-dev && \
oadm policy add-role-to-group system:image-puller system:serviceaccounts:ruby-dev -n ruby-ops

Operations Ruby Environment

As ops user create a ruby runtime image using application test code.

# oc login -u ops
# oc project ruby-ops
# oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git

Application requires database and service name called database.

# oc new-app mysql-ephemeral -p DATABASE_SERVICE_NAME=database
# oc env dc database --list | oc env dc ruby-hello-world -e -

Development Ruby Environment

As the dev user, pull the operations Ruby runtime image and build it using the latest code from a different GitHub branch or project.

# oc login -u dev
# oc project ruby-dev
# oc new-app ruby-ops/ruby-22-centos7:latest~https://github.com/ktenzer/ruby-hello-world.git

The application requires a database and a service named database.

# oc new-app mysql-ephemeral -p DATABASE_SERVICE_NAME=database
# oc env dc database --list | oc env dc ruby-hello-world -e -
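
Neither project exposes the application outside the cluster yet. To reach the dev build in a browser you can create a route for its service; the service name ruby-hello-world comes from the new-app call above, and the resulting hostname depends on your router's default subdomain:

# oc expose service ruby-hello-world
# oc get route ruby-hello-world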

Scenario 2: Development to Production Promotion

Once development has an application version with the required functionality that has passed all tests, it can be promoted to other environments. Typically that would be QA, test, integration and eventually production. In this simple example, using the ticket-monster application, we will promote directly from development to production.

Using concepts similar to those described in scenario 1, this technique relies on production pulling the appropriate images from development. In this scenario, however, we will create deployment configs and within them set up a trigger so that when an image with a particular name/tag is updated in development, promotion to production occurs automatically. Scenario 1 shows how to do this manually, here we do it automatically, and in scenario 3 we will see how to do promotion using Jenkins, which enables building complex pipelines with approval processes.

Create projects and setup pull permissions

# oc new-project ticket-monster-dev
# oc new-project ticket-monster-prod
# oc policy add-role-to-group system:image-puller system:serviceaccounts:ticket-monster-prod -n ticket-monster-dev

Create ticket monster template for development

# vi monster.yaml
kind: Template
apiVersion: v1
metadata:
  name: monster
  annotations:
    tags: instant-app,javaee
    iconClass: icon-jboss
    description: |
      Ticket Monster is a moderately complex application that demonstrates how
      to build modern applications using JBoss web technologies

parameters:
- name: GIT_URI
  value: git://github.com/kenthua/ticket-monster-ose
- name: MYSQL_DATABASE
  value: monster
- name: MYSQL_USER
  value: monster
- name: MYSQL_PASSWORD
  from: '[a-zA-Z0-9]{8}'
  generate: expression

objects:
- kind: ImageStream
  apiVersion: v1
  metadata:
    name: monster

- kind: BuildConfig
  apiVersion: v1
  metadata:
    name: monster
  spec:
    triggers:
    - type: Generic
      generic:
        secret: secret
    - type: ImageChange
    - type: ConfigChange
    strategy:
      type: Source
      sourceStrategy:
        from:
          kind: ImageStreamTag
          name: jboss-eap64-openshift:latest
          namespace: openshift
    source:
      type: Git
      git:
        uri: ${GIT_URI}
        ref: master
    output:
      to:
        kind: ImageStreamTag
        name: monster:latest

- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: monster
  spec:
    replicas: 1
    selector:
      deploymentConfig: monster
    template:
      metadata:
        labels:
          deploymentConfig: monster
        name: monster
      spec:
        containers:
        - name: monster
          image: monster
          ports:
          - name: http
            containerPort: 8080
          - name: jolokia
            containerPort: 8778
          - name: debug
            containerPort: 8787
          readinessProbe:
            exec:
              command:
              - /bin/bash
              - -c
              - /opt/eap/bin/readinessProbe.sh
          env:
          - name: DB_SERVICE_PREFIX_MAPPING
            value: monster-mysql=DB
          - name: TX_DATABASE_PREFIX_MAPPING
            value: monster-mysql=DB
          - name: DB_JNDI
            value: java:jboss/datasources/MySQLDS
          - name: DB_DATABASE
            value: ${MYSQL_DATABASE}
          - name: DB_USERNAME
            value: ${MYSQL_USER}
          - name: DB_PASSWORD
            value: ${MYSQL_PASSWORD}
          - name: JAVA_OPTS
            value: "-Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.logmanager -Djava.awt.headless=true -Djboss.modules.policy-permissions=true"
          - name: DEBUG
            value: "true"
    triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
        - monster
        from:
          kind: ImageStream
          name: monster

- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: monster-mysql
  spec:
    triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
        - monster-mysql
        from:
          kind: ImageStreamTag
          name: mysql:latest
          namespace: openshift
    replicas: 1
    selector:
      deploymentConfig: monster-mysql
    template:
      metadata:
        labels:
          deploymentConfig: monster-mysql
        name: monster-mysql
      spec:
        containers:
        - name: monster-mysql
          image: mysql
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_USER
            value: ${MYSQL_USER}
          - name: MYSQL_PASSWORD
            value: ${MYSQL_PASSWORD}
          - name: MYSQL_DATABASE
            value: ${MYSQL_DATABASE}

- kind: Service
  apiVersion: v1
  metadata:
    name: monster
  spec:
    ports:
    - name: http
      port: 8080
    selector:
      deploymentConfig: monster

- kind: Service
  apiVersion: v1
  metadata:
    name: monster-mysql
  spec:
    ports:
    - port: 3306
    selector:
      deploymentConfig: monster-mysql

- kind: Route
  apiVersion: v1
  metadata:
    name: monster
  spec:
    to:
      name: monster
# oc create -n openshift -f monster.yaml

Create template for ticket monster production environment

The trigger below will only deploy to the production environment when the image stream in development is tagged monster:prod.

# vi monster-prod.yaml
kind: Template
apiVersion: v1
metadata:
  name: monster-prod
  annotations:
    tags: instant-app,javaee
    iconClass: icon-jboss
    description: |
      Ticket Monster is a moderately complex application that demonstrates how
      to build modern applications using JBoss web technologies. This template
      is for "production deployments" of Ticket Monster.

parameters:
- name: MYSQL_DATABASE
  value: monster
- name: MYSQL_USER
  value: monster
- name: MYSQL_PASSWORD
  from: '[a-zA-Z0-9]{8}'
  generate: expression

objects:
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: monster
  spec:
    replicas: 3
    selector:
      deploymentConfig: monster
    template:
      metadata:
        labels:
          deploymentConfig: monster
        name: monster
      spec:
        containers:
        - name: monster
          image: monster
          ports:
          - name: http
            containerPort: 8080
          - name: jolokia
            containerPort: 8778
          readinessProbe:
            exec:
              command:
              - /bin/bash
              - -c
              - /opt/eap/bin/readinessProbe.sh
          env:
          - name: DB_SERVICE_PREFIX_MAPPING
            value: monster-mysql=DB
          - name: TX_DATABASE_PREFIX_MAPPING
            value: monster-mysql=DB
          - name: DB_JNDI
            value: java:jboss/datasources/MySQLDS
          - name: DB_DATABASE
            value: ${MYSQL_DATABASE}
          - name: DB_USERNAME
            value: ${MYSQL_USER}
          - name: DB_PASSWORD
            value: ${MYSQL_PASSWORD}
          - name: JAVA_OPTS
            value: "-Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.logmanager -Djava.awt.headless=true -Djboss.modules.policy-permissions=true"
    triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
        - monster
        from:
          kind: ImageStreamTag
          name: monster:prod
          namespace: ticket-monster-dev

- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: monster-mysql
  spec:
    triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
        - monster-mysql
        from:
          kind: ImageStreamTag
          name: mysql:latest
          namespace: openshift
    replicas: 1
    selector:
      deploymentConfig: monster-mysql
    template:
      metadata:
        labels:
          deploymentConfig: monster-mysql
        name: monster-mysql
      spec:
        containers:
        - name: monster-mysql
          image: mysql
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_USER
            value: ${MYSQL_USER}
          - name: MYSQL_PASSWORD
            value: ${MYSQL_PASSWORD}
          - name: MYSQL_DATABASE
            value: ${MYSQL_DATABASE}

- kind: Service
  apiVersion: v1
  metadata:
    name: monster
  spec:
    ports:
    - name: http
      port: 8080
    selector:
      deploymentConfig: monster

- kind: Service
  apiVersion: v1
  metadata:
    name: monster-mysql
  spec:
    ports:
    - port: 3306
    selector:
      deploymentConfig: monster-mysql

- kind: Route
  apiVersion: v1
  metadata:
    name: monster
  spec:
    to:
      name: monster
# oc create -n openshift -f monster-prod.yaml

Deploy ticket-monster development environment

Using the UI you can now choose the monster template to deploy the development environment.


Deploy ticket-monster production environment

Deploy ticket-monster production template (monster-prod).


You will notice in development the environment is built and you can access the application using the service URL: http://monster-ticket-monster-dev.apps.lab


If you look at the production environment, the database is running and a service endpoint exists; however, the ticket-monster application is not running. The production template is, as mentioned, set up to automatically pull images from development once they have a certain name/tag. The production environment also runs the application at a scale of 4 where development only has a scale of 1.

Promote development application version to production

Get the image stream pull spec.

# oc get is monster -o yaml
apiVersion: v1
kind: ImageStream
metadata:
  annotations:
    openshift.io/image.dockerRepositoryCheck: 2016-08-09T13:37:47Z
  creationTimestamp: 2016-08-09T13:14:53Z
  generation: 7
  name: monster
  namespace: ticket-monster-dev
  resourceVersion: "107170"
  selfLink: /oapi/v1/namespaces/ticket-monster-dev/imagestreams/monster
  uid: 42740a3d-5e33-11e6-aa8d-001a4ae42e01
spec:
  tags:
  - annotations: null
    from:
      kind: ImageStreamImage
      name: monster@sha256:3a48a056a58f50764953ba856d90eba73dd0dfdee10b8cb6837b0fd9461da7f9
    generation: 7
    importPolicy: {}
    name: prod
status:
  dockerImageRepository: 172.30.139.50:5000/ticket-monster-dev/monster
  tags:
  - items:
    - created: 2016-08-09T13:26:04Z
      dockerImageReference: 172.30.139.50:5000/ticket-monster-dev/monster@sha256:3a48a056a58f50764953ba856d90eba73dd0dfdee10b8cb6837b0fd9461da7f9
      generation: 1
      image: sha256:3a48a056a58f50764953ba856d90eba73dd0dfdee10b8cb6837b0fd9461da7f9
      tag: latest

Once you have the image stream pull specification (the monster@sha256:... value shown under spec.tags above), tag the image stream monster:prod.

# oc tag monster@sha256:3a48a056a58f50764953ba856d90eba73dd0dfdee10b8cb6837b0fd9461da7f9 monster:prod

You can verify that the image stream in fact has a tag prod.

# oc get is
NAME DOCKER REPO TAGS UPDATED
monster 172.30.139.50:5000/ticket-monster-dev/monster prod,latest 2 minutes ago

As soon as the image stream in ticket-monster-dev is tagged with monster:prod it will be deployed to production. As mentioned above the scale in production is 4 instead of 1.
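
You can watch the promotion happen from the CLI while the image change trigger fires in production, for example:

# oc get pods -n ticket-monster-prod -w
# oc get dc monster -n ticket-monster-prod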


Scenario 3: AB Deployment using Jenkins

In this scenario we will look at how to do a simple AB deployment using Jenkins. It builds on what we learned in scenarios 1 and 2. We will create a slightly more complex setup with three environments: development, integration and production. We will also build two versions of our application, v1 and v2, in the dev environment. Using Jenkins we will show how to promote v2 of the application from development to integration to production. Finally we will show how to roll back production from v2 to v1. I want to thank a colleague, Torben Jaeger, who created much of the content below.

Create projects

# oc new-project dev && \
oc new-project int && \
oc new-project prod

Setup pull permissions

Allow int environment to pull images from dev environment and prod to pull images from both dev and int environments.

# oc policy add-role-to-group system:image-puller system:serviceaccounts:int -n dev
# oc policy add-role-to-group system:image-puller system:serviceaccounts:prod -n int
# oc policy add-role-to-group system:image-puller system:serviceaccounts:prod -n dev

Setup jenkins in development environment

Using the UI go to the dev project, select add to project and choose jenkins-ephemeral.


Clone Github repository

# git clone https://github.com/ktenzer/openshift-demo.git

Update auth tokens for Jenkins

Get an auth token for the OpenShift API. This is needed to allow Jenkins to access the OpenShift environment; you will use it to update the auth tokens in the Jenkins job definitions.

# oc login -u admin
# oc whoami -t
DMzhKyEN87DZiDYV6i1d8L8NL2e6gFVFPpT5FnozKtU

Update the Jenkins jobs below and replace authToken and destinationAuthToken with your token from above.

# ls jenkins-jobs/
promote-int.xml promote-prod.xml rel-v2.xml rollback-prod.xml
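
Rather than editing the four XML files by hand, the token can be substituted with sed. This is a hedged sketch: it assumes the job definitions store the values in <authToken> and <destinationAuthToken> elements, as the field names above suggest.

# TOKEN=$(oc whoami -t)
# sed -i "s|<authToken>.*</authToken>|<authToken>${TOKEN}</authToken>|" jenkins-jobs/*.xml
# sed -i "s|<destinationAuthToken>.*</destinationAuthToken>|<destinationAuthToken>${TOKEN}</destinationAuthToken>|" jenkins-jobs/*.xml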

Set up all three environments

Using the templates you updated above with your auth token, create all three environments.

# cd openshift-demo
# oc create -f template.json -n dev
# oc create -f acceptance.template.json -n int
# oc create -f production.template.json -n prod

Deploy nodejs hello-world application in dev environment

Integration is setup to pull from development and production is setup to pull from integration.

# oc new-app -f template.json -n dev

This template creates two versions of the application: v1-ab and v2-ab.


Test applications in development

Connecting to v1-ab via http or curl should print “Hello World!”. Connecting to v2-ab via http or curl should print “Hello World, welcome to Frankfurt!”.
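
For example, assuming routes under the apps.lab subdomain used elsewhere in this article (check oc get routes -n dev for the exact hostnames in your environment):

# oc get routes -n dev
# curl http://v1-ab-dev.apps.lab
# curl http://v2-ab-dev.apps.lab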

Deploy v1 from development to integration

Deploy v1 to int using tags. A trigger is set in the int (acceptance) template to deploy when the image acceptance:latest is updated. You should see that a pod is started running version v1 of the application.

oc tag v1:latest v1:1.0 -n dev
oc tag dev/v1:1.0 acceptance:v1 -n int
oc tag acceptance:v1 acceptance:latest -n int

Deploy v1 from integration to production

Deploy v1 to prod using tags. A trigger is set on prod template to deploy when the image production:latest is updated.

oc tag int/acceptance:v1 production:v1 -n prod
oc tag production:v1 production:latest -n prod

Notice the application scale is 4 and the version of code running in prod is v1. Again this is all defined in the template we provided.


Configure Jenkins

You have seen how to promote application versions manually using image tags and triggers in the templates. Next let's get a bit more sophisticated and orchestrate the same things through Jenkins using the OpenShift plugin. This time we will promote version v2 through the int and prod environments.

There are Jenkins jobs to promote a release from development to integration and from integration to production. There is also a job to build a new version of the v2 application and, finally, one to perform a rollback in production.

# curl -k -u admin:password -XPOST -d @jenkins-jobs/rel-v2.xml 'https://jenkins-dev.apps.lab/createItem?name=rel-v2' -H "Content-Type: application/xml"
# curl -k -u admin:password -XPOST -d @jenkins-jobs/promote-int.xml 'https://jenkins-dev.apps.lab/createItem?name=promote-int' -H "Content-Type: application/xml"
# curl -k -u admin:password -XPOST -d @jenkins-jobs/promote-prod.xml 'https://jenkins-dev.apps.lab/createItem?name=promote-prod' -H "Content-Type: application/xml"
# curl -k -u admin:password -XPOST -d @jenkins-jobs/rollback-prod.xml 'https://jenkins-dev.apps.lab/createItem?name=rollback-prod' -H "Content-Type: application/xml"

Once you have created the jobs you can log in to Jenkins at the URL above with user admin and password password.


Optional Step: make change in v2 code

In order for this to work you need to fork the nodejs-ex GitHub repository below and update template.json with your GitHub URL. You would then need to redeploy the dev environment using the new template.

 "source": {
 "type": "Git",
 "git": {
 "uri": "https://github.com/ktenzer/nodejs-ex.git",
 "ref": "master"
 },
# git clone https://github.com/ktenzer/nodejs-ex
# cd nodejs-ex

Checkout v2 branch and make commit.

# git checkout v2
# vi index.html
Hello World, welcome to Munich!

Commit changes.

# git commit -a -m "updated to munich"
# git push origin v2

Run the rel-v2 build from Jenkins


If you forked the nodejs-ex repository and made the change above, you should see your changes in v2 by clicking the URL or using curl.

Promote v2 from development to integration

Run promote-int build from Jenkins. You will see a new pod is started with v2 code next to v1 pod.


Promote v2 from integration to production

Here we will take a deeper look into what is actually happening under the hood.

Using curl you can see how the application is switched from v1 to v2.

# for i in {1..10000};do curl prod-ab.apps.lab; sleep 1;done

Run promote-prod build from Jenkins.


 

The deployment is started and v2 pods are started next to the v1 pods. At this point the application service still points to v1.


Two v2 pods are running and readiness checks are done to ensure v2 application is responding. The v1 application pods are set to scale down.


All four v2 pods are running and the v1 pods are scaling to 0. We can see that both v1 and v2 are responding to requests while we are in transition.


Only v2 pods are running and the AB deployment of v2 is complete.


Now let's assume we aren’t satisfied with v2 and want to roll back to v1.

Run rollback-prod build from Jenkins.


Here we observe the same thing as in our previous step, only we are switching from v2 to v1.


Summary

In this article we have seen how OpenShift simplifies application deployments and integrates with tools such as Jenkins that enable release management. You have many options using OpenShift and we have only really begun to scratch the surface. With Jenkins you can create very complex build pipelines that allow you to not only control but also visualize your application deployment processes. We have looked at one common deployment type, the AB deployment, but there are also other deployment types such as blue-green or canary. In a future article I will take a look at additional deployment models in OpenShift. If you have feedback or experience then please share your thoughts. I hope you have found this article useful and informative.

Happy OpenShifting!

(c) 2016 Keith Tenzer


OpenShift Enterprise 3.2: all-in-one Lab Environment



Overview

In this article we will set up an OpenShift Enterprise 3.2 all-in-one configuration. We will also set up the integration with CloudForms, which allows additional management of OpenShift environments.

OpenShift has several different roles: masters, nodes, etcd and load balancers. An all-in-one setup means running all services on a single system. Since we are only using a single system, a load balancer or HAProxy won’t be configured. If you would like to read more about OpenShift I can recommend the following:

Prerequisites

Configure a VM with following:

  • RHEL 7.2
  • 2 CPUs
  • 4096 MB RAM
  • 30GB disk for OS
  • 25GB disk for docker images
# subscription-manager repos --disable="*"
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.2-rpms"
# yum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion
# yum update -y
# yum install -y atomic-openshift-utils
# systemctl reboot
# yum install -y docker-1.10.3
# vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'
# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdb
VG=docker-vg
EOF
# docker-storage-setup
# systemctl enable docker
# systemctl start docker
# ssh-keygen
# ssh-copy-id -i /root/.ssh/id_rsa.pub ose3-master.lab.com

DNS Setup

DNS is a requirement for OpenShift Enterprise. In fact most issues you may run into are a result of not having a properly working DNS environment. For OpenShift you can either use dnsmasq or bind. I recommend using bind but in this article I will cover both options.

DNSMASQ

A colleague, Ivan Mckinely, was nice enough to create an Ansible playbook for deploying dnsmasq. To deploy dnsmasq, run the following steps on the OpenShift master.

# git clone https://github.com/ivanthelad/ansible-aos-scripts.git
#cd ansible-aos-scripts

Edit the inventory file and set dns to the IP of the system that should provide DNS. Also ensure nodes and masters have the correct IPs for your OpenShift servers. In our case 192.168.122.60 is master, node and DNS.

# vi inventory

[dns]
192.168.122.60
[nodes]
192.168.122.60
[masters]
192.168.122.60

Configure dnsmasq and add wildcard DNS so that all hosts under the application subdomain resolve to the OpenShift master, which also runs the router in this all-in-one setup.

# vi playbooks/roles/dnsmasq/templates/dnsmasq.conf

strict-order
domain-needed
local=/lab.com/
bind-dynamic
resolv-file=/etc/resolv.conf.upstream
no-hosts
address=/.cloudapps.lab.com/192.168.122.60
address=/ose3-master.lab.com/192.168.122.60
log-queries

Ensure all hosts you want in DNS are also in /etc/hosts. The dnsmasq service reads /etc/hosts upon startup so all entries in hosts file can be queried through DNS.

#vi /etc/hosts

192.168.122.60  ose3-master.lab.com     ose3-master

Configure ssh on DNS host

#ssh-keygen
#ssh-copy-id -i ~/.ssh/id_rsa.pub ose3-master.lab.com

Install dnsmasq via ansible

# ansible-playbook -i inventory playbooks/install_dnsmas.yml

If you need to make changes you can edit the /etc/dnsmasq.conf file and restart dnsmasq service. Below is a sample dnsmasq.conf.

# vi /etc/dnsmasq.conf
strict-order
domain-needed
local=/example.com/
bind-dynamic
resolv-file=/etc/resolv.conf.upstream
no-hosts
address=/.apps.lab.com/192.168.122.60
address=/ose3-master.lab.com/192.168.122.60
address=/kubernetes.default.svc/192.168.122.60
log-queries
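
Once dnsmasq is running you can verify that both the host records and the wildcard application records resolve to the master (myapp is an arbitrary name; anything under the wildcard should resolve):

# dig @192.168.122.60 ose3-master.lab.com +short
# dig @192.168.122.60 myapp.apps.lab.com +short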

NAMED

Install DNS tools and utilities.

# yum -y install bind bind-utils
# systemctl enable named
# systemctl start named

Set firewall rules using iptables.

# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT
# iptables -A INPUT -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT

Save the iptables rules.

# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]

Note: If you are using firewalld you can just enable the DNS service using the firewall-cmd utility.

Example of zone file for lab.com.

vi /var/named/dynamic/lab.com.zone 

$ORIGIN lab.com.
$TTL 86400
@ IN SOA dns1.lab.com. hostmaster.lab.com. (
 2001062501 ; serial
 21600 ; refresh after 6 hours
 3600 ; retry after 1 hour
 604800 ; expire after 1 week
 86400 ) ; minimum TTL of 1 day
;
;
 IN NS dns1.lab.com.
dns1 IN A 192.168.122.1
 IN AAAA aaaa:bbbb::1
ose3-master IN A 192.168.122.60
*.cloudapps 300 IN A 192.168.122.60

Example of named configuration.

# vi /etc/named.conf 
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
 listen-on port 53 { 127.0.0.1;192.168.122.1; };
 listen-on-v6 port 53 { ::1; };
 directory "/var/named";
 dump-file "/var/named/data/cache_dump.db";
 statistics-file "/var/named/data/named_stats.txt";
 memstatistics-file "/var/named/data/named_mem_stats.txt";
 allow-query { localhost;192.168.122.0/24;192.168.123.0/24; };

/* 
 - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
 - If you are building a RECURSIVE (caching) DNS server, you need to enable 
 recursion. 
 - If your recursive DNS server has a public IP address, you MUST enable access 
 control to limit queries to your legitimate users. Failing to do so will
 cause your server to become part of large scale DNS amplification 
 attacks. Implementing BCP38 within your network would greatly
 reduce such attack surface 
 */
 recursion yes;

dnssec-enable yes;
 dnssec-validation yes;
 dnssec-lookaside auto;

/* Path to ISC DLV key */
 bindkeys-file "/etc/named.iscdlv.key";

managed-keys-directory "/var/named/dynamic";

pid-file "/run/named/named.pid";
 session-keyfile "/run/named/session.key";

//forward first;
 forwarders {
 //10.38.5.26;
 8.8.8.8;
 };
};

logging {
 channel default_debug {
 file "data/named.run";
 severity dynamic;
 };
};

zone "." IN {
 type hint;
 file "named.ca";
};

zone "lab.com" IN {
 type master;
 file "/var/named/dynamic/lab.com.zone";
 allow-update { none; };
};

//zone "122.168.192.in-addr.arpa" IN {
// type master;
// file "/var/named/dynamic/122.168.192.db";
// allow-update { none; };
//};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

Note: I have left out reverse DNS (PTR records). If you need this you can of course add a reverse zone file and set that up, but it isn’t required for a lab configuration.

Install OpenShift.

Here we are enabling the ovs-subnet SDN and setting authentication to use htpasswd. This is the most basic configuration as we are doing an all-in-one setup. For actual deployments you would want multiple masters, dedicated nodes and separate nodes for handling etcd.
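
The Ansible inventory below drives the installer; it normally lives in /etc/ansible/hosts, which is the default inventory the openshift-ansible BYO playbook picks up unless you pass -i:

# vi /etc/ansible/hosts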

##########################
### OSEv3 Server Types ###
##########################
[OSEv3:children]
masters
nodes
etcd

################################################
### Set variables common for all OSEv3 hosts ###
################################################
[OSEv3:vars]
ansible_ssh_user=root
os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'
deployment_type=openshift-enterprise
openshift_master_default_subdomain=apps.lab.com
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_node_kubelet_args={'maximum-dead-containers': ['100'], 'maximum-dead-containers-per-container': ['2'], 'minimum-container-ttl-duration': ['10s'], 'max-pods': ['110'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}
logrotate_scripts=[{"name": "syslog", "path": "/var/log/cron\n/var/log/maillog\n/var/log/messages\n/var/log/secure\n/var/log/spooler\n", "options": ["daily", "rotate 7", "compress", "sharedscripts", "missingok"], "scripts": {"postrotate": "/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true"}}]
openshift_docker_options="--log-driver json-file --log-opt max-size=1M --log-opt max-file=3"
openshift_node_iptables_sync_period=5s
openshift_master_pod_eviction_timeout=3m
osm_controller_args={'resource-quota-sync-period': ['10s']}
osm_api_server_args={'max-requests-inflight': ['400']}
openshift_use_dnsmasq=true

##############################
### host group for masters ###
##############################
[masters]
ose3-master.lab.com

###################################
### host group for etcd servers ###
###################################
[etcd]
ose3-master.lab.com

##################################################
### host group for nodes, includes region info ###
##################################################
[nodes]
ose3-master.lab.com openshift_schedulable=True

Run Ansible playbook to install and configure OpenShift.

# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

Configure OpenShift

Create local admin account and enable permissions.

[root@ose3-master ~]#oc login -u system:admin -n default
[root@ose3-master ~]#htpasswd -c /etc/origin/master/htpasswd admin
[root@ose3-master ~]#oadm policy add-cluster-role-to-user cluster-admin admin
[root@ose3-master ~]#oc login -u admin -n default

Configure the OpenShift image registry. Image streams are stored in the registry. When you build an application, your application code is added as an image stream. This enables S2I (Source-to-Image) and allows for fast build times.

[root@ose3-master ~]#oadm registry --service-account=registry \
--config=/etc/origin/master/admin.kubeconfig \
--images='registry.access.redhat.com/openshift3/ose-${component}:${version}'

Configure the OpenShift router. The OpenShift router is basically an HAProxy that forwards incoming requests to the node where the target pod is running.

[root@ose3-master ~]#oadm router router --replicas=1 \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router

CloudForms Integration

CloudForms is a cloud management platform. It integrates not only with OpenShift but also with other cloud platforms (OpenStack, Amazon, GCE, Azure) and traditional virtualization platforms (VMware, RHEV, Hyper-V). Since OpenShift usually runs on a cloud or traditional virtualization platform, CloudForms enables true end-to-end visibility. CloudForms provides not only performance metrics, events and smart state analysis of containers (scanning container contents), but can also provide chargeback for OpenShift projects. CloudForms is included in the OpenShift subscription for the purpose of managing OpenShift. To add OpenShift as a provider in CloudForms, follow the steps below.

The management-admin project is designed for scanning container images: a container is started in this context and the image to be scanned is mounted into it. List the tokens configured for the management-admin service account (this is created at install time).

[root@ose3-master ~]# oc get sa management-admin -o yaml
apiVersion: v1
imagePullSecrets:
- name: management-admin-dockercfg-ln1an
kind: ServiceAccount
metadata:
 creationTimestamp: 2016-07-24T11:36:58Z
 name: management-admin
 namespace: management-infra
 resourceVersion: "400"
 selfLink: /api/v1/namespaces/management-infra/serviceaccounts/management-admin
 uid: ee6a1426-5192-11e6-baff-001a4ae42e01
secrets:
- name: management-admin-token-wx17s
- name: management-admin-dockercfg-ln1an

Use describe to get the token that enables CloudForms to access the management-admin project.

[root@ose3-master ~]# oc describe secret management-admin-token-wx17s
Name: management-admin-token-wx17s
Namespace: management-infra
Labels: <none>
Annotations: kubernetes.io/service-account.name=management-admin,kubernetes.io/service-account.uid=ee6a1426-5192-11e6-baff-001a4ae42e01

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1066 bytes
namespace: 16 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtYW5hZ2VtZW50LWluZnJhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im1hbmFnZW1lbnQtYWRtaW4tdG9rZW4td3gxN3MiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibWFuYWdlbWVudC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImVlNmExNDI2LTUxOTItMTFlNi1iYWZmLTAwMWE0YWU0MmUwMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptYW5hZ2VtZW50LWluZnJhOm1hbmFnZW1lbnQtYWRtaW4ifQ.Y0IlcwhHW_CpKyFvk_ap-JMAT69fbIqCjkAbmpgZEUJ587LP0pQz06OpBW05XNJ3cJg5HeckF0IjCJBDbMS3P1W7KAnLrL9uKlVsZ7qZ8-M2yvckdIxzmEy48lG0GkjtUVMeAOJozpDieFClc-ZJbMrYxocjasevVNQHAUpSwOIATzcuV3bIjcLNwD82-42F7ykMn-A-TaeCXbliFApt6q-R0hURXCZ0dkWC-za2qZ3tVXaykWmoIFBVs6wgY2budZZLhT4K9b4lbiWC5udQ6ga2ATZO1ioRg-bVZXcTin5kf__a5u6c775-8n6DeLPcfUqnLucaYr2Ov7RistJRvg

Add OpenShift provider to CloudForms using the management-admin service token.

CF_CONTAINER_2

Performance Metrics

OpenShift provides the ability to collect performance metrics using Hawkular. Hawkular runs as a container and uses Cassandra to persist the data. CloudForms is able to display capacity and utilization metrics for OpenShift using Hawkular.

Switch to the openshift-infra project. This project is intended for running infrastructure containers such as Hawkular or ELK for logging.

[root@ose3-master ~]# oc project openshift-infra

Create service account for metrics-deployer pod.

[root@ose3-master ~]# oc create -f - <<API
apiVersion: v1
kind: ServiceAccount
metadata:
 name: metrics-deployer
secrets:
- name: metrics-deployer
API

Enable permissions and set secret.

[root@ose3-master ~]# oadm policy add-role-to-user edit system:serviceaccount:openshift-infra:metrics-deployer
[root@ose3-master ~]#oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:openshift-infra:heapster
[root@ose3-master ~]# oc secrets new metrics-deployer nothing=/dev/null

Deploy metrics environment for OpenShift.

[root@ose3-master ~]# oc new-app -f /usr/share/openshift/examples/infrastructure-templates/enterprise/metrics-deployer.yaml \
-p HAWKULAR_METRICS_HOSTNAME=hawkular-metrics.apps.lab.com \
-p USE_PERSISTENT_STORAGE=false -p MASTER_URL=https://ose3-master.lab.com:8443

CloudForms Container Provider

CloudForms can also manage OpenShift as a container provider; the platform itself was introduced in the CloudForms Integration section above. To add OpenShift as a provider in CloudForms, follow the steps below.

Use the management-admin token, created in the management-infra project during install, to provide CloudForms with access.

[root@ose3-master ~]# oc describe sa -n management-infra management-admin
Name: management-admin
Namespace: management-infra
Labels: <none>

Mountable secrets: management-admin-token-vr21i
 management-admin-dockercfg-5j3m3

Tokens: management-admin-token-mxy4m
 management-admin-token-vr21i

Image pull secrets: management-admin-dockercfg-5j3m3
[root@ose3-master ~]# oc describe secret -n management-infra management-admin-token-mxy4m
Name: management-admin-token-mxy4m
Namespace: management-infra
Labels: <none>
Annotations: kubernetes.io/service-account.name=management-admin,kubernetes.io/service-account.uid=87f8f4e4-4c0f-11e6-8aca-52540057bf27

Type: kubernetes.io/service-account-token

Data
====
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtYW5hZ2VtZW50LWluZnJhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im1hbmFnZW1lbnQtYWRtaW4tdG9rZW4tbXh5NG0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibWFuYWdlbWVudC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijg3ZjhmNGU0LTRjMGYtMTFlNi04YWNhLTUyNTQwMDU3YmYyNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptYW5hZ2VtZW50LWluZnJhOm1hbmFnZW1lbnQtYWRtaW4ifQ.dN-CmGdSR2TRh1h0qHvwkqnW6TLvhXJtuHX6qY2jsrZIZCg2LcyuQI9edjBhl5tDE6PfOrpmh9-1NKAA6xbbYVJlRz52gnEdtm1PVgvzh8_WnKiQLZu-xC1qRX_YL7ohbglFSf8b5zgf4lBdJbgM_2P4sm1Czhu8lr5A4ix95y40zEl3P2R_aXnns62hrRF9XpmweASGMjooKOHB_5HUcZ8QhvdgsveD4j9de-ZzYrUDHi0NqOEtenBThe5kbEpiWzSWMAkIeC2wDPEnaMTyOM2bEfY04bwz5IVS_IAnrEF7PogejgsrAQRtYss5yKSZfwNTyraAXSobgVa-e4NsWg
ca.crt: 1066 bytes
namespace: 16 bytes

Add OpenShift provider to CloudForms using token.

OSE3_login_cf

Configure metrics by supplying the service name exposed by OpenShift.

OSE3_hawkular_cf

Choose a container image to scan.

OSEV3_SMartState

You should see the scanning container start in the management-infra project.

[root@ose3-master ~]# oc project management-infra
[root@ose3-master ~]# oc get pods
NAME READY STATUS RESTARTS AGE
manageiq-img-scan-24297 0/1 ContainerCreating 0 12s
[root@ose3-master ~]# oc get pods
NAME READY STATUS RESTARTS AGE
manageiq-img-scan-24297 1/1 Running 0 1m

Check the image in CloudForms: you should now see an OpenSCAP report as well as visibility into the packages that are actually installed in the container itself.

Compute->Containers-Container Images->MySQL

OSEv3_SMart_State_REsults

Packages

OSEV3_Packages

OpenScap HTML Report

OSEV3_OpenScap

Aggregate Logging

OpenShift Enterprise supports Kibana and the ELK stack for log aggregation. Any pod or container that logs to STDOUT will have its log messages aggregated. This provides centralized logging for all application components. Logging is completely integrated with OpenShift, and the ELK stack itself of course runs containerized within OpenShift.

In the openshift-infra project, create a service account for logging and grant it the necessary permissions.

[root@ose3-master ~]# oc project openshift-infra
[root@ose3-master ~]# oc secrets new logging-deployer nothing=/dev/null
[root@ose3-master ~]# oc create -f - <<API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logging-deployer
secrets:
- name: logging-deployer
API

[root@ose3-master ~]# oc policy add-role-to-user edit --serviceaccount logging-deployer
[root@ose3-master ~]# oadm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd
[root@ose3-master ~]# oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:logging:aggregated-logging-fluentd

Deploy the logging stack. This creates a template or blueprint.

[root@ose3-master ~]# oc new-app logging-deployer-template \
             --param KIBANA_HOSTNAME=kibana.apps.lab.com \
             --param ES_CLUSTER_SIZE=1 \
             --param KIBANA_OPS_HOSTNAME=kibana-ops.apps.lab.com \
             --param PUBLIC_MASTER_URL=https://ose3-master.lab.com:8443
[root@ose3-master ~]# oc get pods
NAME READY STATUS RESTARTS AGE
logging-deployer-1de06 0/1 Completed 0 3m

Once the logging deployer has completed, instantiate the support template.

[root@ose3-master ~]# oc new-app logging-support-template

If you don’t see containers being created, then you need to manually import the image streams.

[root@ose3-master ~]#oc import-image logging-auth-proxy:3.2.0 --from registry.access.redhat.com/openshift3/logging-auth-proxy:3.2.0
[root@ose3-master ~]# oc import-image logging-kibana:3.2.0 --from registry.access.redhat.com/openshift3/logging-kibana:3.2.0
[root@ose3-master ~]# oc import-image logging-elasticsearch:3.2.0 --from registry.access.redhat.com/openshift3/logging-elasticsearch:3.2.0
[root@ose3-master ~]# oc import-image logging-fluentd:3.2.0 --from registry.access.redhat.com/openshift3/logging-fluentd:3.2.0
[root@ose3-master ~]#  oc get pods
NAME READY STATUS RESTARTS AGE
logging-deployer-9lqkt 0/1 Completed 0 15m
logging-es-pm7uamdy-2-rdflo 1/1 Running 0 8m
logging-kibana-1-e13r3 2/2 Running 0 13m

Once the ELK stack is running, update the deployment so that persistent storage is used (optional). Note: this requires configuring persistent storage, which is explained in a different blog post referenced above.

#vi pvc.json

{ 
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
         "name": "logging-es-1"
    },
    "spec": {
        "accessModes": [ "ReadWriteOnce" ],
            "resources": {
                "requests": {
                    "storage": "10Gi"
                }
            }
     }
}
[root@ose3-master ~]# oc create -f pvc.json
[root@ose3-master ~]# oc get dc
NAME TRIGGERS LATEST
logging-es-pm7uamdy ConfigChange, ImageChange 2
[root@ose3-master ~]# oc volume dc/logging-es-pm7uamdy --add --overwrite --name=elasticsearch-storage --type=persistentVolumeClaim --claim-name=logging-es-1

OpenShift on RHEV or VMware.

If you are running OpenShift on a traditional virtualization platform, ensure MAC spoofing is enabled or allowed. If it isn’t, the hypervisor will most likely drop outbound packets from OpenShift. Enabling MAC spoofing for RHEV is documented here under configuring RHEV for OpenStack (the same issue exists when running OpenStack nested in RHEV). For VMware or Hyper-V I am not sure of the exact setting, so just keep this in mind.

Summary

In this article we have seen how to configure an OpenShift 3.2 all-in-one lab environment. We have also seen how the installation and configuration can be adapted through the Ansible playbook. We have seen how to configure various DNS options. It bears repeating that most OpenShift problems are a direct result of improper DNS setup! We have seen how to integrate OpenShift with CloudForms and how to configure metrics using Hawkular. Finally we even covered configuring log aggregation using a containerized ELK stack. If you have any feedback please share.

Happy OpenShifting!

(c) 2016 Keith Tenzer

 

 

 


Adding hosts to DNS via Ansible

Standard

Overview

I’m running a dynamic name service for my datacenter that is authoritative for example.com. Now I want to add address records and reverse pointer records to DNS to be able to resolve names.

Solution

To achieve this I added variables to my inventory:

cat /etc/ansible/hosts 

[glusterleft]
gluster11 fqdn=gluster11.example.com. ipaddress=172.16.20.103 reverse=103.20.16.172.in-addr.arpa.
gluster12 fqdn=gluster12.example.com. ipaddress=172.16.20.104 reverse=104.20.16.172.in-addr.arpa.

[glusterright]
gluster21 fqdn=gluster21.example.com. ipaddress=172.16.20.203 reverse=203.20.16.172.in-addr.arpa.
gluster22 fqdn=gluster22.example.com. ipaddress=172.16.20.204 reverse=204.20.16.172.in-addr.arpa.

[rhevleft]
rhev11 fqdn=rhev11.example.com. ipaddress=172.16.20.101 reverse=101.20.16.172.in-addr.arpa.
rhev12 fqdn=rhev12.example.com. ipaddress=172.16.20.102 reverse=102.20.16.172.in-addr.arpa.

[rhevright]
rhev21 fqdn=rhev21.example.com. ipaddress=172.16.20.201 reverse=201.20.16.172.in-addr.arpa.
rhev22 fqdn=rhev22.example.com. ipaddress=172.16.20.202 reverse=202.20.16.172.in-addr.arpa.

Note: It would definitely be possible to generate the reverse name from the IP address automatically. I leave this as a shell exercise for you; a possible sketch follows.
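As one possible sketch (not part of the original setup), the reverse name could be derived from the IP address with a short shell snippet like this:

ip=172.16.20.103
reverse=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$reverse"    # prints 103.20.16.172.in-addr.arpa.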

And wrote the following playbook:

[root@jump ansible]# cat named_addhosts.yml
---
- hosts: servers
  gather_facts: False
  serial: 1

  tasks:
    - name: check dns
      local_action: shell host {{ fqdn }}
      register: dnsout
      ignore_errors: yes

    - name: add dnsentry
      local_action: script /etc/ansible/named_update.sh {{ fqdn }} {{ ipaddress }} {{ reverse }}
      when: dnsout.stdout.find('{{ ipaddress }}') == -1
      run_once: true

The playbook is supported by a small helper shell script:

[root@jump ansible]# cat /etc/ansible/named_update.sh
#! /usr/bin/bash
# small script which updates dns via nsupdate
# needs 3 parameters
# hostname - full qualified (with a . at the end)
# ipaddress
# reverse - reverse ipaddress full qualified (with in-addr.arpa. ) 
if [ $# -ne 3 ]
then
 echo "usage: $0 hostname ipaddress reverse" >&2
 echo " with:" >&2
 echo " hostname - full qualified (with a . at the end)" >&2
 echo " ipaddress" >&2
 echo " reverse - reverse ipaddress full qualified (with in-addr.arpa. ) " >&2
 exit 1
fi
echo $1
echo $2
echo $3
nsupdate -k /etc/rndc.key << EOF
update add $1 3600 A $2
send
update add $3 3600 PTR $1
send
EOF
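Called by hand with the values of the first gluster node from the inventory above, an invocation of the helper script would look like this:

[root@jump ansible]# bash /etc/ansible/named_update.sh gluster11.example.com. 172.16.20.103 103.20.16.172.in-addr.arpa.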
 

It is also necessary to grant access to modify the DNS server. This access is granted through /etc/rndc.key, which I put in place beforehand.
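For completeness, a minimal sketch of the BIND side of that grant, assuming the default key name rndc-key generated by rndc-confgen -a; the key name and zone file path must of course match your own named configuration:

include "/etc/rndc.key";

zone "example.com" IN {
    type master;
    file "/var/named/dynamic/example.com.zone";
    allow-update { key "rndc-key"; };
};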

Conclusion

I’m now easily able to add Ansible-managed hosts to the DNS service.


[Howto] Rebase feature branches in Git/Github

Standard

Git-Icon-1788C

Updating a feature branch to the actual state of the upstream main branch can be troublesome. Here is a workflow that works – at least for me.

Developing with Git is amazing, due to the possibilities to work with feature branches, remote repositories and so on. However, at some point, after some hours of development, the base of a feature branch will be outdated and it makes sense to update it before a pull request is sent upstream. This is best done via rebasing. Here is a short workflow for a typical feature branch rebase I often need when developing, for example, Ansible modules.

  1. First, checkout the main branch, here devel.
  2. Update the main branch from the upstream repository.
  3. Rebase the local copy of the main branch.
  4. Push it to the remote origin, most likely your personal fork of the Git repo.
  5. Check out the feature branch
  6. Rebase the feature branch to the main branch.
  7. Force push the new history to the remote feature branch, most likely again your personal fork of the Git repo.

In terms of code this means:

$ git checkout devel
$ git fetch upstream devel
$ git rebase upstream/devel
$ git push
$ git checkout feature_branch
$ git rebase origin/devel
$ git push -f

This looks rather clean and easy, but I have to admit it took me quite a few errors and some Git cherry-picking to finally figure out what is needed and what actually works.



Using my fedora laptop as OpenShift demo center

Standard

Overview

I’m setting up an OpenShift demo following the helloworld-msa readme:

https://htmlpreview.github.io/?https://github.com/redhat-helloworld-msa/helloworld-msa/blob/master/readme.html

The laptop I’m using is a Lenovo ThinkPad running Fedora 23. The notebook is used for my day-to-day work and additionally as a presentation and demo laptop. Setting up the OpenShift demo on it is therefore a natural step.

Preparing the Demo

Installing vagrant

I decided to install the Fedora-provided Vagrant. This delivers version 1.8.1 instead of 1.8.4 (which is the current version today); I will see whether this works out. I also want to use Vagrant with libvirt, as this is the default virtualization provider on Fedora, and I hope not to run into any dependency issues.

I follow this route:

https://fedoramagazine.org/running-vagrant-fedora-22/

[mschreie@mschreie ~]$ sudo dnf install vagrant
[mschreie@mschreie ~]$ sudo dnf install vagrant-libvirt
[mschreie@mschreie ~]$ sudo cp /usr/share/vagrant/gems/doc/vagrant-libvirt-0.0.32/polkit/10-vagrant-libvirt.rules /etc/polkit-1/rules.d/
[mschreie@mschreie ~]$ sudo systemctl restart libvirtd
[mschreie@mschreie ~]$ sudo systemctl restart polkit
[mschreie@mschreie ~]$ sudo usermod -aG vagrant mschreie
[mschreie@mschreie ~]$ 

I did not install the lxc drivers (as I prefer to use docker).

Installing container tool kit

I downloaded

  • Red Hat Container Tools (cdk-2.1.0.zip)
  • RHEL 7.2 Vagrant for libvirt

from https://access.redhat.com/downloads/content/293/ver=2.1/rhel—7/2.1.0/x86_64/product-software

I moved the Vagrant box somewhere in my filesystem where I hope it fits:

[mschreie@mschreie ~]$ sudo mkdir /VirtualMachines/vagrant
[mschreie@mschreie ~]$ sudo chown mschreie: /VirtualMachines/vagrant
[mschreie@mschreie ~]$ ln -s /VirtualMachines/vagrant vagrant
[mschreie@mschreie ~]$ mv "/Archive/RPMs&tgz/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box" vagrant/

and unpacked and installed the cdk content as:

[mschreie@mschreie RPMs&tgz]$ cd
[mschreie@mschreie ~]$ unzip /Archive/RPMs\&tgz/cdk-2.1.0.zip
[mschreie@mschreie ~]$ sudo dnf install ruby-devel zlib-devel
[mschreie@mschreie ~]$ sudo dnf install rubygem-rubyzip
Last metadata expiration check: 3:02:11 ago on Thu Jul 14 14:24:17 2016.

Installing the Vagrant plugins runs smoothly:

[mschreie@mschreie ~]$ vagrant plugin install vagrant-service-manager
Installing the 'vagrant-service-manager' plugin. This can take a few minutes...
Installed the plugin 'vagrant-service-manager (1.2.0)'!
[mschreie@mschreie ~]$ vagrant plugin install vagrant-registration
Installing the 'vagrant-registration' plugin. This can take a few minutes...
Installed the plugin 'vagrant-registration (1.2.2)'!
[mschreie@mschreie ~]$ vagrant plugin install vagrant-sshfs
Installing the 'vagrant-sshfs' plugin. This can take a few minutes...
Installed the plugin 'vagrant-sshfs (1.1.0)'!
[mschreie@mschreie ~]$ vagrant plugin install zip
Installing the 'zip' plugin. This can take a few minutes...
Installed the plugin 'zip (2.0.2)'!

Now adding the vagrant box and starting it:

[mschreie@mschreie RPMs&tgz]$ vagrant box add  --name cdkv2 ./vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
 ==> box: Box file was not detected as metadata. Adding it directly... 
 ==> box: Adding box 'cdkv2' (v0) for provider:     
 box: Unpacking necessary files from: file:///home/mschreie/vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box 
 ==> box: Successfully added box 'cdkv2' (v0) for 'libvirt'! 
[mschreie@mschreie RPMs&tgz]$ 
[mschreie@mschreie RPMs&tgz]$ cd cdk/components/rhel/rhel-ose/ 
[mschreie@mschreie rhel-ose]$ export VM_MEMORY=8192 
[mschreie@mschreie rhel-ose]$ vagrant up 
[mschreie@mschreie rhel-ose]$ eval "$(vagrant service-manager env docker)"

I did not find a Fedora package containing oc and therefore downloaded the OpenShift 3.2 client (not 3.1 as linked on the documentation page) here:

https://access.redhat.com/downloads/content/290/ver=3.2/rhel—7/3.2.1.4/x86_64/product-software

[mschreie@mschreie RPMs&tgz]$ tar -xvf oc-3.2.1.4-linux.tar.gz
mnt/redhat/staging-cds/ose-clients-3.2.1.4/usr/share/atomic-openshift/linux/oc
[mschreie@mschreie RPMs&tgz]$ ln -s `pwd`/mnt/redhat/staging-cds/ose-clients-3.2.1.4/usr/share/atomic-openshift/linux/oc ~/bin/oc

Logged in via browser:

Open Openshift console: https://10.1.2.2:8443/console/
(Accept the certificate and proceed)

Use openshift-dev/devel as your credentials in CDK

or log in via cli:
[mschreie@mschreie RPMs&tgz]$ oc login 10.1.2.2:8443 -u openshift-dev -p devel

Installing the helloworld-msa demo

I’m installing the necessary tools needed to prepare the demo. I’m using Andy Neeb’s scripts to speed things up.

[mschreie@mschreie rhel-ose]$ sudo dnf install maven npm
[mschreie@mschreie frontend]$ sudo npm install -g bower
[mschreie@mschreie rhel-ose]$ wget https://github.com/andyneeb/msa-demo/raw/master/create-msa-demo.sh

I added a couple of lines at the top to capture the output of the script; in case of any failure I wanted the script to stop and to tell me where it stopped. This approach is not elegant but it works…

[mschreie@mschreie rhel-ose]$ cat create-msa-demo_msi.sh 
#!/bin/bash

#catch output in a file
exec >> ./create-msa-demo.log
exec 2>&1
set +x 
 
# Cleanup
# rm aloha/ api-gateway/ bonjour/ frontend/ hola/ ola/ -rf
 
# Login and create project
oc login 10.1.2.2:8443 -u openshift-dev -p devel || exit 1
oc new-project helloworld-msa || exit 2

# Deploy hola (JAX-RS/Wildfly Swarm) microservice
## git clone https://github.com/redhat-helloworld-msa/hola || exit 3
cd hola/
git pull || exit 3
oc new-build --binary --name=hola -l app=hola || exit 4
mvn package || exit 5
oc start-build hola --from-dir=. --follow || exit 6
oc new-app hola -l app=hola,hystrix.enabled=true || exit 7 
oc expose service hola || exit 8
oc set probe dc/hola --readiness --get-url=http://:8080/api/health || exit 9
cd ..

# Deploy aloha (Vert.x) microservice
## git clone https://github.com/redhat-helloworld-msa/aloha || exit 10
cd aloha/
git pull || exit 10
oc new-build --binary --name=aloha -l app=aloha || exit 11
mvn package || exit 12
oc start-build aloha --from-dir=. --follow || exit 12
oc new-app aloha -l app=aloha,hystrix.enabled=true || exit 13
oc expose service aloha || exit 14
oc patch dc/aloha -p '{"spec":{"template":{"spec":{"containers":[{"name":"aloha","ports":[{"containerPort": 8778,"name":"jolokia"}]}]}}}}' || exit 15
oc set probe dc/aloha --readiness --get-url=http://:8080/api/health || exit 16
cd ..

# Deploy ola (Spring Boot) microservice
## git clone https://github.com/redhat-helloworld-msa/ola || exit 17
cd ola/
git pull || exit 17
oc new-build --binary --name=ola -l app=ola || exit 18
mvn package || exit 19
oc start-build ola --from-dir=. --follow || exit 20
oc new-app ola -l app=ola,hystrix.enabled=true || exit 21
oc expose service ola || exit 22
oc patch dc/ola -p '{"spec":{"template":{"spec":{"containers":[{"name":"ola","ports":[{"containerPort": 8778,"name":"jolokia"}]}]}}}}' || exit 23
oc set probe dc/ola --readiness --get-url=http://:8080/api/health || exit 24
cd ..

# Deploy bonjour (NodeJS) microservice
## git clone https://github.com/redhat-helloworld-msa/bonjour || exit 25
cd bonjour/
git pull || exit 25
oc new-build --binary --name=bonjour -l app=bonjour || exit 26
npm install || exit 27
oc start-build bonjour --from-dir=. --follow || exit 28
oc new-app bonjour -l app=bonjour || exit 29
oc expose service bonjour || exit 30
oc set probe dc/bonjour --readiness --get-url=http://:8080/api/health || exit 31
cd ..

# Deploy api-gateway (Spring Boot)
## git clone https://github.com/redhat-helloworld-msa/api-gateway || exit 32
cd api-gateway/
git pull || exit 32
oc new-build --binary --name=api-gateway -l app=api-gateway || exit 33
mvn package || exit 34
oc start-build api-gateway --from-dir=. --follow || exit 35
oc new-app api-gateway -l app=api-gateway,hystrix.enabled=true || exit 36
oc expose service api-gateway || exit 37
oc patch dc/api-gateway -p '{"spec":{"template":{"spec":{"containers":[{"name":"api-gateway","ports":[{"containerPort": 8778,"name":"jolokia"}]}]}}}}' || exit 38
oc set probe dc/api-gateway --readiness --get-url=http://:8080/health || exit 39
cd ..

# Deploy Kubeflix
oc create -f http://central.maven.org/maven2/io/fabric8/kubeflix/packages/kubeflix/1.0.17/kubeflix-1.0.17-kubernetes.yml || exit 40
oc new-app kubeflix || exit 41
oc expose service hystrix-dashboard || exit 42
oc policy add-role-to-user admin system:serviceaccount:helloworld-msa:turbine || exit 43

# Deploy Kubernetes ZipKin
oc create -f http://repo1.maven.org/maven2/io/fabric8/zipkin/zipkin-starter-minimal/0.0.8/zipkin-starter-minimal-0.0.8-kubernetes.yml || exit 44
oc expose service zipkin-query || exit 45

# Deploy frontend (NodeJS/HTML5/JS)
## git clone https://github.com/redhat-helloworld-msa/frontend || exit 46
cd frontend/
git pull || exit 46
oc new-build --binary --name=frontend -l app=frontend || exit 47
npm install || exit 48
oc start-build frontend --from-dir=. --follow || exit 49
oc new-app frontend -l app=frontend || exit 50
oc expose service frontend || exit 51
cd ..

# Deploy Jenkins
oc login -u admin -p admin || exit 52
oc project openshift || exit 53
oc create -f https://raw.githubusercontent.com/redhat-helloworld-msa/jenkins/master/custom-jenkins.build.yaml || exit 54
oc start-build custom-jenkins-build --follow || exit 55

oc login -u openshift-dev -p devel || exit 56
oc new-project ci || exit 57
oc new-app -p MEMORY_LIMIT=1024Mi https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/jenkins-ephemeral-template.json || exit 58
oc project helloworld-msa || exit 59

And then just ran the script:

[mschreie@mschreie rhel-ose]$ bash -x create-msa-demo_msi.sh

The script takes a while. Please check the return code directly after the script has finished:

[mschreie@mschreie rhel-ose]$ echo $?
0

Seeing 0 is very good. Any other number tells you which “exit” command initiated the stop and therefore which command went wrong.

Additionally, it is wise to search the output for anything that went wrong:

[mschreie@mschreie rhel-ose]$ egrep -i "err|warn|not found" ./create-msa-demo.log

Testing my setup:

Access the endpoint microservices:

and the frontend itself:

To demonstrate the demo I also use some other scripts from Andy Neeb, which need to be downloaded:

[mschreie@mschreie rhel-ose]$ wget https://raw.githubusercontent.com/andyneeb/msa-demo/master/break-production.sh
[mschreie@mschreie rhel-ose]$ wget https://raw.githubusercontent.com/andyneeb/msa-demo/master/trigger-jenkins.sh

 

Using the environment / demoing

Starting and Stopping the environment

Stopping the demo

[mschreie@mschreie rhel-ose]$ vagrant halt

Starting the demo

[mschreie@mschreie ~]$ export VM_MEMORY=8192
[mschreie@mschreie ~]$ cd cdk/components/rhel/rhel-ose/
[mschreie@mschreie rhel-ose]$ vagrant up
[mschreie@mschreie rhel-ose]$ eval "$(vagrant service-manager env docker)"

Demoing the CI/CD Pipeline

Prepare an error

Prepare an error in production so we have some reason to fix:

[mschreie@mschreie ~]$ cd cdk/components/rhel/rhel-ose/
[mschreie@mschreie rhel-ose]$ bash -x break-production.sh

Note: this error was injected directly into production. You will find other builds with other behavior in dev and qa.

You might want to check: you should find the output “aloca” instead of “aloha”.

Demo the CI/CD Pipeline

Now we correct the code again:

[mschreie@mschreie rhel-ose]$ sed -i 's/return String.format(Aloca mai %s, hostname);/return String.format(Aloha mai %s, hostname);/g' aloha/src/main/java/com/redhat/developers/msa/aloha/AlohaVerticle.java

and trigger the build pipeline through jenkins:

[mschreie@mschreie rhel-ose]$ bash trigger-jenkins.sh

Please look at:

https://jenkins-ci.rhel-cdk.10.1.2.2.xip.io/job/Aloha%20Microservices/

Login with: admin / password

You will see the build chain stops with “wait for approval”

Before continuing, please also check:

https://10.1.2.2:8443/console/

Login with: openshift-dev / devel

Navigate to -> helloworld-msa ->
For the aloha-helloworld-msa… please note the Image-id.

You can also click on the service and verify the output.

You should do the same for helloworld-msa-dev and helloworld-msa-qa as well.

The image-id should be identical in dev and qa and different in prod.

After approval you might see how a new pod is fired up in prod and afterwards the old pod is torn down. Prod should now have the same image-id.

Troubleshooting:

You might run into some issues; some of them are mentioned here with an adequate solution:

“can’t find header files for ruby”  and/or
“zlib is missing” while installing vagrant-service-manager

[mschreie@mschreie ~]$ vagrant plugin install vagrant-service-manager

might throw the following errors:

    /usr/bin/ruby -r ./siteconf20160714-27092-1aqlxn4.rb extconf.rb
mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h

or:

em::Ext::BuildError: ERROR: Failed to build gem native extension.
    /usr/bin/ruby -r ./siteconf20160714-27504-ti4z51.rb extconf.rb

zlib is missing; necessary for building libxml2

You need to install additional rpms to get rid of these errors:

[mschreie@mschreie ~]$ sudo dnf install ruby-devel zlib-devel

“cannot load such file -- zip” while adding vagrant box:

The following error message indicates that the plugin named “zip” cannot be loaded; after installing the missing rubygem and the additional Vagrant plugin this was fixed:

[mschreie@mschreie ~]$ vagrant box add --name cdkv2 ./vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
Vagrant failed to initialize at a very early stage:

The plugins failed to load properly. The error message given is
shown below.

cannot load such file -- zip

I fixed that with the following commands:

[mschreie@mschreie RPMs&tgz]$ sudo dnf install rubygem-rubyzip
Last metadata expiration check: 3:02:11 ago on Thu Jul 14 14:24:17 2016.
[mschreie@mschreie RPMs&tgz]$ vagrant plugin install zip
Installing the 'zip' plugin. This can take a few minutes...
Installed the plugin 'zip (2.0.2)'!

accessing the docker daemon via docker cli:

I had some issues with docker:

[mschreie@mschreie ~]$ docker ps -a -q --no-trunc
 Cannot connect to the Docker daemon. Is the docker daemon running on this host?

and fixed them via:

[mschreie@mschreie ~]$ sudo usermod -aG docker mschreie

The docker command ran through after a relogin.

vagrant box hangs:

I experienced my box hanging repeatedly. This led to the following symptoms:

oc commands returned

Unable to connect to the server: net/http: TLS handshake timeout

or returned

The connection to the server 10.1.2.2:8443 was refused - did you specify the right host or port?

Also the OpenShift web UI showed pods to be unresponsive:

"This pod has been stuck in the pending state for more than five minutes."

During these hangs I could not run any command at the SSH prompt of the box either. While the box was responsive I checked the memory. If everything is correct it should look like this:

[mschreie@mschreie rhel-ose]$ export VM_MEMORY=8192
[mschreie@mschreie rhel-ose]$ vagrant ssh
Last login: Tue Jul 19 13:15:27 2016 from 192.168.121.1
[vagrant@rhel-cdk ~]$ cat /proc/meminfo | grep MemTotal
MemTotal: 8011096 kB

If this does not show 8 GB, then you did not set the memory correctly: you need to define and export the VM_MEMORY variable before starting the Vagrant box.

vagrant up – issues:

[mschreie@mschreie rhel-ose]$ vagrant up
Bringing machine 'default' up with 'libvirt' provider...
Name `rhel-ose_default` of domain about to create is already taken. Please try to run
`vagrant up` command again.

I had quite some hassle fixing this, but I believe the following commands did the trick:

[mschreie@mschreie rhel-ose]$ vagrant destroy
==> default: Remove stale volume...
==> default: Domain is not created. Please run `vagrant up` first.
[mschreie@mschreie rhel-ose]$ vagrant box list
cdkv2 (libvirt, 0)
[mschreie@mschreie rhel-ose]$ vagrant box remove cdkv2
Removing box 'cdkv2' (v0) with provider 'libvirt'...
Vagrant-libvirt plugin removed box only from you LOCAL ~/.vagrant/boxes directory
From libvirt storage pool you have to delete image manually(virsh, virt-manager or by any other tool)

[mschreie@mschreie rhel-ose]$ find / -name .vagrant  2>/dev/null
....
[mschreie@mschreie rhel-ose]$ rm -rf .vagrant/
[mschreie@mschreie rhel-ose]$ sudo virsh list | grep rhel-ose_default
[mschreie@mschreie rhel-ose]$ sudo virsh managedsave-remove rhel-ose_default
Removed managedsave image for domain rhel-ose_default
[mschreie@mschreie rhel-ose]$ sudo virsh  undefine rhel-ose_default
Domain rhel-ose_default has been undefined
[mschreie@mschreie rhel-ose]$ sudo rm /VirtualMachines/rhel-ose_default.img
[mschreie@mschreie rhel-ose]$ systemctl restart libvirtd
[mschreie@mschreie rhel-ose]$

And then finally:

[mschreie@mschreie rhel-ose]$ vagrant box add --name cdkv2 ~/vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'cdkv2' (v0) for provider: 
 box: Unpacking necessary files from: file:///home/mschreie/vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
==> box: Successfully added box 'cdkv2' (v0) for 'libvirt'!
[mschreie@mschreie rhel-ose]$ vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default: -- Name: rhel-ose_default
==> default: -- Domain type: kvm
==> default: -- Cpus: 2
==> default: -- Memory: 8192M
==> default: -- Management MAC: 
==> default: -- Loader: 
==> default: -- Base box: cdkv2
==> default: -- Storage pool: default
==> default: -- Image: /var/lib/libvirt/images/rhel-ose_default.img (41G)
==> default: -- Volume Cache: default
==> default: -- Kernel: 
==> default: -- Initrd: 
==> default: -- Graphics Type: vnc
==> default: -- Graphics Port: 5900
==> default: -- Graphics IP: 127.0.0.1
==> default: -- Graphics Password: Not defined
==> default: -- Video Type: cirrus
==> default: -- Video VRAM: 9216
==> default: -- Keymap: en-us
==> default: -- TPM Path: 
==> default: -- INPUT: type=mouse, bus=ps2
==> default: -- Command line : 
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...
==> default: Waiting for SSH to become available...
 default: 
 default: Vagrant insecure key detected. Vagrant will automatically replace
 default: this with a newly generated keypair for better security.
 default: 
 default: Inserting generated public key within guest...
 default: Removing insecure key from the guest if it's present...
 default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Registering box with vagrant-registration...
 default: Would you like to register the system now (default: yes)? [y|n]n
==> default: Configuring and enabling network interfaces...
Copying TLS certificates to /home/mschreie/cdk/components/rhel/rhel-ose/.vagrant/machines/default/libvirt/docker
==> default: Rsyncing folder: /home/mschreie/cdk/components/rhel/rhel-ose/ => /vagrant
==> default: Running provisioner: shell...
 default: Running: inline script
==> default: Created symlink from /etc/systemd/system/multi-user.target.wants/openshift.service to /usr/lib/systemd/system/openshift.service.
==> default: Running provisioner: shell...
 default: Running: inline script
==> default: Successfully started and provisioned VM with 2 cores and 819 MB of memory.
==> default: To modify the number of cores and/or available memory set the environment variables
==> default: VM_CPU respectively VM_MEMORY.
==> default: You can now access the OpenShift console on: https://10.1.2.2:8443/console
==> default: To use OpenShift CLI, run:
==> default: $ vagrant ssh
==> default: $ oc login 10.1.2.2:8443
==> default: Configured users are (<username>/<password>):
==> default: openshift-dev/devel
==> default: admin/admin
==> default: If you have the oc client library on your host, you can also login from your host.
[mschreie@mschreie rhel-ose]$

trouble installing “frontend”

Some of my microservices just did not work, even though I could not find any “Error” in the output of the script. For the troublesome services I ran the commands one by one by hand and found an error buried in the output of

[mschreie@mschreie frontend]$ npm install
......
> bower install

sh: bower: command not found

There is no “error” keyword to grep for; perhaps the “WARN” messages are related to this, or grepping for “not found” would have helped.

After the following additional install, things ran smoothly.

[mschreie@mschreie frontend]$ sudo npm install -g bower

Interesting links:

 

The Red Hat Container Development Kit (CDK) getting started guide:
https://access.redhat.com/documentation/en/red-hat-container-development-kit/2.1/getting-started-guide/

The main page from which I set up my demo:
https://htmlpreview.github.io/?https://github.com/redhat-helloworld-msa/helloworld-msa/blob/master/readme.html

Andy Neeb’s scripts:
https://github.com/andyneeb/msa-demo

RH-internal:
http://jenkinscat.gsslab.pnq.redhat.com:8080/view/CDK/job/doc-CDK_Installation_Guide%20%28html-single%29/lastSuccessfulBuild/artifact/index.html#troubleshooting_container_development_kit_problems

Conclusion

We managed to set up an OpenShift demo on our laptop using Vagrant, and demonstrated a possible solution for typical demands on automated but controlled deployment chains.


OpenStack Networking 101 for non-Network engineers

Standard

OpenStack-Neutron-Fits-like-Lego

Overview

In this article we will take a deeper look at OpenStack networking and try to understand general networking concepts. We will look at how these concepts are implemented within OpenStack and also discuss SDNs, network scalability and HA.

The most complex service within OpenStack is certainly Neutron. Networking principles have not changed, however Neutron provides a lot of new abstractions that make it rather difficult to follow or understand traffic flows. On top of that, there are many, many ways to build network architectures with Neutron, and a huge 3rd-party ecosystem exists around Neutron that can make things even more confusing.

Networking Basics

You cannot really start a discussion around networking basics without mentioning the OSI model so that is where we will begin as well.

basics_osimodel

The OSI model identifies 7 layers; for the purposes of Neutron we are primarily concerned with layer 1 (physical), layer 2 (data link), layer 3 (network) and layer 4 (transport). On layer 1, data is transmitted as raw bits over the physical medium. On layer 2, those bits are organized into Ethernet frames; layer 3 packets are in turn encapsulated within those frames.

ethernet_frame

Ethernet frames have source and destination MAC addresses but do not include routing information. Layer 2 can only broadcast on the local network segment. The frame does have a placeholder for a VLAN ID so traffic can be delivered to the correct network segment based on VLAN. A VLAN is nothing more than a logical representation of a layer 2 segment.
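As a quick illustration (the interface name, VLAN ID and address here are made up for the example), a tagged VLAN sub-interface can be created on a Linux host with iproute2:

# Create a sub-interface carrying VLAN ID 100 on top of eth0
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 192.168.100.10/24 dev eth0.100
ip link set dev eth0.100 up
# Show the VLAN details of the new interface
ip -d link show eth0.100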

simple_lan

Each host on a layer 2 network segment can communicate with the others using Ethernet frames, specifying the source and destination MAC addresses. ARP (Address Resolution Protocol) is used to discover which MAC address corresponds to a given IP address.

arp_example

Once a MAC address has been discovered it is cached on the clients and stored in the ARP cache.

arp_cache
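On a Linux host the ARP cache can be inspected directly, for example:

# Show the current ARP/neighbor cache
ip neigh show
# Older equivalent
arp -n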

Traffic bound for hosts that don’t exist on the local layer 2 network segment must be routed over layer 3. In other words, layer 3 simply connects multiple layer 2 networks together.

lan_with_router

In this example we have three class C (255.255.255.0) subnets. Communication between subnets requires layer 3 routing. Communication within a subnet uses layer 2 Ethernet frames and ARP. ICMP (Internet Control Message Protocol) works at layer 3; tools that use ICMP are ping and mtr. Layer 3 traffic traverses networks, and each device has a routing table that knows the next hop.

IP

We can look at the routing table, and with commands like “ip route get”, “traceroute” and “tracepath” we can understand traffic patterns within a layer 3 network.
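For example (8.8.8.8 is just an arbitrary destination):

# Show which route and source address would be used to reach 8.8.8.8
ip route get 8.8.8.8
# Display the full routing table
ip route show
# Trace the layer 3 hops toward the destination
tracepath 8.8.8.8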

Layer 4 is of course where we get into TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).

TCP is a reliable protocol that ensures flow control, retransmission and ordered delivery of packets. Ports or socket streams are used to uniquely identify applications communicating with one another. The port range is 1-65535, with ports 1-1023 reserved as system ports. The default ephemeral port range in Linux is 32768-61000.
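These ranges and the sockets in use can be checked on any Linux host, for example:

# Show the ephemeral port range used for outgoing connections
cat /proc/sys/net/ipv4/ip_local_port_range
# List listening TCP sockets and the owning processes
ss -tlnp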

UDP, unlike TCP, is a connectionless protocol. Since delivery and sequential ordering are not guaranteed, UDP is not a reliable protocol. Common applications in the OpenStack ecosystem that use UDP are DHCP, DNS, NTP and VXLAN.

Network Tunneling

Tunneling allows a network to support a service or protocol that isn’t natively supported within the network. Tunneling works by encapsulating the original packets inside the packets of another protocol. It allows for connecting dissimilar networks, encapsulating services such as IPv6 in IPv4, and securely connecting non-trusted networks, as is the case with VPNs. Open vSwitch, the out-of-the-box SDN provided with OpenStack, supports the following tunneling protocols: GRE (Generic Routing Encapsulation), VXLAN (Virtual Extensible LAN) and GENEVE (Generic Network Virtualization Encapsulation).
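To make this more concrete, here is a minimal sketch of adding a VXLAN tunnel port to an Open vSwitch bridge by hand; the bridge name and remote IP are illustrative, and in an OpenStack deployment Neutron creates these ports for you:

# Add a VXLAN tunnel port to the br-tun bridge pointing at a peer hypervisor
ovs-vsctl add-port br-tun vxlan-peer1 -- set interface vxlan-peer1 \
    type=vxlan options:remote_ip=192.168.0.11
# Verify the bridge and tunnel port configuration
ovs-vsctl show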

Network Namespaces

Linux network namespaces allow for more granular segregation of software-defined networks. Since namespaces are logically segregated, IP ranges can overlap between namespaces without conflict. In order to see the networking within a namespace, commands such as ip, ping, tcpdump, etc. need to be executed inside that namespace.

To list network namespaces use below command.

# ip netns show

qdhcp-e6c4e128-5a86-47a7-b501-737935680090
qrouter-e7d9bf3c-22a7-4413-9e44-c1fb450f1432

To get list of interfaces use below command.

# ip netns exec qrouter-e7d9bf3c-22a7-4413-9e44-c1fb450f1432 ip a

namespaces

Network Concepts Applied to OpenStack

Now that we have a basic overview of networking, let’s see how it applies to Neutron. First, Neutron is software-defined: you certainly need hardware (switches, routers, etc.), but Neutron does not concern itself directly with the hardware. It is an abstraction that works at layers 2, 3 and 4. Neutron defines two types of networks: tenant and provider.

Tenant Network

A tenant network is a layer 2 network that exists only within the OpenStack environment. A tenant network spans compute nodes, and tenant networks are isolated from one another. A tenant network is not reachable from outside the OpenStack environment. The main idea behind tenant networks is to abstract network complexity from the consumer. Since tenant networks are isolated, you don’t have to worry about IP address range conflicts. This allows creating new networks in a simple, scalable fashion.

Floating IPs

Neutron creates an abstraction around IP ranges; tenant networks are completely isolated from the real physical networks. In OpenStack an instance gets a tenant IP. You can certainly put your tenant networks on physical networks, but then you lose a lot of scalability and flexibility, hence most OpenStack deployments use floating IPs to connect instances to the outside world. A floating IP is a pair of SNAT/DNAT rules created in the iptables of the qrouter network namespace. From within the instance you will only see the tenant IP, never the floating IP. Floating IPs are only needed for reaching an instance from outside, for example when connecting via SSH.
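You can see these NAT rules yourself by looking inside the router namespace; the router UUID below is the one from the namespace example earlier, and the grep pattern is just an illustrative floating IP range:

# List the SNAT/DNAT rules Neutron created for floating IPs
ip netns exec qrouter-e7d9bf3c-22a7-4413-9e44-c1fb450f1432 iptables -t nat -S | grep 192.168.122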

Provider Network

A provider network connects to a physical network that exists outside of OpenStack. In this case each instance gets an IP on the external physical network. Floating IPs are not used or needed. From a networking standpoint, using provider networks makes things simple, but you lose a lot of flexibility and scalability. Each compute node needs a physical connection to each provider network. Usually deployments create a large bond and use VLAN tagging on the bond to reach the provider networks.
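As a sketch of that pattern, a VLAN-tagged provider network could be created like this; the physical network name matches the flat provider example later in this article, while the VLAN ID 101 is made up:

# Create a provider network mapped to VLAN 101 on the extnet physical network
neutron net-create provider-vlan101 --shared \
    --provider:network_type vlan \
    --provider:physical_network extnet \
    --provider:segmentation_id 101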

Traffic Flows

Both north/south and east/west traffic flows exist within an OpenStack environment. A north/south traffic flow occurs when traffic leaves the OpenStack environment, i.e. its source or destination is an external network. An east/west traffic flow exists when instances within a tenant network, or across tenant networks, communicate with one another. Traffic from tenant networks to external networks requires layer 3 (unless provider networks are used), which means routing through the Neutron l3-agent. Traffic within a tenant network stays at layer 2 and is handled by the Neutron l2-agent.

Network Architectures

OpenStack Neutron offers a vast choice of networking architectures. Out of the box, the Neutron OVS reference architecture or Nova networking can be configured. By integrating with 3rd-party SDNs (software-defined networks), the l3-agent within Neutron is replaced by the SDN. Using provider networks also bypasses the network overlay and VXLAN or GRE encapsulation.

High Availability

In OpenStack you will deploy either the Neutron reference architecture or an external SDN. The Neutron reference architecture uses HAProxy to provide HA for the l3-agent running on the OpenStack controllers. This of course creates a scalability bottleneck, since all routed traffic needs to go through the l3-agent and it runs on the controllers. I have seen the Neutron reference architecture run into performance issues around 1000 instances, but this can vary depending on workload.
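You can check where the l3-agent is running and whether it is alive with the Neutron CLI, for example (router1 is the router created in the floating IP example below):

# List all Neutron agents, their hosts and their alive status
neutron agent-list
# Show which node is hosting the l3-agent for a given router
neutron l3-agent-list-hosting-router router1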

Scalability

As mentioned, the l3-agent in Neutron can become a bottleneck. To address this you have two options: DVR (Distributed Virtual Routing) or a 3rd-party SDN. DVR allows the l3-agent to run on compute nodes, which of course scales a lot better; however, it is not supported in all OpenStack distributions and can be very challenging to troubleshoot. The best option to scale the network beyond 1000 instances is a 3rd-party SDN. Neutron still acts as the abstraction in front of the SDN, but you won’t need the l3-agent; the SDN handles this with a more scalable solution. Using an SDN is also, in my opinion, a cleaner approach and allows network teams to maintain network control as they did in the pre-OpenStack era. In the future Open vSwitch should get its own SDN controller to be able to offload layer 3 traffic, but this is not quite ready at this time.
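For reference, a minimal sketch of the configuration knobs involved in enabling DVR; check your distribution’s documentation, as the exact steps differ per release:

# /etc/neutron/neutron.conf on controllers: create new routers as distributed
router_distributed = True

# /etc/neutron/l3_agent.ini on compute nodes
agent_mode = dvr

# /etc/neutron/l3_agent.ini on network/controller nodes (handles SNAT)
agent_mode = dvr_snat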

Examples

Below we will look at configuring OpenStack to use provider network and Floating IP network.

Prerequisites

Below are some prerequisites you need to implement within your OpenStack environment.

Get CentOS 7.2 Cloud Image.

curl -O http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
glance image-create --name centos72 --visibility public --disk-format qcow2 --container-format bare --file CentOS-7-x86_64-GenericCloud.qcow2

Create Security Group.

# nova secgroup-create all "Allow all tcp ports"
# nova secgroup-add-rule all TCP 1 65535 0.0.0.0/0
# nova secgroup-add-rule all ICMP -1 -1 0.0.0.0/0

Create a private ssh key for connecting to instances remotely.

# nova keypair-add admin

Create admin.pem file and add private key from output of keypair-add command.

# vi /root/admin.pem
# chmod 400 /root/admin.pem

Example: Provider Network

# neutron net-create external_network --shared --provider:network_type flat --provider:physical_network extnet

Configure Provider Network Subnet.

# neutron subnet-create --name public_subnet --allocation-pool=start=192.168.0.200,end=192.168.0.250 --gateway=192.168.0.1 external_network 192.168.0.0/24 --dns-nameserver 8.8.8.8

Enable the isolated metadata server. The metadata server is used for injecting cloud-init data and is part of the instance bootstrapping process. The other option is to set up an iptables route from the metadata server to the gateway of the provider network.

# vi /etc/neutron/dhcp_agent.ini
enable_isolated_metadata = True
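After changing dhcp_agent.ini the DHCP agent has to be restarted for the setting to take effect; on a RHEL-based OpenStack install the service name is typically:

# systemctl restart neutron-dhcp-agent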

Get the Network Id

# neutron net-list
+--------------------------------------+------------------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------------------+-----------------------------------------------------+
| 459f477d-4c67-4800-ad07-adb9b096caf5 | external_network | 84c5e031-ed09-4ec0-86e4-609b27e21efb 192.168.0.0/24 |
+--------------------------------------+------------------+-----------------------------------------------------+

Start instance on the provider network

# nova boot --flavor m1.medium --image "centos72" --nic net-id=459f477d-4c67-4800-ad07-adb9b096caf5 --key-name admin --security-groups all mycentos

Connect to the mycentos instance using the private SSH key stored in the admin.pem file. Note: on a provider network the instance gets its address directly from the allocation pool (192.168.0.200-250 in this example); there is no floating IP. The default user for the CentOS cloud image is centos.

# ssh -i admin.pem centos@<INSTANCE IP>

Example: Floating-ip Network

# neutron net-create private
# neutron subnet-create private 10.10.1.0/24 --name private_subnet --allocation-pool start=10.10.1.100,end=10.10.1.200

Create public network. Note: these steps assume the physical network connected to eth0 is 192.168.122.0/24.

# neutron net-create public --router:external
# neutron subnet-create public 192.168.122.0/24 --name public_subnet --allocation-pool start=192.168.122.100,end=192.168.122.200 --disable-dhcp --gateway 192.168.122.1

Add a new router and configure router interfaces.

# neutron router-create router1 --ha False
# neutron router-gateway-set router1 public
# neutron router-interface-add router1 private_subnet

List the network IDs.

# neutron net-list
 +--------------------------------------+---------+-------------------------------------------------------+
 | id | name | subnets |
 +--------------------------------------+---------+-------------------------------------------------------+
 | d4f3ed19-8be4-4d56-9f95-cfbac9fdf670 | private | 92d82f53-6e0b-4eef-b8b9-cae32cf40457 10.10.1.0/24     |
 | 37c024d6-8108-468c-bc25-1748db7f5e8f | public  | 22f2e901-186f-4041-ad93-f7b5ccc30a81 192.168.122.0/24 |

Start an instance on the private tenant network. Note: the net-id is the private network ID from the list above.

# nova boot --flavor m1.medium --image "centos72" --nic net-id=d4f3ed19-8be4-4d56-9f95-cfbac9fdf670 --key-name admin --security-groups all mycentos

Create a floating IP from the public pool and assign it to the mycentos instance.

# nova floating-ip-create public
# nova floating-ip-associate mycentos <FLOATING IP>

Connect to the mycentos instance using the private SSH key stored in the admin.pem file and the floating IP assigned in the previous step. The default user for the CentOS cloud image is centos.

# ssh -i admin.pem centos@<FLOATING IP>

Summary

In this article we looked at basic network concepts and applied them to OpenStack. We saw various network implementations such as provider networks and floating IP networks. Finally we implemented these networks in an OpenStack environment. I hope you found this article useful. I think the most challenging aspect of OpenStack is networking. If you have material or additional information please share.

Happy OpenStacking!

(c) 2016 Keith Tenzer