So far, this sounds like a lot of effort to achieve little more than a plain Docker host: containers that can talk to each other and to the host network, potentially segregated by Kubernetes namespace. However, OpenShift SDN also allows pods on different nodes to communicate with each other.
To this end, it establishes VXLAN tunnels to the other OpenShift nodes. VXLAN tunnels all layer 2 traffic over IP via UDP port 4789. The vxlan0 device is connected to the br0 OVS bridge and from there can reach all pods and containers on the same node. Where the multitenant SDN plugin uses OVS flow keys to segregate network traffic on br0, it uses VXLAN virtual network IDs (VNIs) to separate traffic on the wire.
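To make the encapsulation concrete, here is a minimal sketch using Python's scapy library; the MAC and IP addresses and the VNI of 42 are arbitrary placeholders, not OpenShift defaults. It constructs a frame as it would appear on the wire between two nodes: an inner layer 2 frame from br0 wrapped in an outer IP/UDP packet on port 4789 plus a VXLAN header carrying the VNI.

```python
# Sketch of VXLAN encapsulation with scapy (pip install scapy).
# Addresses and the VNI are illustrative placeholders.
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

# Inner frame: pod-to-pod layer 2 traffic as it leaves br0.
inner = Ether(src="0a:58:0a:01:01:02", dst="0a:58:0a:01:02:02") / \
        IP(src="10.1.1.2", dst="10.1.2.2")

# Outer frame: node-to-node transport. The VXLAN header carries the
# virtual network ID (VNI) that the multitenant plugin uses to keep
# tenants separated on the wire.
outer = IP(src="192.168.0.10", dst="192.168.0.11") / \
        UDP(sport=54321, dport=4789) / \
        VXLAN(vni=42) / \
        inner

outer.show()  # prints the nested layers: IP / UDP / VXLAN / Ether / IP
```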
This capability does not extend to plain Docker containers, i.e. they cannot communicate with pods or other plain Docker containers on another node. This means plain Docker containers are limited to communicating with the containers and pods running on the same node, as well as with any host connected to the physical network(s).
Inter-node networking therefore adds the following flow:
Between pods on different nodes: PodA eth0 → vethXXXX → (OVS) br0 → vxlan0 (L3 encapsulation) → (tunnel via host network) → vxlan0 (L3 decapsulation) → br0 → vethYYYY → PodB eth0
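On the receiving node the outer headers are stripped, and the VNI decides whether the inner frame may be delivered at all. OpenShift implements this check in OVS flow rules on br0; the following Python sketch is purely illustrative (the helper name inner_frame_for_tenant and all values are hypothetical) and only demonstrates the principle behind the VNI-based isolation.

```python
# Receive-side sketch: strip the VXLAN wrapper and enforce the VNI.
# Purely illustrative; OpenShift does this in OVS flow rules, not Python.
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

def inner_frame_for_tenant(pkt, expected_vni):
    """Return the encapsulated layer 2 frame if the VNI matches, else None."""
    if VXLAN not in pkt:
        return None
    if pkt[VXLAN].vni != expected_vni:
        # Frames tagged with another tenant's VNI are dropped; this is
        # what turns the VNI into an isolation boundary on the wire.
        return None
    return pkt[VXLAN].payload  # the inner Ether()/IP() frame for br0

# A frame as it would arrive on UDP port 4789 from another node.
wire = IP(src="192.168.0.10", dst="192.168.0.11") / UDP(dport=4789) / \
       VXLAN(vni=42) / \
       Ether() / IP(src="10.1.1.2", dst="10.1.2.2")

assert inner_frame_for_tenant(wire, expected_vni=42)[IP].dst == "10.1.2.2"
assert inner_frame_for_tenant(wire, expected_vni=7) is None
```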
See also: OpenShift SDN Networking: https://docs.openshift.com/enterprise/3.1/admin_guide/sdn_troubleshooting.html#sdn-flows-inside-a-node