Bobbycar Part 1: Building a cloud-native IoT architecture with modern Red Hat technologies

June 28, 2021

Introduction

In this article series, we are going to build a sample cloud-native IoT architecture called Bobbycar, based on Red Hat OpenShift Container Platform and a number of other relevant technologies. This first part provides you with the necessary background on IoT, edge, and cloud computing, and briefly describes the architecture and components that we are going to build.

Why do we need new architectures?

Let us start by looking back at the industrial (r)evolution. Even though nobody laid out an agenda several hundred years ago, we can see today how mass production and standardization were established with Industry 1.0 and 2.0, followed by Industry 3.0 with automation and computer technology.

We are currently in the middle of the fourth industrial revolution, which is driven by information. Almost every branch of production is changing in the course of the massive digitization of processes and products. The resulting rapid growth in data volumes, and the computing capacity needed to process them, has driven a shift in software architectures toward cloud-native / 12-factor applications, and the massive expansion of networking through powerful 5G infrastructures will further distribute computing capacity away from cloud regions to the edges of the networks.

Probably the most important driver can be named specifically: the explosive increase in intelligent devices and their networking. Known as the Internet of Things (IoT), it is changing all traditional industries, from automotive to insurance, energy, and healthcare. There are already over 8 billion IoT end devices, and an analysis by Gartner predicts that the number of IoT endpoints installed worldwide will exceed 9 billion by 2021. For a long time, the performance of these devices was heavily dependent on their network connection, but with 5G, an optimized and scalable technology is now available to carry out extensive data acquisition and real-time analyses, and to use the results for automated decision-making.

While cloud computing and IoT were often viewed separately in the past few years, it is now clear that complex systems can only be implemented with a seamless architectural approach that combines both technologies and enables the enormous amounts of data to be processed.

Edge and cloud computing

The successful implementation of an IoT architecture must therefore combine or integrate the edge computing approach with that of cloud computing in a suitable manner.

Cloud computing describes the approach of providing IT infrastructure such as computing capacity, storage, and connectivity, mostly centrally, via programmatically consumable interfaces. The self-service concept, a high level of scalability, and a usage-based pricing model are further decisive features. To exploit the technical capabilities of the cloud on the application side, microservice or message-driven architectures are often employed.

Edge computing describes the approach to data processing at the location where the data is generated – that is, decentralized, at the edge of the network, for example on the sensors or gateways themselves. Only relevant data or aggregated interim results are then used for further central processing. A huge amount of data is generated at the edge of the networks, and there is a need to transfer this from multiple production sites to local data centers and various cloud environments in order to use AI or ML technologies in a centralized manner and gain data-driven insights.

Challenges and Demand

Such an IoT / cloud / edge architecture is overwhelming at first glance in its heterogeneity: a wide variety of device types, different partial architectures and protocols, various physical locations, different workload characteristics, multi- and hybrid-cloud environments, machine learning, data science, security across the entire stack, monitoring, and so on.

There is therefore no single technology, product, or architectural concept that guarantees a successful implementation. The aim is to find a modular, scalable architecture that supports adding or removing functions as needed and already covers as many requirements and use cases as possible.

Standard IoT architecture

There are various reference models and descriptions of IoT architectures. Depending on the level of detail, they have four to seven layers. An overview of currently widespread models can also be found at

https://www.iaas.uni-stuttgart.de/publications/INBOOK-2018-01-A-Detailed-Analysis-of-IoT-Platform-Architectures-Concepts-Similarities-and-Differences.pdf

For a basic understanding we only consider a simplified model with four layers:

Perception Layer: IoT devices / things that are equipped with sensors. Sensors collect the data and transmit it over a network, and actuators carry out actions.

Connectivity Layer: Transmits the data from the IoT devices to the next higher layer. This can happen via cellular networks (2G / 3G / 5G), or via WiFi, ZigBee, Bluetooth, or other industrial protocols.

IoT Platform Layer: A middleware-like layer. It processes, distributes, and stores the data coming from the connectivity layer. Technologies such as Apache Kafka, traditional messaging systems, time series databases (TSDB), big data, and stream processing are usually used at this level.

Application Layer: The technical applications are managed on this level. There are various IoT applications that differ in complexity and functionality and use different technology stacks and operating systems. Some examples are device monitoring and control software, mobile apps, business intelligence services, and other solutions that implement machine learning. These applications are based directly on the IoT platform.

The criteria that a specific IoT solution should or must meet determine the choice of suitable technologies and IoT platforms. The starting point should always be an evaluation of the functional and non-functional requirements in the context of a catalog of criteria.

As an example, the following list can serve as a first approach:

  • Functionality and scope of functions
  • Performance
  • Scalability
  • Operability
  • Maintainability
  • Connectivity, supported protocols
  • Device types and classes
  • Security (across the entire stack)
  • Portability (of applications and middleware components)
  • Cloud capability 
  • Interfaces and APIs
  • Automation
  • Fault tolerance and reliability
  • Observability
  • Interoperability
  • Governance 
  • Data protection

Since a large part of the applications of an IoT solution are usually operated in different cloud environments, the requirements outlined above often lead to the need for a secure and powerful hybrid cloud infrastructure that supports end-to-end data processing.

Cloud-native IoT platform

It is a great advantage to establish a uniform platform with consistent development and operational experience, if possible across all cloud environments. Such a hybrid cloud-native IoT platform already covers many of the requirements mentioned, such as automated scale-up and scale-down of computing and storage capacity in accordance with policies. 

The portability of applications is very important. Often, applications are developed centrally and then have to be rolled out into the various cloud environments down to the edge components. 

Furthermore, the platform must be able to cope with sudden changes in the amount of IoT data generated without negatively affecting the entire system in any way. 

From a technological point of view, container technology and Kubernetes as a container orchestration framework have also increasingly proven to be the right choice in an IoT and edge environment. 

The cloud-native approach is also very well suited to creating scalable, cost-efficient, and reliable IoT solutions.

In contrast to traditional applications, which you simply deploy and operate in the cloud, cloud-native applications are developed to take advantage of cloud computing. 

Cloud-native is an agile, conceptual method for developing and operating applications completely in the cloud and for the cloud. Cloud-native is about how applications are created and deployed, not where. Cloud-native applications therefore usually have the following things in common:

Containers

Containers are a packaging format for applications including their runtime environments. In contrast to virtual machines, containers do not have their own operating system, but share the kernel of the operating system on which they are installed. Containers run completely isolated from each other and thus provide virtualization at the application level. Containers are lightweight, start very quickly, can be scaled very well and, in particular, ported to other environments.

Microservices

Containers and microservices often go hand in hand. With a microservices architecture, an application is created in the form of independent components or modules that can be executed, scaled and managed as independent services. The services communicate with one another via well-defined interfaces and APIs. The services are usually developed by a dedicated agile team based on a technical domain or function. Domain Driven Design (DDD) is often used as the preferred approach.

DevOps & Continuous Delivery

To really take advantage of the potential of cloud-native applications, you must rethink not only the way applications are built, but also the way the company and its teams are organized. To put it simply, DevOps combines the traditionally isolated roles of development and operations.

In practice, DevOps has many forms of optimized processes between Dev and Ops, improved communication, expanded responsibilities and interdisciplinary and fully self-organizing teams. Oftentimes, a DevOps mindset means implementing small, frequent updates that allow you to respond more quickly to customer needs and fix bugs more quickly.

Continuous delivery is also a core component of DevOps: teams can examine and understand the effects of every change in detail and are able to roll out a stable state of the software at any time.

Let’s build Bobbycar, our demo IoT architecture

Now, what might an IoT solution, or a subset of one, actually look like? We will explore that in more detail using an exemplary demo implementation, Bobbycar, which we'll build together over the course of this article series.

Bobbycar is a distributed cloud-native application that implements key aspects of a modern IoT architecture in an exemplary manner. The demo is based on Red Hat's Kubernetes distribution, Red Hat OpenShift Container Platform, and uses various middleware components optimized for cloud-native usage.

From a technical point of view, there are two core concepts: Bobbycars and Bobbycar zones. 

Bobbycars:

Bobbycars are vehicle simulators implemented in Quarkus (cloud-native Java stack) that simulate vehicles (connected cars) and send telemetry data to a regional IoT cloud backend. In this demo they represent the vehicle edge.

Bobbycar Zone:

A Bobbycar Zone represents a location-based configuration, e.g. an environmental zone for which a maximum CO2 emission has been defined, or a listing of the various mobility services that are available at that location. Bobbycar zones are implemented as a Kubernetes custom resource.
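As a rough illustration, such a zone could be described declaratively like this. Note that the API group, version, and field names below are illustrative assumptions, not the demo's actual CRD schema:

```yaml
# Hypothetical shape of a Bobbycar Zone custom resource.
apiVersion: bobbycar.example.com/v1alpha1
kind: BobbycarZone
metadata:
  name: zone-frankfurt-city
spec:
  position:            # center of the zone
    lat: 50.1109
    lon: 8.6821
  radius: 2.5          # zone radius in km
  maxCO2: 120          # emission limit inside the zone, g/km
  services:            # mobility services offered in this zone
    - car-sharing
    - ev-charging
```

Modeling zones as custom resources means they can be created, versioned, and reconciled with the same declarative tooling as any other Kubernetes object.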

For each simulated vehicle, a route is selected at random from a pool of routes at the start. Driving this route from start to end is simulated, and the current position as well as current telemetry data such as speed, RPM, and CO2 emissions are sent to a regional IoT cloud backend infrastructure. This is done by streaming all sensor data from the vehicles via MQTT to local Kafka clusters.
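The simulation loop just described can be sketched in a few lines. The real Bobbycar simulator is implemented in Quarkus (Java); the routes, field names, and value ranges below are illustrative assumptions:

```python
import json
import random

# Minimal sketch of the vehicle simulation loop: pick a random route,
# then emit one telemetry message per waypoint. Routes and value ranges
# are made up for illustration.
ROUTES = {
    "route-a": [(50.1109, 8.6821), (50.1120, 8.6850), (50.1142, 8.6903)],
    "route-b": [(48.1351, 11.5820), (48.1372, 11.5755), (48.1400, 11.5690)],
}

def drive(vehicle_id: str, rng: random.Random) -> list:
    """Simulate driving one randomly chosen route, returning telemetry as JSON strings."""
    route = rng.choice(sorted(ROUTES))
    messages = []
    for lat, lon in ROUTES[route]:
        telemetry = {
            "vehicleId": vehicle_id,
            "lat": lat,
            "lon": lon,
            "speed": round(rng.uniform(0, 130), 1),  # km/h
            "rpm": rng.randint(800, 4500),
            "co2": round(rng.uniform(95, 180), 1),   # g/km
        }
        # In the real architecture this message would be published via MQTT.
        messages.append(json.dumps(telemetry))
    return messages

if __name__ == "__main__":
    for msg in drive("bobbycar-1", random.Random(42)):
        print(msg)
```

Each emitted message is self-describing JSON, which keeps the downstream MQTT-to-Kafka integration format-agnostic.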

Apache Kafka is the central system for all incoming data. The incoming data is made available to a real-time dashboard for visualization via WebSocket and also updates a distributed in-memory cache, so the current status of the entire IoT system can be retrieved from the cache at any time.

To integrate all components, the cloud-native integration framework Apache Camel-K is used. Specifically, the integration of MQTT with Kafka, the integration of Kafka with the cache, as well as the WebSocket endpoints and the cache REST API, are implemented with Camel-K.

When vehicles enter or leave a zone, a zone change event is triggered. This event is made available to the respective vehicle as an MQTT message, and it is also used in the form of a cloud event to spin up serverless services and functions.

The updated zone configuration is not pushed into the vehicles, the vehicles receive the current configuration via the cache API after a zone change event.
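A zone change event like the one described above could be wrapped in a CloudEvents 1.0 envelope roughly as follows. The event type, source URI, and payload fields are illustrative assumptions, not Bobbycar's actual schema:

```python
import json
import uuid
from datetime import datetime, timezone

# Sketch of a zone-change event in a CloudEvents 1.0 envelope.
# Type, source, and data fields are illustrative assumptions.
def zone_change_event(vehicle_id: str, zone: str, entering: bool) -> dict:
    return {
        "specversion": "1.0",
        "type": "com.example.bobbycar.zonechange",
        "source": f"/zones/{zone}",
        "id": str(uuid.uuid4()),
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": {
            "vehicleId": vehicle_id,
            "zone": zone,
            "event": "ENTER" if entering else "LEAVE",
        },
    }

if __name__ == "__main__":
    print(json.dumps(zone_change_event("bobbycar-1", "zone-frankfurt-city", True), indent=2))
```

Using the CloudEvents format is what allows the same event to be consumed both by the vehicle (via MQTT) and by serverless functions triggered through Knative eventing.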

Kafka Mirror Maker transfers the incoming data from the regional Kafka clusters to the Kafka clusters in the central IoT cloud in order to persist an aggregated status of all locations and to enable stream analytics for example.

Furthermore, the relevant data from the regional Kafka clusters are stored in an S3 compatible data lake in the central IoT cloud, and are then used for machine learning.

Central components

In the next step, we would like to take a closer look at the central components of the Bobbycar IoT architecture and their interaction.

Apache Kafka as the central streaming platform

Apache Kafka is a real-time streaming platform that has found wide acceptance among both large and small companies. Kafka's distributed architecture and pub/sub model are ideal for transferring real-time data between various systems and applications. On GitHub, Kafka is one of the most popular Apache projects and, without a doubt, Kafka has changed the way companies move data in their clouds and data centers.

Kafka has been optimized to stream data between systems and applications as quickly as possible, and in particular in a scalable way. There are productive Kafka cluster environments that process more than 15 million messages per second at a throughput rate of over one terabit per second.

In contrast to traditional messaging systems, a lot of the intelligence lies in the respective Kafka clients rather than in the broker itself. The clients are tightly coupled to the Kafka cluster and must, for example, know the addresses of the Kafka brokers and have direct access to all broker nodes. Kafka is therefore well suited to communication between systems within a trustworthy network with a stable infrastructure, stable IP addresses, and stable connections.

How to use Kafka in IoT solutions?

The Kafka architecture is not really suitable for IoT use cases in which thousands or even millions of devices are connected directly to the cloud via the Internet. This also applies to this demo scenario, in which potentially millions of Bobbycars transfer their data to the cloud.

Some reasons why Kafka is not well suited for these kinds of IoT use cases:

  • Kafka lacks important IoT functions such as Keep Alive, Last Will and Testament. These features are important in creating a resilient IoT solution that can handle devices experiencing an unexpected loss of connectivity.
  • Kafka brokers must be directly reachable by the clients, which means that each client must be able to connect directly to each of the Kafka broker nodes. Making all Kafka brokers accessible from the Internet or putting a load balancer in front of them is therefore not a good strategy.
  • Kafka doesn’t support large numbers of topics. When connecting millions of IoT devices via the public Internet, unique per-device topics are often used, frequently with a device identifier in the topic name, so that access by individual clients can be restricted.
  • Kafka clients are relatively complex internally, require a stable TCP connection and are optimized for throughput. In IoT environments, however, you often have to deal with unreliable networks and infrastructures.

MQTT for device connectivity

Despite the criteria listed, Kafka is still an essential component of the Bobbycar IoT architecture, because Kafka is optimally suited to processing large amounts of real-time data. The question is: how do we connect the IoT data and devices to the Kafka clusters?

This is where another pub/sub protocol often comes into the picture: MQTT.

MQTT is a lightweight protocol that requires a small client footprint on the devices. It supports millions of connections, even over unreliable networks, and works seamlessly in high-latency, low-throughput environments. It has become the de-facto standard for connectivity between IoT devices.

Among other things, it includes the necessary IoT functions such as keep-alive, last-will and testament, three quality of service levels for reliable messaging, as well as client-side load balancing (shared subscriptions) for public Internet communication. Topics are dynamic, which means that a large number of MQTT topics, often millions, can exist in MQTT clusters.
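The dynamic, hierarchical nature of MQTT topics is easiest to see in the wildcard matching rules: `+` matches exactly one topic level, `#` matches all remaining levels. A simplified sketch of that matching logic (the topic names are illustrative assumptions):

```python
# Simplified MQTT topic matching: '+' matches one level, '#' the rest.
def topic_matches(pattern: str, topic: str) -> bool:
    p_parts = pattern.split("/")
    t_parts = topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":                      # multi-level wildcard: matches everything below
            return True
        if i >= len(t_parts):
            return False
        if p != "+" and p != t_parts[i]:  # '+' matches any single level
            return False
    return len(p_parts) == len(t_parts)

if __name__ == "__main__":
    print(topic_matches("bobbycar/+/telemetry", "bobbycar/car-42/telemetry"))  # True
    print(topic_matches("bobbycar/#", "bobbycar/car-42/gps/position"))         # True
    print(topic_matches("bobbycar/+/telemetry", "bobbycar/car-42/gps"))        # False
```

This is why per-device topics scale naturally in MQTT: a backend subscribes once to `bobbycar/+/telemetry` instead of enumerating millions of device topics.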

MQTT and Kafka were developed for different requirements, but they work very well together. So the question is how we can integrate the two systems appropriately.

The integration of IoT devices, or, as in this case, connected vehicles, into a local Kafka cluster is usually a stand-alone project, as there are IoT-specific challenges to be considered. The integration can be implemented with various technologies, for example with Kafka Connect, MQTT proxies, REST proxy based HTTP communication or even low-level clients in C. All of these components can be integrated very well with Kafka.

In our Bobbycar architecture, the cloud-native integration framework Apache Camel-K is used to integrate MQTT with Kafka, and an HTTP(S)-based Kafka bridge is used to integrate the position / GPS data.
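Sending GPS data over such an HTTP(S) bridge boils down to POSTing a JSON batch of records to a topic endpoint. The sketch below follows the `{"records": [...]}` payload convention used by HTTP bridges such as Strimzi's Kafka Bridge; the bridge URL and topic name are assumptions:

```python
import json
import urllib.request

# Hypothetical bridge endpoint: POST to /topics/<topic-name>.
BRIDGE_URL = "http://kafka-bridge.example.com/topics/bobbycar-gps"

def build_request(vehicle_id: str, lat: float, lon: float) -> urllib.request.Request:
    """Build an HTTP request that publishes one GPS record via the bridge."""
    payload = {"records": [{"key": vehicle_id, "value": {"lat": lat, "lon": lon}}]}
    return urllib.request.Request(
        BRIDGE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("bobbycar-1", 50.1109, 8.6821)
    print(req.full_url, req.get_method())
    # urllib.request.urlopen(req) would actually send the record
```

The appeal for constrained or firewalled clients is that plain HTTPS is all they need; the bridge takes care of the stateful Kafka protocol on their behalf.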

Cloud-native integration with Apache Camel-K

The Apache Camel framework is a powerful but very lightweight and versatile open source framework based on the well-known Enterprise Integration Patterns (EIP).

Apache Camel-K is a lightweight cloud integration platform based on the Apache Camel framework. It runs natively on Kubernetes and OpenShift and was specially developed for serverless and microservice architectures.

It is based on the Kubernetes “operator pattern” and uses the Operator SDK to perform operations on Kubernetes resources. The runtime environment is JVM-based and can use the more than 300 components and connectors already available in Apache Camel. The development approach with Camel-K is minimalistic: you write only a single file with the respective integration logic and can run it immediately on any Kubernetes cluster. This approach is very similar to many FaaS platforms.
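Such a single-file integration could look roughly like the following Camel YAML DSL sketch, bridging vehicle telemetry from MQTT to Kafka. The endpoint URIs, broker addresses, and topic names are assumptions, not the demo's actual configuration:

```yaml
# telemetry.yaml — hypothetical single-file Camel-K integration.
# Deployed with: kamel run telemetry.yaml
- from:
    uri: "paho:bobbycar/+/telemetry?brokerUrl=tcp://mqtt-broker:1883"
    steps:
      - log: "received: ${body}"
      - to: "kafka:bobbycar-telemetry?brokers=kafka-cluster:9092"
```

The Camel-K operator builds and deploys the container for this route itself; the developer never writes a Dockerfile or deployment manifest.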

Serverless is also an important area that Camel-K is targeting. For example, the Serverless framework contained in the OpenShift platform, based on Knative, can be used to automatically scale these cloud-native integration components based on the actual load, or even to scale them to zero.

In this demo architecture, all integration components were therefore implemented on the basis of Apache Camel-K.

Distributed in-memory cache

IoT backend applications must be able to ingest a lot of data from IoT devices, process it in real time, and make appropriate decisions and take actions. MQTT in combination with Kafka is very suitable for this. As a rule, the data considered relevant must also be permanently available for further processing by sometimes hundreds of specific applications, warehouse solutions, batch jobs, and so on.

This is only possible if the data pipelines in the backend have no bottlenecks. Traditional databases often cannot handle such transaction volumes, so a scalable distributed in-memory cache is ideal for these requirements.

The advantages are lower network utilization, faster response times, higher performance and higher availability. 

In the Bobbycar demo described here, the telemetry data of all vehicles is continuously updated in the in-memory cache, so that the current status of the entire system can be retrieved at any time. The current zone configurations are also kept centrally in the cache and made available via an API.
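The cache logic itself is simple: each incoming telemetry message overwrites that vehicle's entry, so the cache always holds the latest known state. A minimal single-process sketch (the real demo uses a distributed cache, and the message fields are illustrative assumptions):

```python
import json
import threading

# Single-process sketch of the "latest state per vehicle" cache.
class TelemetryCache:
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}

    def update(self, message: str) -> None:
        """Consume one telemetry message (JSON) and upsert the vehicle's state."""
        telemetry = json.loads(message)
        with self._lock:
            self._entries[telemetry["vehicleId"]] = telemetry

    def current_state(self) -> dict:
        """Return a snapshot of the latest state of every known vehicle."""
        with self._lock:
            return dict(self._entries)

if __name__ == "__main__":
    cache = TelemetryCache()
    cache.update('{"vehicleId": "bobbycar-1", "speed": 87.5}')
    cache.update('{"vehicleId": "bobbycar-1", "speed": 92.0}')
    print(cache.current_state())  # the later update wins
```

In the demo, a Camel-K route plays the role of `update()` by consuming from Kafka, and the cache API exposes the equivalent of `current_state()` over REST.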

Summary

The implementation of an IoT solution requires the know-how and expertise of many specialist and technical domains. It makes sense to start with the evaluation of suitable IoT architecture components.

An open, scalable, secure, and powerful hybrid cloud-native platform that can be used in all cloud environments and operated centrally is a good basis for this. It is also important to note that when you take the path to the cloud-native world, you should take a close look at your internal organizational and team structures and adjust them accordingly. For a successful implementation, the organizational structures in a company and the technologies must match and harmonize.

Technologies alone do not solve problems. People do!

With the example of Bobbycar, we want to show how a large number of networked vehicles with a high data volume can be connected to an IoT cloud, and which components are suitable and can be used in this context.

All central components such as MQTT, Kafka, Camel-K etc. are containerized and optimized for cloud-native purposes, and can be installed and operated in a declarative way. They can also be deployed into various cloud environments using a GitOps approach.

Outlook on Part 2

In the next part of the series, we'll concentrate on building our “Bobbycar”, the cloud-native vehicle simulator, implemented with Quarkus, the cloud-native optimized Java stack.

So, please stay tuned…

Co-authored by Markus Eisele, Developer Adoption Lead, Red Hat EMEA
