How to secure microservice applications with role-based access control? (7/7)

May 15, 2023

Photo Source: Darrel Und

Option: Service Mesh

We have already introduced many different options for securing microservice applications based on roles (RBAC). In this last part of the series, we will explore how to use a Service Mesh for RBAC. For the implementation of the Service Mesh, we are using the open source project Istio.

In Part 1, we provided the context for the whole blog series. We recommend you read it first; otherwise, you will miss out on the context.

You will find below an overview of the content of this blog series. Just click on a link to jump directly to the respective part:

Blog Part | Implementation Option | Description
(2/7) | HTTP Query Param | This is the most basic module, where the “role” is transferred as an HTTP query parameter. The server validates the role programmatically.
(3/7) | Basic Authentication | A user agent uses Basic Authentication to transfer credentials.
(4/7) | JSON Web Token (JWT) | A JSON Web Token (JWT) codifies claims that are granted and can be objectively validated by the receiver.
(5/7) | OpenID and Keycloak | For further standardization, OpenID Connect is used as an identity layer. Keycloak acts as an intermediary to issue a JWT token.
(6/7) | Proxied API Gateway (3Scale) | ServiceB uses a proxied gateway (3Scale) which is responsible for enforcing RBAC. This is useful for legacy applications that can’t be enabled for OIDC.
This blog (7/7) | Service Mesh | All services are managed by a Service Mesh. The JWT is created outside and enforced by the Service Mesh.

What do we want to achieve in this blog part?

Let’s first explain what a Service Mesh is:

A Service Mesh can be used for many use cases, e.g. application-wide tracing, advanced deployment strategies, dark launches, etc. Role-based access control alone would not justify the use of a Service Mesh, but it can be applied as an additional benefit. If you want to explore the full potential of a Service Mesh, please check out the Istio tutorial on the Red Hat Scholars page.

From an architecture point of view, a Service Mesh injects a so-called side-car component which then takes care of the use cases listed above. These side-car components communicate with a central control plane, receive instructions and report back data. If deployed in a Kubernetes environment, the side-car component runs as an additional container in the same pod as the application component.
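As a sketch of what this looks like in Kubernetes terms: with Istio, side-car injection is typically triggered by an annotation on the pod template. The workload name, labels and image below are made up for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicea                             # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: servicea
  template:
    metadata:
      labels:
        app: servicea
      annotations:
        sidecar.istio.io/inject: "true"      # asks the mesh to inject the istio-proxy side-car
    spec:
      containers:
      - name: servicea
        image: quay.io/example/servicea:latest   # placeholder image
```

With this annotation in place, the control plane injects the istio-proxy container next to the application container at deployment time.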

For this implementation, we are using the Service Mesh component of OpenShift. This Service Mesh component is installed via an Operator, so we need an OpenShift cluster where an Operator can be installed. The “Developer Sandbox” that we used in the previous post doesn’t allow this flexibility. Thus, we need to either provision a Managed OpenShift cluster or install an OpenShift cluster on our own infrastructure (self-managed).


We are using the following tools:

  • Maven: 3.8.6
  • (optional): any IDE (e.g. VS Codium)
  • Red Hat OpenShift: 4.12
  • Red Hat OpenShift Service Mesh: 2.3

Code Base:

You can either:

  • continue from the previous blog and clean up:
    • remove all configuration settings that are related to JWT and OIDC
    • remove the “oidc” extension from ServiceA
  • or clone the code base from here to have a clean start


We will explain step-by-step how you can achieve multi-service RBAC with a Service Mesh. If you are only interested in the end result, you can clone it from git here.

Setting up Service Mesh on OpenShift

  1. Go to the OpenShift Web Console and make sure to be in the “Administrator view”
  2. There, you should see a menu item “Operators”. Click on “Installed Operators”

  3. There are no operators installed yet that are required by the Service Mesh. These prerequisite operators will be installed now:
    • OpenShift Elasticsearch Operator
    • Red Hat OpenShift distributed tracing platform
    • Kiali Operator

      For each of them, go to the “OperatorHub” (in the menu “Operators”) and search for the name, e.g. “Elasticsearch”. Make sure to choose the “Red Hat” version of the operator and click on “Install” and again “Install”

      After some time, all 3 operators should be successfully installed.

  4. Now, install the Service Mesh with the operator – exactly the same way as the prerequisite operators.

  5. Create a project to house the central ServiceMesh components:
    • Click in the menu “Home -> Projects”
    • Click on the blue button (top right corner) “Create project”
    • Call the project “istio-system” and click on “Create”

  6. Create the Service Mesh components:
    • Go to “Operators -> Installed Operators”
    • Click on the “Red Hat OpenShift Service Mesh”
    • Make sure that you are in the project “istio-system”
    • Switch to the tab “Istio Service Mesh Control Plane” and click on the blue button “Create ServiceMeshControlPlane”

    • Click “Create”
    • Click on the newly created “ServiceMeshControlPlane”. You can see that there are a lot of resources created and started.
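For reference, the control plane object created by the console looks roughly like the following sketch. The exact spec fields are assumptions for OpenShift Service Mesh 2.3; the defaults proposed by the web console are fine:

```yaml
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.3        # matches the Service Mesh version used in this blog
  tracing:
    type: Jaeger       # enables the Jaeger tracing we will use later
  addons:
    kiali:
      enabled: true    # enables the Kiali console
```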

Congratulations! You have successfully installed a Service Mesh on Red Hat OpenShift.

Deploying the services on OpenShift

Now, we need to also deploy ServiceA and ServiceB to OpenShift – with just a little tweak to make them part of the ServiceMesh.

  1. Create a project in OpenShift to house ServiceA and ServiceB

  2. We just need to specify that this project shall be managed by the Service Mesh – in other words, to include this project in the ServiceMesh Member Roll:
    • Switch to the “istio-system” project
    • Go to “Operators -> Installed Operators” and click on “Red Hat Openshift Service Mesh”
    • Switch to the tab “ServiceMesh Member Roll” and click the button “Create ServiceMesh Member Roll”
    • Expand the section “members” and enter the name of your project, e.g. “rbac-service-mesh”
    • Click on “Create”
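The resulting object could also be created directly via YAML. A minimal sketch (note that OpenShift Service Mesh expects the member roll to be named “default”; the member entry is the example project name used in this blog):

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default              # OpenShift Service Mesh expects exactly this name
  namespace: istio-system
spec:
  members:
  - rbac-service-mesh        # the project that hosts ServiceA and ServiceB
```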

  3. Now, we will deploy the services as usual to OpenShift:
    • Make sure that you have a connection to the OpenShift cluster and that you are pointing to the right project
    • Add the following settings to the application.properties file:

      For ServiceA:
      "org.acme.ExternalService".url=http://serviceb

      For ServiceB:

      Most of the settings we already know from previous posts. Some are new:
    • This is required if the cluster is working with self-signed certificates
    • quarkus.openshift.annotations.""=true: This is required in order to flag this component as part of the Service Mesh

      We can easily validate whether this has worked out by checking whether a side-car container has automatically been started in the same pod:
      • Go to “Workloads -> Pods”
      • Click on the ServiceA or ServiceB pod
      • Scroll down to the “Containers” section:

        As you can see, there are 2 containers:
        • serviceb
        • istio-proxy
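As an illustration, the settings for ServiceA might look like the following in application.properties. The exact property keys are assumptions based on the Quarkus OpenShift and REST client extensions used earlier in this series:

```properties
# REST client pointing ServiceA at ServiceB via its Kubernetes service name
quarkus.rest-client."org.acme.ExternalService".url=http://serviceb

# Trust self-signed cluster certificates (assumption: needed for this cluster)
quarkus.kubernetes-client.trust-certs=true

# Flag the component for side-car injection so it joins the Service Mesh
# (assumption: the annotation is Istio's standard sidecar.istio.io/inject)
quarkus.openshift.annotations."sidecar.istio.io/inject"=true
```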

  4. Adding a VirtualService and Gateway to our Service Mesh:
    In order to realize the Service Mesh flow, we need to add 2 objects that act as an ingress:
    • In the OpenShift Web Console click on the + sign (right top corner):
    • Copy & paste the following 2 Kubernetes resources into the editor and click “Create”

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: servicea-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: servicea-gateway
spec:
  hosts:
  - "*"
  gateways:
  - servicea-gateway
  http:
  - match:
    - uri:
        prefix: /servicea
    rewrite:
      uri: /
    route:
    - destination:
        host: servicea
        port:
          number: 80

If the creation was successful, you should see the following confirmation screen:

Testing the Service Mesh

Now, let’s test the Service Mesh:

  1. Accessing the Service Mesh Ingress:
    Maybe you have spotted that we have NOT exposed the services via OpenShift routes (as in the previous blog). The reason is that the Service Mesh works with its own ingress – the 2 objects that we have created above (Gateway and Virtual Service):
    • In the CLI enter:
      export GATEWAY_URL=$(kubectl get route istio-ingressgateway -n istio-system -o jsonpath='{.spec.host}')/servicea
    • Try to access the Service Mesh ingress:

      curl $GATEWAY_URL/serviceA/userEP

      This should bring back:
      I greet you because you are a user!

  2. (optional) Check out the tracing data:
    • In the OpenShift Web Console, switch to the project “istio-system”
    • In the “Administrator” perspective, go to “Networking -> Routes” view

    • Click on the “jaeger” route location
    • Login again with the OpenShift credentials and accept to give access
    • In the Jaeger GUI, select the Service “servicea.rbac-service-mesh” and “Find traces”

    • You get a nice overview about all the traces and can further drill down.

  3. (optional) You can also explore the other capabilities of the Service Mesh by opening the Kiali, Grafana or Prometheus GUI.

    Particularly, Kiali provides some nice visualization and statistics about the flow.

Activating RBAC for the Service Mesh

Now, we have deviated a bit from our original topic – RBAC. We want to enter our Service Mesh with a JWT and configure the Service Mesh to enforce certain role policies.

You might already have guessed how this will be accomplished. The code itself will not be touched. All policies and enforcements will happen via Kubernetes objects.

Currently, access works without any restrictions. Let’s now add a policy that requires a certain role to access endpoints of our services, e.g. the policy for userEP would be:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: rbac-policy-userep
  namespace: rbac-service-mesh
spec:
  selector:
    matchLabels:
      app: servicea         # label key must match ServiceA's pod labels
  action: ALLOW
  rules:
  - to:
    - operation:
        methods:
        - GET
        paths:
        - '*/userEP'
    when:
    - key: request.auth.claims[role]
      values:
      - customer


  • selector -> matchLabels -> servicea:
    We are only protecting the entry point (ServiceA).
  • We will use an existing JWT which contains the role “customer”; thus we are using this role as the access condition.
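Analogously, a policy for adminEP could look like the following sketch. The role name “admin” is an assumption; since our test JWT only contains the role “customer”, access to adminEP will stay forbidden:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: rbac-policy-adminep        # hypothetical name, mirroring rbac-policy-userep
  namespace: rbac-service-mesh
spec:
  selector:
    matchLabels:
      app: servicea                # label key must match ServiceA's pod labels
  action: ALLOW
  rules:
  - to:
    - operation:
        methods:
        - GET
        paths:
        - '*/adminEP'
    when:
    - key: request.auth.claims[role]
      values:
      - admin                      # assumed role name; our test JWT only grants "customer"
```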

Moreover, we need to configure the Service Mesh with the location where the JWT can be validated.

We don’t want to spend too much time setting up the JWT and the associated key sets (JWKS), but reuse some existing ones. If you are interested in all the details, please check out my previous post about JWT.

Let’s just add this object to our namespace to get this accomplished:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: user-jwt
spec:
  jwtRules:
  - issuer: "[email protected]"
    jwksUri: ""

Testing RBAC with the Service Mesh

Now, we want to test the functionality.

  1. You can use an existing token that contains the role “customer”:

    token=$(curl -s)

  2. Now, you can test the different end-points:
    • without token:
      • userEP: HTTP 403
      • adminEP: HTTP 403
    • with token:
      • userEP: HTTP 200
      • adminEP: HTTP 403


A Service Mesh is a very convenient way to manage microservice applications by spanning an overall governance layer. This governance layer can also be used for RBAC.


Advantages:

  • RBAC policies are native Kubernetes objects and can thus be managed nicely like the other Kubernetes objects of the project (e.g. via GitOps)
  • The code doesn’t need to be polluted with any annotations or commands
  • If there are changes, the application doesn’t need to be redeployed or restarted


Limitations:

  • The policies are defined by the Service Mesh implementation and might face certain limitations (e.g. Istio currently does not support regex matching for paths)
  • The RBAC rules are only applied at the entry of the Service Mesh (Gateway, VirtualService) and not for downstream services.
