How to secure microservice applications with role-based access control? (6/7)

May 15, 2023

Option: API Gateway

Photo Source: Jeswin Thomas (www.pexels.com)

In the last blog, “OpenID Connect & Keycloak” (part 5), we described how a 3rd-party component (Keycloak) can provide intermediary services for trust and security. Among other things, Keycloak takes care of the generation and configuration of the JWT token. The good thing: as all involved services trust Keycloak, they don’t need to establish trust among each other. This becomes ever more relevant the more services participate in communication chains.

But this of course requires that each service knows how to “talk” OpenID Connect and Keycloak, which is obviously a show-stopper for applications that don’t support OpenID Connect and/or Keycloak – either because:

  • they are based on legacy technology and therefore adding this functionality would not be viable
  • they are developed externally and are a black-box which can’t (easily) be extended

For these scenarios, a “proxied” solution with an API Gateway might do the trick. That means the application itself is not touched; a separate component (the “API Gateway”) takes care of the communication with Keycloak and enforces the RBAC policies.

In Part 1, we provided the context for the whole blog series. We recommend you read it first; otherwise you’ll miss out on the context.

Below you find an overview of the content of this blog series. Just click on a link to jump directly to the respective blog part:

Blog Part | Implementation Option | Description
(2/7) | HTTP Query Param | This is the most basic module where the “role” is transferred as an HTTP Query Parameter. The server validates the role programmatically.
(3/7) | Basic Authentication | A user agent uses Basic Authentication to transfer credentials.
(4/7) | JWT | A JSON Web Token (JWT) codifies claims that are granted and can be objectively validated by the receiver.
(5/7) | OpenID Connect and Keycloak | For further standardization, OpenID Connect is used as an identity layer. Keycloak acts as an intermediary to issue a JWT token.
This blog (6/7) | Proxied API Gateway (3Scale) | ServiceB uses a proxied gateway (3Scale) which is responsible for enforcing RBAC. This is useful for legacy applications that can’t be enabled for OIDC.
(7/7) | Service Mesh | All services are managed by a Service Mesh. The JWT is created outside and enforced by the Service Mesh.

What do we want to achieve in this blog part?

The integration with OpenID Connect and Keycloak is relatively easy. For most programming languages and components, out-of-the-box integration libraries exist that make the implementation straightforward. But there are situations where services can’t be amended. For these scenarios, a “proxied gateway” solution might be the only way to also incorporate legacy services into a common security infrastructure.

In this blog part, we will simulate that ServiceB is a legacy service that can’t be amended and therefore has to be proxied by an API gateway. The to-be architecture is depicted in the diagram below:

Architecture Overview:

In this blog part, an additional component is brought into the overall picture. Compared with the security flow from the previous blog, the API gateway takes over most of the duties from ServiceB:

  • (1) One of the secured end-points of ServiceA is called

  • (2) ServiceA redirects to Keycloak and asks for authentication

  • (3) Keycloak performs the authentication (in the simplest form via a login page) and sends back a JWT token – of course only if the authentication was successful. The content of this JWT token is specified in Keycloak (e.g. audience of the token, roles, …).

  • (4) ServiceA uses the JWT token to call the API gateway of ServiceB

  • (5) The API gateway validates the token. That means it checks that the token has been issued by Keycloak and also contains the right “claims” to access the end-point

    (Remark: The called service could potentially also delegate the validation to Keycloak for each request. This would be the cleanest design, particularly if tokens have short validity durations. The downside is more traffic and a higher reliance on Keycloak.)

  • (6) If the validation is successful, the API gateway passes the request on to ServiceB.
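
To make step (5) more tangible: among other claims, a Keycloak-issued JWT contains the issuer and the granted realm roles – exactly what the gateway checks. A decoded token payload could look roughly like this (all values are illustrative placeholders):

    {
      "iss": "https://[sso-host]/auth/realms/opensourcerer",
      "exp": 1684144800,
      "azp": "[client_id]",
      "preferred_username": "user",
      "realm_access": {
        "roles": ["users"]
      }
    }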


In comparison to the previous scenario, the following changes need to be implemented:

  • In order to simulate a “black-box” application, we will strip off all RBAC rules from ServiceB
  • We need to add an API Gateway (3Scale). For ease of use, we will use the freely available “Red Hat OpenShift API Management” managed service.
  • As this component needs to communicate with other components, we can’t easily deploy everything locally. Thus, we will use Red Hat OpenShift as a container platform. For convenience, we will use the same OpenShift cluster that hosts the API Management service.

Prerequisites

We are using the following tools:

Code Base:

As we will re-use almost all code and configuration settings from the previous blog, you can either:

  • take the code base of the previous blog and do the following clean-up:
    • delete all role annotations from ServiceB, e.g.:

      @RolesAllowed("admin")
    • remove the extension “oidc” from ServiceB

  • or clone the code base from here to have a clean start
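
After this clean-up, a ServiceB end-point is just a plain JAX-RS resource without any security annotations. A minimal sketch (class, method and greeting are assumptions based on the end-points used in this series; depending on your Quarkus version the imports come from javax.ws.rs or jakarta.ws.rs):

    package org.acme;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/serviceB")
    public class GreetingResource {

        // No @RolesAllowed annotation anymore: ServiceB behaves like a
        // "black box" and relies on the API gateway to enforce RBAC.
        @GET
        @Path("/userEP")
        @Produces(MediaType.TEXT_PLAIN)
        public String userEndpoint() {
            return "Hello user!";
        }
    }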

Implementation

We will explain step-by-step how you can achieve multi-service RBAC with an API gateway as a proxy. If you are only interested in the end result, you can clone this from git here.

As we cannot mock all components locally, we will move to a central platform. Because of the easy integration, this central platform will be Red Hat OpenShift, an enterprise-ready Container & Kubernetes platform.

Red Hat provides a “Developer Sandbox” which is a hosted and managed environment. This also includes the API Management solution Red Hat 3Scale. The first step is to register for this environment and get it provisioned.

Moreover, we will use Red Hat Single Sign-On (RH-SSO), Red Hat’s supported build of Keycloak.


Activating the “Red Hat OpenShift API Management” service

  1. Go to https://developers.redhat.com/products/red-hat-openshift-api-management/getting-started



  2. Click on “Try API Management”:
    • Register (if you haven’t done this earlier with Red Hat)
    • Log in with “DevSandbox”



  3. You should see an OpenShift Console with an empty project (e.g. “[username]-stage”)

  4. You have to create an “APIManagementTenant”:
    • In the menu (left side) click on “Search”
    • In the Search pane, filter the “Resources” for “APIManagementTenant”



    • Click on “Create APIManagementTenant”

      This displays a YAML manifest. You can leave the defaults (a sketch of such a manifest is shown after this list).



    • Click on “Create”
    • Click on the “YAML” tab; the “provisioningStatus” should show: 3Scale account ready
    • Copy the “tenantURL” and open it in a browser

  5. Welcome to the “3Scale API Management Dashboard”. This is the GUI for 3Scale where you can configure the API Gateways that we will need later on.
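
For reference, the manifest from step 4 is quite small. A sketch of what it roughly looks like (the apiVersion/group is an assumption based on the RHOAM operator and may differ in your environment):

    apiVersion: integreatly.org/v1alpha1
    kind: APIManagementTenant
    metadata:
      name: example
      namespace: [username]-dev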


Congratulations! You have activated the “Red Hat OpenShift API Management” sandbox and can now deploy all your components.

Deploying ServiceA and ServiceB on OpenShift

So far, ServiceA and ServiceB were running locally. Now, we want to build container images and deploy them to OpenShift.

(Remark: We will use the “traditional” way to build and deploy images which usually takes several minutes. If you want to have a more agile way, there are developer tools like odo that can push changes into a running container!)

  1. Add the “openshift” extension to both services

  2. Add the following configuration properties to the application.properties file of both services. They guide the naming and configuration of the services. Luckily, we don’t need to deal with Dockerfiles and the like – they are automatically generated by the Quarkus openshift extension:

    quarkus.kubernetes.deploy=true
    quarkus.container-image.group=[name of the OpenShift project, e.g. skraft-dev]
    quarkus.openshift.route.expose=true
    quarkus.openshift.deployment-kind=deployment
    quarkus.openshift.part-of=keycloak-3scale-demo


    For ServiceA add:
    quarkus.openshift.name=servicea
    quarkus.rest-client."org.acme.ExternalService".url=http://serviceb

    For ServiceB add:
    quarkus.openshift.name=serviceb

  3. Connect to the OpenShift cluster:
    The easiest way is to just copy the whole command from the “OpenShift Web Console”:
    • In the top right corner (under your username) click on the little arrow…


    • Select “Copy login command”
    • Click on “Display Token”
    • Copy the command from the box “Log in with this token”
    • Paste this command in your terminal.



      It should confirm the successful login and also point (with a star) to the right project (e.g. “skraft-dev”)

  4. Now, you can just use the standard “mvn clean package” command to build the project and the container image, and to deploy it to OpenShift. Do this for both services.



  5. Go to the “Topologies” view in the OpenShift console and check that everything was deployed correctly.
    • The circles of ServiceA and ServiceB should be dark blue.
    • Click on both circles and choose “View logs” in the right window.



      The logs should indicate that the container has been started and is listening on 0.0.0.0:8080.


    • Test the connection:
      Click on the icon with the arrow associated with ServiceA.



      A new browser window should open up. Add the endpoint “serviceA/publicEP” to the URL. You should see the greeting from ServiceB, which indicates that the connection has been established successfully.

      I don't care which role you have. I always greet you!
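
      Alternatively, you can test the route from a terminal (the hostname is a placeholder – copy the real one from the OpenShift route of ServiceA):

      curl http://servicea-[project].apps.[cluster-domain]/serviceA/publicEP
      I don't care which role you have. I always greet you!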

Congratulations! Now both services are running on OpenShift and the connection between them has been established successfully.

Deploying Keycloak / RH-SSO to OpenShift

Of course, we also need Keycloak on OpenShift. So far, it was conveniently started for us by the Quarkus Dev Services. Now we need to take care of this ourselves. Besides, it’s good practice to see what’s actually going on.

There are many different sources and ways to deploy Keycloak. We’ll use a template from the Red Hat Container Catalog and import it directly from the Web Console:

  1. In the “Developer” view, in the top left corner click on “+Add”:



  2. Click on “Developer Catalog” -> “All Services”

  3. In the filter search for “Red Hat SSO” and choose “Red Hat Single Sign-On 7.5 on OpenJDK (Ephemeral)”



  4. Click “Instantiate Template”

  5. Leave everything as-is, except:
    • RH-SSO Administrator Username: admin
    • RH-SSO Administrator Password: admin
    • RH-SSO Realm: opensourcerer

      Of course, you can also choose other values. But please remember them, because we will need them later.
  6. (optional) You can move the “sso” circle into the canvas of the “keycloak-3scale-demo” application by holding SHIFT and dragging and dropping it.

    (Moreover, you can also add arrows between the services, but this is just for visualization and has no effect.)

    The end result should look like this:




    ServiceA, ServiceB and sso (Keycloak) are up and running (visualized through the dark-blue borders of the circles).

  7. Testing the connection with sso / keycloak:
    • Click on the arrow (top right) of the sso circle.



    • Then, click on the “Administration Console” and log in with “admin/admin” (or the username / password that you have chosen when you were instantiating the template)

    • Check that RH-SSO opens with the “opensourcerer” realm selected



    • Click on “OpenID Endpoints Configuration”



      This displays all the important end-points.

    • Copy the “Issuer URL” and open it in a separate browser window. It should display meta-information about the realm in JSON.
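
      The first entries of this JSON document look roughly like this (the host is a placeholder; the field names are standard OpenID Connect discovery metadata):

      {
        "issuer": "https://[sso-host]/auth/realms/opensourcerer",
        "authorization_endpoint": "https://[sso-host]/auth/realms/opensourcerer/protocol/openid-connect/auth",
        "token_endpoint": "https://[sso-host]/auth/realms/opensourcerer/protocol/openid-connect/token",
        "jwks_uri": "https://[sso-host]/auth/realms/opensourcerer/protocol/openid-connect/certs"
      }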

Congratulations! You have successfully deployed RH-SSO and it is reacting to requests!

Configuring the API Gateway for ServiceB

Instead of connecting the two services directly via Keycloak, we will now utilize a different approach: an API gateway will act as the main intermediary. Let’s configure this API gateway:

  1. Open the 3Scale Dashboard



  2. We will now create a so-called API Product which includes an API Backend that points to ServiceB. This might seem a bit over-complicated, but it allows for defining very rich APIs that provide a common facade to many different back-end systems.
    • Click on “Create Product”
    • Choose “Define manually”

      (You could also flag your ServiceB deployment with certain labels/annotations as described here to have 3Scale discover it automatically!)
    • Provide a “name”, e.g. “serviceb-API-product”
    • Click on “Create Product”



  3. Add a backend to the newly created API product
    • Click on the newly created Product and expand the “Integration” menu (on the left side)



    • Click on “Backends” and then on the blue button “Add backend”

      Here, we could re-use already defined backends. So far, only the default “echo” backend is present. Thus, we have to create a backend that points to the ServiceB endpoint.

    • Click on “+ Create a backend”
    • Provide a name, e.g. “serviceb-backend”
    • Private Base URL: “http://serviceb.skraft-dev.svc.cluster.local”

      As the API gateway is running on the same OpenShift cluster, but in a different namespace, we need to use the namespace-scoped, cluster-internal service URL.

    • Click on “Create backend”
    • Now, we have to anchor this backend within the path structure of our API product. We keep it simple and just mount it at the root path. Click on “Add to product”.




  4. (optional) You could add additional “Policies”, “Mapping Rules” and “Methods and Metrics”. These are not needed at this stage.

  5. Click on “Configuration” and “Promote v1.0 to Staging APIcast”

    This adds a route for our ServiceB to the existing 3Scale API gateway.

  6. Let’s test this out by copying the URL from the “Example curl for testing” (displayed in the section “Staging APIcast”) into a browser.

    We will get an “Authentication failed” error because, by default, all endpoints are secured. But at least we can already reach the end-point.

  7. If we click on “Analytics -> Integration Errors”, we should see a log entry with an error message!



    This already gives a hint why the authentication has failed: the “user key” hasn’t been initialized yet. This is what we are going to do in the next section.

Testing the API Gateway with a user key

Before we start to enable our API product for OpenID Connect and integrate it with Keycloak, let’s set up basic authentication with a simple user key. This will allow us to test the integration.

Let’s first clarify some terms that play an important role in 3Scale:

What is an application plan?

An application plan is an object in 3Scale to manage who is allowed to access an API product and how. This includes amongst others:

  • Rate limits
  • Credentials
  • (optional) Costs

What is an application?

Applications are consumers of the API products that subscribe to an application plan. Again, this sounds a bit over-complicated, but allows addressing complex use cases for different types of internal and external consumers of your APIs.

  1. Let’s create a basic application plan:
    • Click in the menu on “Applications -> Application Plans” and then on the blue button (top right corner) “Create application plan”
    • Choose a name, e.g. “ServiceB_ApplicationPlan_Basic”
    • (optional) You could specify Trial periods, Setup fee, Cost per month, etc.
    • Click on “Create application plan”
  2. Now, consumers can “subscribe” to this application plan. Don’t get confused by the terminology: an application is basically an object that links a consumer of the API with an application plan.
    Please note: These consumers are not application users, but usually software developers who are implementing an application that consumes this API.

    Let’s now create an application:
    • In the menu click on “Applications” -> “Listings”
    • Click on “Create application”
    • Choose the following properties:
      • Account: [choose one of the existing accounts, e.g. “Developer”]
      • Application plan: [choose the application plan that we have created in the previous step]
      • Name: [choose a meaningful name]
      • Description: [choose a meaningful description]



    • Click on “Create application”

  3. As we have left the default Authentication (“user_key”), the application automatically generates a “User Key” which needs to be added to the URL in order to access the API Gateway of ServiceB.



  4. Let’s test this connection:
    • Go back to main Dashboard (click on the “Red Hat 3Scale API Management” icon in the left top corner)
    • Click on the API product that we have created
    • Click in the menu on “Integration -> Configuration”
    • You see that the “Example curl for testing” has already been amended with the user_key query parameter.

  5. If we add this User Key to the URL, we should be able to access the end-points of ServiceB – via the API gateway! The URL should be something like serviceb-api-product-xxx-apicast-staging (see the example call after this list).

  6. (optional) You can now check that the API gateway was hit successfully.
    • Go to Analytics -> Traffic:


  7. (optional) You can add some features to the application plan, e.g. limits:
    • Click on the Application Plan that you have generated earlier
    • Click on “Limits”
    • Click on “+ New usage limit”

    • Choose a “period” and “max value”
    • Click on “Create usage limit”



      This will be in effect immediately. No need to republish anything!
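
As a concrete example for step 5, the test call could look like this (host and key are placeholders – copy the real values from the “Example curl for testing”):

    curl "https://serviceb-api-product-xxx-apicast-staging.apps.[cluster-domain]/serviceB/publicEP?user_key=[USER_KEY]"
    I don't care which role you have. I always greet you!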

Congratulations! You have established a complete communication flow through the API gateway to ServiceB, everything deployed on OpenShift.

Configuring OpenID Connect for the API Gateway

But what about Keycloak and OpenID Connect? Well, for testing purposes we have so far only used a simple user key for authentication.

Now, we need to change the authentication settings from “user_key” to “OpenID Connect”. This will also bring Keycloak into the picture.

Set up automatic syncing between RH-SSO and 3Scale

The standard procedure would now be to exchange keys between 3Scale and RH-SSO to enable secure communication, and then to create a client in RH-SSO that corresponds to the API gateway. But all this can be automated, which is particularly helpful if there are multiple API end-points managed by 3Scale. The next steps only have to be performed once to enable automatic syncing.

  1. Go to the RH-SSO admin console (login is “admin/admin”)

  2. Create a new client:
    • Client ID: zync-sso
    • Access Type: confidential
    • Standard Flow Enabled: Off
    • Direct Access Grants Enabled: Off
    • Service Accounts Enabled: On

  3. After saving, click on the tab “Service Account Roles”
    (only visible if you have set “Service Accounts Enabled” to On)

  4. Click on “Client Roles” and search for “realm-management”. Then choose the role “manage-clients” and click on “Add selected >>”



  5. From this client, copy the credentials; we will need them in the next section:



Enabling the API product for OpenID Connect

Now that we have automatic syncing enabled, let’s go back to 3Scale and switch the previously created API product to “OpenID Connect” authentication.

  1. In the 3Scale Dashboard, go to “Integration” -> “Settings”

  2. Scroll down to the “Authentication” section and switch from “API Key (user_key)” to “OpenID Connect”

  3. In the “AUTHENTICATION SETTINGS”:
    • For “OpenID Connect Issuer Type” choose “Red Hat Single Sign-On”
    • For the “OpenID Connect Issuer”, we need to point to the synchronization client (“zync-sso”) that we have just created in SSO.

      The format is:
      https://[client_id]:[client_secret]@[SSO_URL]

      where:
      • client_id is “zync-sso”
      • client_secret is the secret of the “zync-sso” client
      • SSO_URL is the URL of the SSO server including the realm. This can be found in the RH-SSO admin console under “Configure” -> “Realm Settings” and clicking on “OpenID Endpoint Configuration”



        This opens up a browser window that provides the “issuer” information (first entry):



    • For “Credentials Location” choose “As HTTP Headers”
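
      Putting the pieces of the issuer URL together, the “OpenID Connect Issuer” value could look like this (host and secret are placeholders):

      https://zync-sso:[client_secret]@[sso-host]/auth/realms/opensourcerer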

  4. Click on “Update Product” (at the bottom).

  5. As we have made changes, we need to re-deploy (“Promote”). Thus, in the “Configuration” view, click on “Promote v.2 to Staging APIcast”.

  6. Now, we should have configured OpenID Connect correctly and the application should provide a “Client ID” and a “Client Secret” (instead of a user_key).

    Let’s go to the Application that we have previously generated.

  7. You should see the credentials instead of the user_key:



    (Remark: In case the user_key is still shown, the synchronization can be triggered by editing the application – which is exactly what we are going to do in the next step!)

  8. Let’s edit the application by adding the “Redirect URL”. This is required as SSO needs to know where to redirect to after authentication.

    Click on “edit” and copy/paste the URL of ServiceA.

    (Why ServiceA? Well, because after the login the user is redirected back to ServiceA!)

    Click on “Update”

Test ServiceA with the OpenID credentials

Now, the automatic synchronization should kick in and an OpenID client should automatically be generated in RH-SSO. As a next step, we need to provide these credentials to ServiceA to be used for authentication.

Check the OpenID client in the RH-SSO admin console

  1. Click on “Clients”
  2. Look for a client whose “Client ID” and “Client Credentials” match the properties of the application that has been created in 3Scale.

Great! The automatic synchronization has worked!

Configuring ServiceA to use the API gateway

So far, ServiceA still calls ServiceB directly. This is what we want to change: ServiceA will now obtain a JWT from Keycloak (with the client ID that was generated via the 3Scale application) and will then call the API gateway end-point.

  1. For ServiceA, we need to add the “oidc” extension
  2. Moreover, we need to add configuration settings:
    • point to the sso server:
      quarkus.oidc.auth-server-url=[URL of sso]

      (Remark: You have to copy the whole URL, including the “https://” prefix)
    • configure ServiceA to authenticate to sso with the credentials that have been generated by 3Scale:
      quarkus.oidc.client-id=[client_id]
      quarkus.oidc.credentials.secret=[client_secret]
    • point to the API gateway:
      quarkus.rest-client."org.acme.ExternalService".url=[URL of the API gateway]
    • In order to obtain a token, we will configure ServiceA as a web application and mark its paths as “authenticated”:

      quarkus.oidc.application-type=web-app
      quarkus.http.auth.permission.authenticated.paths=/*
      quarkus.http.auth.permission.authenticated.policy=authenticated
    • We also have to add a Java class that adds the JWT to the request header of the outgoing call; a sketch follows below.
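
      A minimal sketch of such a class (class name and package are assumptions; depending on your Quarkus version the imports come from javax or jakarta):

      package org.acme;

      import javax.inject.Inject;
      import javax.ws.rs.client.ClientRequestContext;
      import javax.ws.rs.client.ClientRequestFilter;
      import javax.ws.rs.ext.Provider;

      import io.quarkus.oidc.AccessTokenCredential;

      // Adds the access token of the currently logged-in user as a
      // Bearer token to every outgoing REST client request.
      @Provider
      public class TokenPropagationFilter implements ClientRequestFilter {

          @Inject
          AccessTokenCredential accessToken;

          @Override
          public void filter(ClientRequestContext requestContext) {
              requestContext.getHeaders().add("Authorization",
                      "Bearer " + accessToken.getToken());
          }
      }

      Register the filter on the REST client interface with @RegisterProvider(TokenPropagationFilter.class). Alternatively, Quarkus offers a token propagation extension (quarkus-oidc-token-propagation) that can do this for you.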

  3. Redeploy the client with “mvn clean package”

  4. Try to access the end-points. If everything works fine, you should be re-routed to the RH-SSO login page. The question now is: which user do we need to log in?!

    Don’t mix this up with the API consumer user (“jdoe”) we have used before to define the 3Scale Application that contains the OpenID credentials.

    We need to create application user(s) to test our service-to-service communication. Let’s quickly do this!

  5. Go to the “RH-SSO” console and create the following resources:
    • User: user
    • User: admin
    • Role: users
    • Role: admins

  6. Add the role “users” to “user” and “admins” to “admin”

  7. Now, access any of the end-points!

Congratulations! Now, we have the whole architecture deployed!

That’s great! But, wasn’t there something about RBAC?

Adding a policy for RBAC in the API gateway

So far, we have established the validation of OpenID Connect tokens by the API gateway. But there is no restriction based on roles yet.

Thus, let’s add a policy that checks for the role “users” on the “/userEP” endpoint:

  1. In the 3Scale dashboard, go to “Integration -> Policies” and click on “+ Add policy”


  2. You see a list of out-of-the-box policies that can be added to each API. The one that we will use is “RH-SSO/Keycloak Role Check”. Find it and click on it.

    This will automatically add this policy to the policy chain. Please note that the order is important and can be changed. In our case, the newly added policy should come AFTER the “3Scale APIcast” policy.

  3. In order to configure the policy, we need to click on it. Now, we have several options:
    • whether we want to have a “whitelist” or “blacklist” policy: we choose “whitelist”
    • what “scope” we want to have. There we have basically 2 options:
      • CLIENT_ROLES
      • REALM_ROLES

        These are concepts of OpenID Connect and Keycloak, and a discussion would go beyond the scope of this blog. We have to choose “REALM_ROLES” as we have specified “users” and “admins” realm-wide (and not only for a specific client).
    • In the “REALM_ROLES” section click on the “+”:
      • Choose “name_type”: Evaluate “value” as plain text
      • Enter “name”: “users” (we want to allow the role “users” on this endpoint)
    • For the “resource” enter “/serviceB/userEP” as this policy should only apply to this path.
    • For the methods you can keep “ANY”.

  4. Click on “Update Policy” and “Update Policy Chain”

  5. In the “Configuration” tab click on “Promote to Staging APICast”. This puts this policy into effect.

  6. Test again all end-points with user “user”:
    • userEP should work
    • adminEP should NOT work
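
For reference, the resulting entry in the policy chain corresponds roughly to the following JSON configuration (a sketch based on the APIcast “keycloak_role_check” policy; verify the exact schema against the 3Scale documentation):

    {
      "name": "keycloak_role_check",
      "configuration": {
        "type": "whitelist",
        "scopes": [
          {
            "realm_roles": [ { "name": "users" } ],
            "resource": "/serviceB/userEP"
          }
        ]
      }
    }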

(Hint: If you want to create more complex policies, it is easier to use the 3Scale Toolbox, a CLI-based utility to import/export 3Scale objects.)

Conclusion

This was quite some additional set-up. But we have now achieved a complete decoupling between the newly introduced API layer and the service layer. Very nice!

Through this, we have full control over the API, can monitor the usage, recombine different end-points to a new API, etc. This is particularly useful for implementing the “Backend for Frontend” pattern.
