Old-School vs. New-Technology Security – Why security still matters in a Cloud Native world.

July 12, 2021

If the COVID-19 pandemic taught us one lesson, it is how dependent we are on data exchange over the internet. Just think back: how did you get through 2020/2021? How did you exchange data with service providers or your employer? How did you entertain yourself under lockdown, and how did you stay connected to your loved ones and friends? Not all, but most answers will involve some kind of service connected to the internet. And since this data became the new oil or gold for companies and hackers alike, it needs protection. Protection in the form of IT security, or security tools.

IT security comes in many different forms. While some companies or developers try to sell you their solution as the silver bullet, I love to burst that bubble: there is no perfect “all in one” solution to protect your data. Security should be seen as a multi-layered approach, starting from PHYSICAL (yes, IRL physical) access to data storage facilities up to the bits and bytes we exchange. If you are having a hard time imagining a multi-layered security approach, think of a hotel. In a good hotel, not everyone is allowed to access the facilities; guests are checked for their reservations. With a valid booking you usually get access to certain locations within the hotel. Of course there is your room, where only you and the personnel (cleaning staff) have access during your stay, and to top it all off, there is the room safe, which only you can open during your stay. In short, there are multiple occasions where access restrictions are in place, and the access rights get slimmer and slimmer the more personal things get for you.

Modern Attack Patterns

Attackers will not simply walk into your business lobby and sit down at a PC like in action movies from the early 90s (or as described in The Cuckoo’s Egg or Hackers). Today’s attacks are highly sophisticated and (also) multi-layered. They usually start with intelligence gathering: spying on targets online and sometimes offline, watering-hole attacks (lying in wait at a common place, like a forum, to spy on the prey) or sending spam. When enough data is collected, the next phase is finding loopholes, either at the human layer (phishing) or in the IT security itself. From there, remote access is usually set up, or let’s say “a way to exit with your loot easily and quickly without any noise”. Then the attacker’s snooping fun starts: jumping from internal system to internal system to gather as much valuable information as they can get. With the data collected, it’s off to the dark web. Time to start an auction.

Multi-Layered Security Concept

So let’s have a look at multi-layer security approaches in the modern IT world to tackle those sophisticated attacks:

  • Human Layer – The infamous “Layer 8”. Yes, we humans are part of every security effort, and it usually boils down to simple common sense (no, you DON’T have to click on everything; no, you DON’T let a pizza delivery person into the office after 10pm, etc.)
  • Perimeter Security – Fences, locks, key cards, IDs: all of these are common in every person’s life. Sadly, they have become so common that they have also turned into a big security risk in modern times. We believe a strong wall will keep out the bad guys. That may be true, but what about their droppings, e.g. the infamous USB stick in a parking lot?
  • Network Security – Firewalls, IDS/IPS, proxies, sandboxes and much more. It has also become a commodity to have not only physical perimeter protection but also virtual protection in the form of network security. Due to lateral movement, which can come from a disgruntled co-worker digging for gold for their next employer just as well as from an attacker, the network security portfolio became even wider and now covers internal network traffic as well.
  • Endpoint Security – The classics: anti-malware protection, application guidelines (which software you can run or even install), data protection (which data you can copy, open or consume), and user access and rights management.
  • Application Security – This is usually seen as an extra layer of protection next to classic anti-malware protection. You want to make sure that only signed software, or software your company allows, is running on your business computers. Sometimes attackers attach an additional payload to well-known programs: on the outside the program looks normal, but in the background extra programs, like trojans, are executed.
  • Data Security – Businesses live from collecting, sharing and working with data; that is just a fact. Be it something simple like customer addresses or something very critical like unreleased patents or other inventions, we are working with data. Data security usually makes sure that certain data is not shared with or accessed by the wrong people. It is also an effective way to fight phishing attacks: with data security in place, even when an “urgent boss request” rushes in, you do not share data blindly.
  • Mission Critical Assets – “The crown jewels”, “your bread and butter”, call them whatever you like; all the solutions mentioned above, or to put it negatively, all the “attack vectors” above, stand in front of them. And they must be secured, with one or many solutions.

So now that we have brushed up our knowledge of multi-layered attack patterns and multi-layered security, aren’t we missing something? Shift left? Containers? DevOps without Sec?

A day in the life of a developer (without security)

You might say: hey, all this fancy new development world is already secured by the things mentioned above, so what’s the worry? In real life, these security methods do not always play nice with the modern container technology world.

I used to work as an IT admin in an R&D facility. We had the newest and best hardware and software money could buy; we had all that fancy stuff mentioned above. Sadly, the developers were treated as “small gods”, since they brought in fresh research results and therefore more money. So when a developer said: “This security stuff is slowing me down! It hinders my work! This is no good!”, the security tools were turned off!

Where could that lead us? Let’s say one of those developers works, develops and surfs on such a now highly insecure machine. Somehow he gets his hands on a solution to a current problem within “the last pages of a Google search”. The solution comes in the form of a binary that solves a certain problem within his application. He transforms the app into a container. The container image is based on a very old version he found on a public registry; well, the version works for him, so why bother? He just adds another binary, so what could be the harm?

Within this little story, we have a completely new risk pattern:

  • The binary the developer found could really be a solution, but maybe with some extra baggage, like a crypto miner or a remote access tool.
  • The old container image could include extra libraries and binaries with vulnerabilities that might already be fixed in newer versions!
  • With security disabled, nothing is detected at this stage, and sadly it won’t be at later stages either, because container runtimes and orchestrators do _not_ come with deep-scan security methods by default!

And all our other fancy security layers? None of the ones described above are trained to “unpack” container images or integrate into container runtimes. This can result in a lot of costs in the form of application outages due to fixes, or even worse, loss of reputation when the dangerous app leaks data.

We need to Shift Left

The question all developing companies must ask themselves is: how can I include my developers in my multi-layered security approach without hindering their work? The answer is to shift left! In simple terms: instead of looking (only) at runtime security “on the right”, look at the origin of code and applications “on the left” too. Shift-left approaches are:

  • Source Code Scanning: Scan your source code before it is built into whatever form it ships in. This reduces fixing costs dramatically, as you check your code right from the start. An easy-to-use and simple tool is KubeLinter, which scans your Kubernetes YAML files for best practices and security optimizations.
  • Scan your builds: Within CI/CD pipelines, containers became a key factor in gaining speed and flexibility. But those containers should not be like the one in my example. In a modern cloud native development environment you usually build containers to transform your code into a final result; that result should be scanned before it can move into your image repository or onto your cluster. So scanning during build time is another “left factor”.
  • Scan during deployment: You are not always building your own applications; sometimes you consume them from third parties, like an external development team or an external image registry. Still, you want to make sure that a container does not run when it is insecure. This is where scanning during container deployment comes in.
  • Scan during runtime: Let’s be clear about something: a container is a bunch of processes running in a small, closed environment established by kernel features. Containers are not meant to be maintained, upgraded or logged into. That also means containers are not meant to have security components installed inside them, like anti-malware software and such. This is why you need a solution which uses container orchestration features to check your containers at runtime. With this approach you do not need to hassle your developers with “adding extra security”, which Ops are usually known for.
  • Compliance Checks: This is actually an old concept, adapted to the container world. Containers can only be accessed through certain configurations of your container runtime, container orchestration or your deployment settings. Even though your application runs and is secured, it might still have an unnecessary access right or open port. Here, reporting and enforcement solutions can help protect your assets from unnecessary exposure.
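To make the source code scanning step concrete, here is a sketch of the kind of Kubernetes YAML KubeLinter flags out of the box. The deployment name and image are made up for this example; the checks mentioned in the comments are among KubeLinter’s defaults:

```yaml
# sketch.yaml - a deliberately careless Deployment (hypothetical example)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: demo-app:latest   # mutable "latest" tag: not reproducible
          # KubeLinter's default checks would flag, among other things:
          # - no resources.requests/limits set
          # - no securityContext (may run as root, privilege escalation allowed)
          # - root filesystem not read-only
```

Running `kube-linter lint sketch.yaml` prints one finding per failed check, so problems like these are caught before the first commit ever reaches a build, which is exactly the “left” end of the pipeline.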

Security is expensive, hard to integrate and overall not attractive

When (that is, “if”) you started your container journey, I bet the last thing you were thinking about was security. You would rather analyse your app: can it run in a container, what are the benefits of containerization, and then, how can I run my containers in a scalable environment? But security? Yes, of course the environment should be secured, but how?

By now, Kubernetes is the container orchestration standard, and with it comes a whole mountain (range) of new possibilities and things to learn before you can actually use it properly to your benefit. A lot of those starting, learning or growing pains can be taken off your shoulders by orchestration platform solutions such as OpenShift. It is “ready to use” Kubernetes, enriched with development tools like pre-certified build containers and container orchestration basics such as image registries, pre-configured and pre-designed by Red Hat. Great, that really is one pain point less in the complete journey. But even though OpenShift is “designed for security” by default, it still carries many “open by default” settings which basically come with Kubernetes, such as network policies.

Security in this environment must be smart, easy to adopt and use, and ready to go. The perfect solution would also not rob you of your precious resources, but rather take working standards and make them “actually usable”!

It’s time to have a look at Advanced Cluster Security.

With Red Hat’s acquisition of StackRox in early 2021, StackRox was adopted into the Red Hat family and the OpenShift strategy, and thereby became Red Hat Advanced Cluster Security (RHACS).

RHACS comes with a big benefit: it leverages features of Kubernetes, such as network policies. RHACS itself will monitor that security changes are upheld, but it is very lightweight in its installation, which tackles one of the problems I stated before: security should not be a burden on your infrastructure. Here is the basic architecture of RHACS, which you can read through in its entirety here.

So how can RHACS actively support the shift left? Let’s have a look at a pipeline example. In my demo, I expect no images to be deployed which contain fixable vulnerabilities, so I created a policy for exactly that.
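As a sketch of how such a check can be wired into a pipeline step, the RHACS CLI `roxctl` can gate a build. The endpoint, token handling and image name below are placeholders, not taken from the original demo:

```shell
# Sketch of a pipeline gate using the RHACS CLI (roxctl).
# Endpoint and image name are placeholders for this example;
# authentication (e.g. an API token) is assumed to be configured.

# Check the freshly built image against the configured RHACS policies.
# roxctl exits non-zero when an enforced policy (such as "no fixable
# vulnerabilities") is violated, which fails the pipeline step.
roxctl image check \
  --endpoint central.example.com:443 \
  --image registry.example.com/myapp:latest

# Optionally pull the full vulnerability report into the build log,
# so developers see the fixable packages right away:
roxctl image scan \
  --endpoint central.example.com:443 \
  --image registry.example.com/myapp:latest
```

Because the policy evaluation happens in RHACS Central rather than in the pipeline itself, the same policy gates every pipeline in the organization consistently.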

Now I cannot build any images in my cluster which include fixable vulnerabilities. This not only provides an extra layer of security, but also gains me another point in certain compliance audits.

So how does this affect my developers? I stated before that developers should not be hassled with blockings, right? Well, in this case, even though we block the developer, we also give him the power to “unblock” himself, i.e. to fix the problem. Let’s see how:

  apk-tools 2.12.1-r0
    CVE-2021-30139 (CVSS 7.5) (Severity Important) - fixed by 2.12.5-r0
      * In Alpine Linux apk-tools before 2.12.5, the tarball parser allows a buffer overflow and crash.

  busybox 1.32.1-r3
    CVE-2021-28831 (CVSS 7.5) (Severity Important) - fixed by 1.32.1-r4
      * decompress_gunzip.c in BusyBox through 1.32.1 mishandles the error bit on the huft_build result pointer, with a
        resultant invalid free or segmentation fault, via malformed gzip data.

Layer: RUN set -x     && addgroup -g 101 -S nginx     && adduser -S -D -H -u 101 -h /var/cache/nginx -s /sbin/nologin -G nginx -g nginx nginx     && apkArch="$(cat /etc/apk/arch)"     && nginxPackages="         nginx=${NGINX_VERSION}-r${PKG_RELEASE}         nginx-module-xslt=${NGINX_VERSION}-r${PKG_RELEASE}         nginx-module-geoip=${NGINX_VERSION}-r${PKG_RELEASE}         nginx-module-image-filter=${NGINX_VERSION}-r${PKG_RELEASE}         nginx-module-njs=${NGINX_VERSION}.${NJS_VERSION}-r${PKG_RELEASE}     "     && case "$apkArch" in         x86_64)             set -x             && KEY_SHA512="e7fa8303923d9b95db37a77ad46c68fd4755ff935d0a534d26eba83de193c76166c68bfe7f65471bf8881004ef4aa6df3e34689c305662750c0172fca5d8552a *stdin"             && apk add --no-cache --virtual .cert-deps                 openssl             && wget -O /tmp/nginx_signing.rsa.pub https://nginx.org/keys/nginx_signing.rsa.pub             && if [ "$(openssl rsa -pubin -in /tmp/nginx_signing.rsa.pub -text -noout | openssl sha512 -r)" = "$KEY_SHA512" ]; then                 echo "key verification succeeded!";                 mv /tmp/nginx_signing.rsa.pub /etc/apk/keys/;             else                 echo "key verification failed!";                 exit 1;             fi             && apk del .cert-deps             && apk add -X "https://nginx.org/packages/mainline/alpine/v$(egrep -o '^[0-9]+\.[0-9]+' /etc/alpine-release)/main" --no-cache $nginxPackages             ;;         *)             set -x             && tempDir="$(mktemp -d)"             && chown nobody:nobody $tempDir             && apk add --no-cache --virtual .build-deps                 gcc                 libc-dev                 make                 openssl-dev                 pcre-dev                 zlib-dev                 linux-headers                 libxslt-dev                 gd-dev                 geoip-dev                 perl-dev                 libedit-dev                 mercurial                 bash               
  alpine-sdk                 findutils             && su nobody -s /bin/sh -c "                 export HOME=${tempDir}                 && cd ${tempDir}                 && hg clone https://hg.nginx.org/pkg-oss                 && cd pkg-oss                 && hg up ${NGINX_VERSION}-${PKG_RELEASE}                 && cd alpine                 && make all                 && apk index -o ${tempDir}/packages/alpine/${apkArch}/APKINDEX.tar.gz ${tempDir}/packages/alpine/${apkArch}/*.apk                 && abuild-sign -k ${tempDir}/.abuild/abuild-key.rsa ${tempDir}/packages/alpine/${apkArch}/APKINDEX.tar.gz                 "             && cp ${tempDir}/.abuild/abuild-key.rsa.pub /etc/apk/keys/             && apk del .build-deps             && apk add -X ${tempDir}/packages/alpine/ --no-cache $nginxPackages             ;;     esac     && if [ -n "$tempDir" ]; then rm -rf "$tempDir"; fi     && if [ -n "/etc/apk/keys/abuild-key.rsa.pub" ]; then rm -f /etc/apk/keys/abuild-key.rsa.pub; fi     && if [ -n "/etc/apk/keys/nginx_signing.rsa.pub" ]; then rm -f /etc/apk/keys/nginx_signing.rsa.pub; fi     && apk add --no-cache --virtual .gettext gettext     && mv /usr/bin/envsubst /tmp/         && runDeps="$(         scanelf --needed --nobanner /tmp/envsubst             | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }'             | sort -u             | xargs -r apk info --installed             | sort -u     )"     && apk add --no-cache $runDeps     && apk del .gettext     && mv /tmp/envsubst /usr/local/bin/     && apk add --no-cache tzdata     && apk add --no-cache curl ca-certificates     && ln -sf /dev/stdout /var/log/nginx/access.log     && ln -sf /dev/stderr /var/log/nginx/error.log     && mkdir /docker-entrypoint.d
  curl 7.74.0-r0
    CVE-2021-22901 (CVSS 8.1) (Severity Important) - fixed by 7.77.0-r0
      * curl 7.75.0 through 7.76.1 suffers from a use-after-free vulnerability resulting in already freed memory being used when
        a TLS 1.3 session ticket arrives over a connection. A malicious server can use this in rare unfortunate circumstances to
        potentially reach remote code execution in the client. When libcurl at run-time sets up support for TLS 1.3 session
        tickets on a connection using OpenSSL, it stores pointers to the transfer in-memory object for later retrieval when a
        session ticket arrives. If the connection is used by multiple transfers (like with a reused HTTP/1.1 connection or
        multiplexed HTTP/2 connection) that first transfer object might be freed before the new session is established on that
        connection and then the function will access a memory buffer that might be freed. When using that memory, libcurl might
        even call a function pointer in the object, making it possible for a remote code execution if the server could somehow
        manage to get crafted memory content into the correct place in memory.
    CVE-2021-22876 (CVSS 5.3) (Severity Moderate) - fixed by 7.76.0-r0
      * curl 7.1.1 to and including 7.75.0 is vulnerable to an "Exposure of Private Personal Information to an Unauthorized
        Actor" by leaking credentials in the HTTP Referer: header. libcurl does not strip off user credentials from the URL when
        automatically populating the Referer: HTTP request header field in outgoing HTTP requests, and therefore risks leaking
        sensitive data to the server that is the target of the second HTTP request.
    CVE-2021-22890 (CVSS 3.7) (Severity Low) - fixed by 7.76.0-r0
      * curl 7.63.0 to and including 7.75.0 includes vulnerability that allows a malicious HTTPS proxy to MITM a connection due
        to bad handling of TLS 1.3 session tickets. When using a HTTPS proxy and TLS 1.3, libcurl can confuse session tickets
        arriving from the HTTPS proxy but work as if they arrived from the remote server and then wrongly "short-cut" the host
        handshake. When confusing the tickets, a HTTPS proxy can trick libcurl to use the wrong session ticket resume for the
        host and thereby circumvent the server TLS certificate check and make a MITM attack to be possible to perform unnoticed.
        Note that such a malicious HTTPS proxy needs to provide a certificate that curl will accept for the MITMed server for an
        attack to work - unless curl has been told to ignore the server certificate check.
    CVE-2021-22898 (CVSS 3.1) (Severity Low) - fixed by 7.77.0-r0
      * curl 7.7 through 7.76.1 suffers from an information disclosure when the `-t` command line option, known as
        `CURLOPT_TELNETOPTIONS` in libcurl, is used to send variable=content pairs to TELNET servers. Due to a flaw in the
        option parser for sending NEW_ENV variables, libcurl could be made to pass on uninitialized data from a stack based
        buffer to the server, resulting in potentially revealing sensitive internal information to the server using a clear-text
        network protocol.

  libgcrypt 1.8.7-r0
    CVE-2021-33560 (CVSS 7.5) (Severity Important) - fixed by 1.8.8-r0
      * Libgcrypt before 1.8.8 and 1.9.x before 1.9.3 mishandles ElGamal encryption because it lacks exponent blinding to
        address a side-channel attack against mpi_powm, and the window size is not chosen appropriately. (There is also an
        interoperability problem because the selection of the k integer value does not properly consider the differences between
        basic ElGamal encryption and generalized ElGamal encryption.) This, for example, affects use of ElGamal in OpenPGP.

  libjpeg-turbo 2.0.6-r0
    CVE-2021-20205 (CVSS 6.5) (Severity Moderate) - fixed by 2.1.0-r0
      * Libjpeg-turbo versions 2.0.91 and 2.0.90 is vulnerable to a denial of service vulnerability caused by a divide by zero
        when processing a crafted GIF image.

  libxml2 2.9.10-r6
    CVE-2021-3518 (CVSS 8.8) (Severity Important) - fixed by 2.9.10-r7
      * There's a flaw in libxml2 in versions before 2.9.11. An attacker who is able to submit a crafted file to be processed by
        an application linked with libxml2 could trigger a use-after-free. The greatest impact from this flaw is to
        confidentiality, integrity, and availability.
    CVE-2021-3517 (CVSS 8.6) (Severity Important) - fixed by 2.9.10-r7
      * There is a flaw in the xml entity encoding functionality of libxml2 in versions before 2.9.11. An attacker who is able
        to supply a crafted file to be processed by an application linked with the affected functionality of libxml2 could
        trigger an out-of-bounds read. The most likely impact of this flaw is to application availability, with some potential
        impact to confidentiality and integrity if an attacker is able to use memory information to further exploit the
        application.
    CVE-2021-3537 (CVSS 5.9) (Severity Moderate) - fixed by 2.9.10-r7
      * A vulnerability found in libxml2 in versions before 2.9.11 shows that it did not propagate errors while parsing XML
        mixed content, causing a NULL dereference. If an untrusted XML document was parsed in recovery mode and post-validated,
        the flaw could be used to crash the application. The highest threat from this vulnerability is to system availability.

So now, in OpenShift, my developer ran his pipeline again. With my policy in place, the build fails because the image contains several FIXABLE vulnerabilities. Within the pipeline logs, the developer receives all the package update information he/she needs to resolve the issue. This saves a lot of time with regard to “let’s fix it in production later”, and money when production goes belly up due to attacks or maintenance.

Additionally, I get more insight into my Kubernetes cluster with violation overviews and network management. In Kubernetes clusters it is very hard to keep track of things like intercommunication, container image content and so on.

As an example, I deployed a financial service application in my OpenShift cluster. It manages, transfers and provides financial data, such as credit card (VISA, Mastercard) information. We should check whether my deployment is secure enough to handle such critical data. With the Violations view in RHACS I can see what my container “contains”, as well as what is going on in it:

Oh boy, this is my “VISA credit card processor” container transferring critical payment information, and it has a lot going on: from fixable vulnerabilities, to netcat executions, to privileged access rights which could enable attackers to hijack my cluster. The Risk view confirms this:

My “visa-processor” is currently risk number one, and I should tackle it first. Why is that? Because the container image is not only risky in itself; there are also interconnections like services, routes, open ports, etc. attached to this container.

How can I check the current traffic and see whether an attacker is perhaps already in this container and has started the lateral movement stage? For this I can consult my Network Graph:

Here I can see the traffic in a great graphical display, and it does not look good from a security point of view. The visa-processor is talking to a jump host, which is absolutely not common in a container environment. Luckily, I can create Kubernetes-based network policies right here to close all unwanted communications. I do not need to install any proxies, firewalls or whatever; RHACS takes basic Kubernetes functions and makes them usable.
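The kind of network policy generated here could look roughly like the following sketch; the namespace, labels and port are assumptions for this example, not values from the demo cluster:

```yaml
# Sketch of a NetworkPolicy cutting off unwanted traffic to the
# visa-processor; namespace, labels and port are assumed for this example.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-visa-processor
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: visa-processor
  policyTypes:
    - Ingress
  ingress:
    # Only the payments gateway in the same namespace may talk to the
    # processor; everything else, e.g. a jump host, is dropped.
    - from:
        - podSelector:
            matchLabels:
              app: payments-gateway
      ports:
        - protocol: TCP
          port: 8443
```

Because this is a plain Kubernetes NetworkPolicy object, it is enforced by the cluster’s own CNI plugin; no extra agent or firewall appliance is needed.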

Conclusion

With these very simple examples you can see that multi-layered security approaches live on in the container and cloud native world. Don’t think that the old layers are obsolete! They are rather transformed, like many other things within IT. In your further IT security ventures you should always try to cover all attack vectors. You might have the latest next-gen firewall in front of your Kubernetes cluster, but within Kubernetes, a highly automated and scalable world, you need fitting security measures.

Also, do not underestimate the infamous Layer 8. Every person within IT, be it a developer or an operations person, wants to do their job. Security must be in place, but it must fit their working environment and cannot be an obstacle.
