There are many concerns about using Web Application Firewalls (WAFs) in Kubernetes, including the maintenance burden that comes with signature-based WAFs, increased latency, and false positives.
Yet, in our LinkedIn poll, 196 out of 250 DevOps, DevSecOps, and application security engineers said they would use a WAF in Kubernetes. This alone highlights its importance.
Read this article to understand how choosing the right WAF can address these concerns while protecting your web application against attacks.
Let’s start with a brief definition of Kubernetes and how it works.
Defining Kubernetes and Its Purpose
Before the development of Kubernetes, application development and deployment in cloud environments was often a manual and error-prone process. To deploy and run their applications, developers had to manually configure and manage infrastructure resources such as virtual machines, load balancers, and databases. This process was time-consuming, complex, and required deep expertise in managing infrastructure.
Several applications and deployment tools were developed over the years to address these challenges, including tools like the following:
Puppet
Chef
Ansible
Docker
These tools allowed developers to automate the deployment and configuration of applications and infrastructure resources, simplifying application deployment and management in cloud environments.
However, these tools had their limitations and challenges. For example, they could not manage and orchestrate containerized applications at scale, leading to portability, scalability, and reliability issues. This led to the development of container orchestration platforms like Kubernetes.
Developed by Google, Kubernetes (K8s) is an open-source container orchestration platform that simplifies the deployment, scaling, and management of containerized apps. It helps you add app features in your clusters without changing the source code in the repository, and it is released under the Apache License 2.0, so you can freely use, modify, distribute, and sublicense it.
One of its most outstanding features is its ability to intelligently automate rollouts and rollbacks without killing your instances. During a rolling update, it incrementally replaces Pods running your app's old version with new ones, keeping enough Pods available to avoid downtime. Moreover, by automating such tasks, Kubernetes makes it easy to scale your application, speed up delivery, and keep your code functional.
In addition, Kubernetes assigns a unique IP address to each Pod and a DNS name to each set of Pods exposed through a Service. This simplifies communication between Pods, improves load balancing and Pod security, and makes it easy to move Pods between nodes in a cluster.
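As a minimal sketch of these building blocks (all names, images, and values below are illustrative placeholders, not taken from this article), a Deployment with a rolling-update strategy and a Service that gives its Pods a stable DNS name could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep every old Pod serving until a new one is Ready
      maxSurge: 1         # add at most one extra Pod during the rollout
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app   # reachable in-cluster as demo-app.<namespace>.svc
spec:
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 80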
Why Should You Use a WAF in Kubernetes?
As mentioned earlier, of the 250 DevOps and application security engineers who responded to our LinkedIn poll, 196 would consider using a WAF for Kubernetes. When the remaining respondents were asked why they wouldn't use a WAF, here are the reasons we got:
"Kubernetes provides just enough security to web applications."
“Most WAFs are difficult to maintain, increase latency, and generate false positives, so we prefer using standalone or native solutions to secure our app and cover the other functions that a WAF offers.”
"We prefer to develop a specific application security framework for our app."
"Why should we pay for a WAF when we can use defense-in-depth computing to protect against attacks."
Note: Defense-in-depth is a security approach in which multiple independent layers of security controls are built into an application during development so that, if one layer fails, the others still protect its data.
Admittedly, some security and efficiency tasks, such as rate limiting, load balancing, and scanning requests for SQLi and XSS, can be configured in the Ingress (see the brief example after this list) or outsourced to proprietary software vendors. However, these options alone may not provide adequate protection for your web application, leaving it exposed to issues such as:
Data Theft
Malware Injection
Ransomware
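For example, the NGINX Ingress Controller (ingress-nginx) supports basic rate limiting through annotations on an Ingress resource; a minimal sketch (host, service name, and limit values are placeholders) could look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"         # ~10 requests per second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "5"  # max concurrent connections per client IP
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80

Annotations like these handle rate limiting only; they do not inspect request payloads for SQLi or XSS, which is exactly the gap a WAF is meant to close.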
Are you looking for a way to block attacks on your web application before they happen? Look no further: open-appsec uses machine learning to continuously detect and preemptively block threats before they can do any damage. Its code is published on GitHub, and its effectiveness has been demonstrated in numerous third-party tests. Try open-appsec in the Playground today!
Below are some reasons why using a WAF in Kubernetes is worthwhile.
Protects against zero-days: Many WAFs are notorious for inadequate protection against emerging web attacks. On the other hand, some WAFs can sniff out and block zero-day attacks before they cause harm. One such WAF is open-appsec; it uses a machine learning-based approach that monitors your app's user behavior to identify attacks. Other WAFs that can be used in Kubernetes include AWS WAF, ModSecurity, and NGINX App Protect. If you are still unsure, check out the report on how open-appsec preemptively blocked zero-days such as Log4Shell, Spring4Shell, Text4Shell, and a Claroty Team82 JSON-based SQL injection.
Reduces app developer workload: While some web developers have application security skills, these rarely compare to those of a team dedicated to securing web applications through their WAF; offloading protection to the WAF lets developers focus on building the app.
Cost-effective: Admittedly, some WAFs can be a financial burden, but you can choose open-source WAFs such as NGINX NAXSI, open-appsec (whose lower cost can help avoid the large financial impact of successful attacks), IronBee, or Vulture. Read on to learn how to install open-appsec in the K8s NGINX Ingress Controller.
How to Install open-appsec WAF in Kubernetes NGINX Ingress Controller?
open-appsec WAF can be integrated with the NGINX Ingress Controller to protect web applications and their APIs in a Kubernetes environment. The NGINX Ingress Controller also acts as a secure HTTPS load balancer for containerized apps in Kubernetes.
In this guide, we'll use an interactive Command Line Interface (CLI) tool to install open-appsec WAF in Kubernetes NGINX Ingress Controller because it's fast and easy to deploy and configure.
Prerequisites
A Kubernetes cluster, version 1.16.0 or later
Basic understanding of how Kubernetes, Ingress Controller, and NGINX Ingress work
Role-based Access Control with admin permissions enabled
Install the kubectl command line to manage Kubernetes clusters and run commands against them
Install the wget command-line tool to download the installer
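You can quickly confirm the last few prerequisites with standard commands (the RBAC check below simply asks whether your user may create resources cluster-wide):

kubectl version --client
kubectl auth can-i create deployments --all-namespaces
wget --version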
Installation Process
Download the installer (currently Linux-only; macOS support is coming soon) and make it executable with the following command:
wget https://downloads.openappsec.io/open-appsec-k8s-install && chmod +x ./open-appsec-k8s-install
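Once downloaded, start the installer by executing the binary; we assume here that running it with no arguments launches the interactive mode described in the steps below:

./open-appsec-k8s-install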
The interactive installer has three steps:
Step 1: Ingress
The installer will present the available Kubernetes Ingresses in the cluster and suggest two options:
1) Duplicate an existing Ingress and add open-appsec to it. This option allows you to test that all services are properly accessible via the new Ingress while the existing Ingress is up and running without worrying about traffic disruption.
2) Add open-appsec to an existing Ingress resource. This approach is good for a lab, staging, or non-critical production environment.
Note: In the current implementation, the installer will only show existing Ingress resources whose Ingress class name starts with "nginx". If your Ingress class name does not meet this requirement, you can either rename it or install open-appsec using Helm (without the tool).
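You can check the class of each existing Ingress with a standard kubectl query (the custom-columns output below is just one convenient way to display it):

kubectl get ingress -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.ingressClassName'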
In both cases, the installer will automatically add the required annotation linking the open-appsec policy to the Ingress resource, and it will also change the Ingress class of the chosen Ingress (either the copy or the existing resource, depending on your choice above) to point to the new NGINX Ingress Controller with open-appsec integration.
Step 2: Policy
The installer will display the default policy and allow you to change it if you wish. When saving, you will be asked whether to save the settings as a manifest (YAML) or Helm chart.
Note: The default-best-practice-policy will:
Inspect all traffic against the Ingress rules (paths)/routes and learn them.
Detect suspicious requests with high or critical confidence.
In prevent-learn mode, respond with an HTTP 403 Forbidden error to the client that sent the malicious request.
Log to stdout, so you can use fluentd/fluent-bit to send logs to ELK or another collector.
Step 3: Apply Configuration
The installation tool will list the commands to run to complete the installation and apply the configuration (a rough illustration follows the list below). The configuration resides in three files:
open-appsec helm chart for NGINX Ingress Controller or Kong (CRDs and other necessary files)
ingress.yaml - manifest created by the installer per your selections in Step 1
open-appsec-policy.yaml - manifest created by the installer per your selections in Step 2
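The exact commands are printed by the tool, so the following is only a rough sketch, assuming the generated file names listed above and that the installer has already deployed the Helm chart:

kubectl apply -f ingress.yaml
kubectl apply -f open-appsec-policy.yaml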
You can run the commands now or later. If you run them, congratulations - open-appsec is installed and working!
Post-Install
Point your DNS to the Duplicated Ingress (skip if you chose the existing Ingress in Step 1 above)
After testing that your services are reachable, you can point your DNS to the new Ingress.
If a problem occurs, you can at any time either switch open-appsec off while keeping the same Ingress configuration or point your DNS back.
You can identify the IP address of the new Ingress by running the following:
kubectl get ing -A
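The new Ingress's IP appears in the ADDRESS column; the output below is purely illustrative, with placeholder names and addresses:

NAMESPACE   NAME                CLASS   HOSTS              ADDRESS        PORTS   AGE
default     demo-ingress-copy   nginx   demo.example.com   203.0.113.10   80      5m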
Configuration Changes
You can make policy changes, define exceptions, and apply other advanced configurations in one of the following three ways:
By running the interactive configuration tool: open-appsec-cli
By using open-appsec K8S custom resources
By using the WebUI
Note: For production use, you may want to switch from the Basic machine learning model to the more accurate Advanced model.
Conclusion
The importance of a web application firewall in Kubernetes cannot be overstated, and while your concerns about using a WAF in Kubernetes are valid, choosing a preemptive WAF like open-appsec can address them while keeping your application protected.
Try open-appsec in the Playground today.
FAQ
What's the difference between WAF and IPS?
A WAF's role is to protect the web application server against web-based attacks at the application layer. In contrast, an IPS protects the entire network (or a segment of it) in front of it against all types of malicious attacks, not just web-based ones. Because of this broader scope, an IPS is generally more expensive than a WAF.
How can I tell if a site is using WAF?
Here are three ways to determine if a website uses a WAF:
Check the HTTP response headers: Look for headers such as "X-Web-Application-Firewall" or "X-ASPNET-Firewall" (see the curl example after this list). Note: This method is not universally reliable; different WAF implementations may use different header names, and some WAFs add no such headers at all.
Conduct a vulnerability scan: These scanners can detect server response patterns indicative of WAFs.
Try bypassing the WAF: Attempt common web application attacks like SQLi, cross-site scripting (XSS), or directory traversal. If a WAF protects the website, you may receive an error message indicating the attack was blocked.
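As a quick illustration of the header check, you can inspect response headers with curl (the domain below is a placeholder, and the header names you should look for vary by WAF vendor):

curl -sI https://www.example.com | grep -iE 'server|waf|firewall'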
What does a WAF not protect against?
While a WAF is a powerful security tool for protecting web applications from attacks at layer 7 of the OSI model, it may not protect against certain types of attacks, including the following:
Insider Threats
Network-Based Attacks
Social Engineering Attacks