Kong for your microservices


The Kong API Gateway [1] has been around for some time, evolving from an API management solution into an industry-standard tool for cloud-native environments. People choose Kong because it is easy to set up and operate, and because it can scale and be extended with plugins. I have been a Kong user and administrator for a while as a DevOps engineer, and I want to write down my experience with it before too many new things take over my head. This article focuses on a deployment strategy for Kong in a microservice architecture [2].

CONTENTS

  • Introduction
  • Kong's architecture
  • Kong for microservices
  • Common issues & what we can improve

INTRODUCTION

A quick introduction to Kong and the microservice architecture is a good way to get started.

Kong

We can summarize Kong as follows.
  • An open-source, cloud-native, fast, scalable, and distributed microservice abstraction layer. This means Kong can be deployed in containerized environments (e.g., Docker Swarm, AWS ECS, or Kubernetes) and delivers microservice APIs in a scalable and extensible way, so that the services do not have to expose themselves to the world.
  • Backed by the battle-tested NGINX [3] with a focus on high performance. Kong was built on top of NGINX and provides a high-level programming interface to extend its capabilities.
  • Actively developed and maintained [4]. The Kong project has about 100 developers maintaining it, with about 20 active contributors.

Microservice architecture

In [2], James Lewis and Martin Fowler put together a precise and comprehensive definition of the microservice architecture. In my humble opinion (IMHO), the excerpt below is essential to understanding the term.

"In short, the microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

A monolithic application built as a single unit. Enterprise Applications are often built in three main parts: a client-side user interface (consisting of HTML pages and javascript running in a browser on the user's machine) a database (consisting of many tables inserted into a common, and usually relational, database management system), and a server-side application. The server-side application will handle HTTP requests, execute domain logic, retrieve and update data from the database, and select and populate HTML views to be sent to the browser. This server-side application is a monolith - a single logical executable[2]. Any changes to the system involve building and deploying a new version of the server-side application."

Figure 1: Monoliths and Microservices (src: [2])

KONG'S ARCHITECTURE


Figure 2: Kong's architecture (src: [1])

It is not as complicated as I initially thought. As you can see in Figure 2, Kong's components are:
  • At its core, Kong is built upon the well-known high-performance reverse proxy and HTTP engine, NGINX.
  • OpenResty [5] is used on top of NGINX to extend its capabilities and to provide hooks into the API request and response lifecycle, as shown in Figure 2.
  • Kong uses Cassandra [10] or PostgreSQL [11] as its datastore for routes, upstreams, targets, etc., as well as plugin schemas.
  • You can build your own Kong plugins to manipulate requests and responses using the Lua programming language [6], following Kong's plugin structure as shown in [7].
  • Finally, like many other modern applications, Kong offers a collection of RESTful Admin APIs to operate it (jobs like adding new routes, new targets, etc.; see the short sketch below).
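For instance, you can inspect what Kong currently knows about by querying the Admin API (a minimal sketch; it assumes you are on a Kong node, where the Admin API listens on port 8001 by default):

# List the services, routes, and upstreams currently configured
$ curl -s http://localhost:8001/services
$ curl -s http://localhost:8001/routes
$ curl -s http://localhost:8001/upstreams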

KONG FOR MICROSERVICES

One of the most important features of Kong, IMHO, is that you can deploy multiple instances of Kong that all share the same database. That clustering architecture is a perfect match for cloud-native environments, where services can be scaled up and down dynamically based on the number of user requests. So, with Kong in place, your API services can serve anything from a couple of API requests to millions of them per minute (as in the API marketplace in which Kong was born). In this article, I present a deployment strategy of Kong for your microservices on the AWS ECS [12] cloud environment. Figure 3 illustrates a scenario in which Kong orchestrates the API requests and responses to and from the services in a microservice architecture.

Figure 3: Kong deployed in a microservice architecture
As shown in Figure 3, multiple Kong instances use the same database cluster. The workflow can be explained as follows:
  1. When each service instance (each service can have multiple instances) (*) is first deployed into the cluster, it has to register the following with Kong using Kong's Admin APIs (**) (a sketch of these calls appears after this list):
    • themselves as the target host,
    • the API endpoints they are serving (the routes),
    • the upstream which stands for each deployment of a service (this is great for A/B testing and enables rollback capability in a microservice architecture),
    • and the service which is an abstract object representing the service in Kong's point of view.
  2. The users request access to API endpoints (e.g., /api/v1/svc1/some-resource)
  3. The load balancer distributes incoming traffic across multiple Kong instances (***)
  4. Each one of the Kong instances distributes incoming traffic across application instances (instances of the services) that are healthy (see [17] for Kong's health checks and circuit breaker mechanism).
  5. The service instances validate the requests and respond back to Kong
  6. Kong then forwards the responses to the load balancer, which sends them to the users.
(*) I use AWS ECS as the platform to deploy my microservices, so the services here are ECS services. Each ECS service is a collection of tasks, and each task runs one or more containers of the application.
(**) This process could be embedded in the application or handled by the deployment process.
(***) We can implement request validations at the load balancer if needed.
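Here is a minimal sketch of the registration in step 1, using curl against Kong's Admin API (port 8001 by default). The names svc1 and svc1-upstream, the /health path, and the health-check intervals are assumptions for illustration:

# Create the upstream with active health checks (see [17])
$ curl -s -X POST http://<kong admin host>:8001/upstreams \
      --data "name=svc1-upstream" \
      --data "healthchecks.active.http_path=/health" \
      --data "healthchecks.active.healthy.interval=5" \
      --data "healthchecks.active.unhealthy.interval=5"

# Register this service instance as a target of the upstream
$ curl -s -X POST http://<kong admin host>:8001/upstreams/svc1-upstream/targets \
      --data "target=<instance ip>:<instance port>"

# Create the service object, pointing its host at the upstream name
$ curl -s -X POST http://<kong admin host>:8001/services \
      --data "name=svc1" --data "host=svc1-upstream"

# Create the route that exposes the service to users
$ curl -s -X POST http://<kong admin host>:8001/services/svc1/routes \
      --data "paths[]=/api/v1/svc1"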

Okay, enough talking, let's get your hands dirty with the deployment process. This process assumes that you have already deployed an AWS ECS cluster for your microservices, with all the needed VPC and subnet information. Your task is to deploy Kong into the same ECS cluster as your microservices.

BUILD KONG IMAGE

You can use the official Kong images [18], or, if you want to install something in the Kong image (e.g., your custom plugin, some packages, etc.), you can build it from a Dockerfile and push it to the AWS ECR repository:

Dockerfile:

# Extend the official Kong image
FROM kong:latest
# Install curl and your custom plugin (the plugin name is a placeholder)
RUN apk add --no-cache curl \
        && luarocks install your-custom-kong-plugin
# Ship your own entrypoint script
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh

docker-entrypoint.sh: see [19] for an example.

1. Run the build command

$ docker build --no-cache -t <your AWS ECR URL>/my-kong-img:latest -f Dockerfile .

2. Log in to the AWS ECR repository

$ $(aws ecr get-login --no-include-email)

3. Push the image to the AWS ECR repository

$ docker push <your AWS ECR URL>/my-kong-img:latest

PREPARE A DATABASE FOR KONG

You can manually create an AWS Aurora RDS PostgreSQL cluster and a database for Kong in the AWS console, or you can use Terraform [20] to do that. I will not cover provisioning the cluster itself in this article.
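Once the cluster is up, creating Kong's user and database can be as simple as the following (a sketch; the endpoint, user, and password are placeholders, and your entrypoint or a one-off task still has to run "kong migrations bootstrap" once before the first start):

$ psql -h <your aurora endpoint> -U <master user> -c "CREATE USER kong WITH PASSWORD '<a password>';"
$ psql -h <your aurora endpoint> -U <master user> -c "CREATE DATABASE kong OWNER kong;"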

DEPLOY KONG ON THE AWS CLOUD

We are going to use Terraform to deploy Kong into the same AWS ECS cluster as mentioned before, using the Terraform scripts I wrote in [13] to accomplish the task. Please look through the scripts carefully before proceeding with the deployment. The Terraform scripts will provision these resources for you:
  • An AWS ECS cluster
  • AWS ECS tasks for Kong
  • AWS autoscaling groups for Kong ECS tasks
  • AWS service discovery for Kong
  • AWS security groups for accessing Kong
  • An AWS network load balancer for Kong
  • An AWS application load balancer for Kong
1. Prerequisites
  • Prepare an AWS IAM account that has permission to create/update the resource you want to deploy
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
$ export AWS_DEFAULT_REGION="us-east-1"
  • Install Terraform: read [21]
  • Create an S3 bucket to store Terraform's state (see the init sketch after this list)
  • Prepare these resources:
    • app_image: Kong's docker image URL
    • public_subnet_ids: Public subnet IDs
    • private_subnet_ids: Private subnet IDs
    • vpc_id: the VPC's ID in which you will provision Kong
    • cidr_block: VPC's CIDR
    • task_role_arn: The IAM role that tasks can use to make API requests to authorized AWS services
    • execution_role_arn: This IAM role is required by Fargate tasks to pull container images and publish container logs to Amazon CloudWatch on your behalf
    • aws_certificate_arn: AWS certificate's ARN
    • aws_acc_id: your AWS account's ID
    • a subdomain for Kong
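If the Kong module declares a partial S3 backend (an assumption; check the scripts in [13] and adjust to how state is actually configured there), you can point Terraform at your state bucket at init time, in place of the plain "terraform init" in step 2:

$ terraform init \
      -backend-config="bucket=<your terraform state bucket>" \
      -backend-config="key=kong/terraform.tfstate" \
      -backend-config="region=us-east-1"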
2. Provision Kong

$ git clone https://github.com/dangtrinhnt/useful-terraform-modules
$ cd useful-terraform-modules/modules/kong
$ terraform init
$ terraform plan -var="public_subnet_ids=<your public subnet ids>" \
                 -var="private_subnet_ids=<your private subnet ids>" \
                 -var="vpc_id=<your vpc id>" \
                 -var="cidr_block=<your vpc cidr>" \
                 -var="app_image=<your AWS ECR URL>/my-kong-img:latest" \
                 -var="task_role_arn=<the task role arn>" \
                 -var="execution_role_arn=<the execution role arn>" \
                 -var="aws_certificate_arn=<your aws ssl cert arn>" \
                 -var="aws_acc_id=<your aws acc id>" \
                 -var="domain_name=<your aws hosted domain>"
$ terraform apply -var="public_subnet_ids=<your public subnet ids>" \
                 -var="private_subnet_ids=<your private subnet ids>" \
                 -var="vpc_id=<your vpc id>" \
                 -var="cidr_block=<your vpc cidr>" \
                 -var="app_image=<your AWS ECR URL>/my-kong-img:latest" \
                 -var="task_role_arn=<the task role arn>" \
                 -var="execution_role_arn=<the execution role arn>" \
                 -var="aws_certificate_arn=<your aws ssl cert arn>" \
                 -var="aws_acc_id=<your aws acc id>" \
                 -var="domain_name=<your aws hosted domain>"

And after a couple of minutes, your Kong instances will be up and running, ready to serve your APIs.
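You can verify that the instances are healthy by hitting Kong's status endpoint on the Admin API (a sketch; the admin host placeholder depends on how you exposed the Admin API in your setup):

$ curl -i http://<kong admin host>:8001/status

A node with no routes configured yet should also answer on the proxy port with an HTTP 404 and a "no Route matched" message, which at least confirms that Kong is reachable behind the load balancer.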

COMMON ISSUES & WHAT WE CAN IMPROVE

Sometimes it is hard to troubleshoot API issues. For example, when a "failure to get a peer from the ring-balancer" issue occurs (see [14] for an example), I always had to redeploy the API services to make them work. At first, it was not clear whether the problem was in the API services or whether Kong had trouble handling multiple targets of the API service. It turned out that there was a connectivity problem: the Kong cluster could not communicate with the targets properly. I tuned Kong's configuration a little, redeployed the Kong cluster, and it worked.
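Before redeploying anything, it is worth asking Kong what it thinks of the targets first. The Admin API reports the health of every target in an upstream (a sketch; "svc1-upstream" is the placeholder upstream name from the registration example above):

$ curl -s http://<kong admin host>:8001/upstreams/svc1-upstream/health

Targets reported as UNHEALTHY or DNS_ERROR point to connectivity or health-check problems on Kong's side rather than bugs in the services themselves.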


Secondly, Kong only handles external requests and responses to and from the API servers, not the inter-service communication. So, we have to implement logic in the services themselves for things like rate limiting or circuit breakers on inter-service calls, even though those features can be handed over to Kong for external traffic. This is reasonable, because Kong was built solely as an API gateway, not a service mesh solution.
However, most microservices implementations look more like meshes over time, due to the increasing need for new features that require more complex inter-service communication. Recently, the company behind the Kong API Gateway released a new service mesh solution called Kuma [8], based on the Envoy proxy [9]. But it is quite new and needs more time for testing and industry adoption before we can trust it 100%. IMHO, we should use a more mature product such as Istio [15] or Linkerd [16]. Also, Kong needs to be updated to work smoothly with those service mesh solutions, since there are some overlapping features (e.g., end-user authentication).


Okay, that is my two cents, based mostly on my experience working with Kong. Please let me know in the comment section below if you find any errors or want to propose alternative approaches for deploying Kong in a microservice architecture.

References:

[1] https://konghq.com/
[2] https://martinfowler.com/articles/microservices.html
[3] https://www.nginx.com/
[4] https://github.com/Kong/kong
[5] https://openresty.org/en/
[6] https://www.lua.org/
[7] https://docs.konghq.com/2.0.x/plugin-development/
[8] https://kuma.io/
[9] https://www.envoyproxy.io/
[10] https://cassandra.apache.org/
[11] https://www.postgresql.org/
[12] https://aws.amazon.com/ecs/
[13] https://github.com/dangtrinhnt/useful-terraform-modules/tree/master/modules/kong
[14] https://github.com/Kong/kong/issues/4778
[15] https://istio.io/
[16] https://linkerd.io/
[17] https://docs.konghq.com/2.0.x/health-checks-circuit-breakers/
[18] https://hub.docker.com/_/kong
[19] https://github.com/Kong/docker-kong/blob/master/alpine/docker-entrypoint.sh
[20] https://www.terraform.io/docs/providers/postgresql/r/postgresql_database.html
[21] https://learn.hashicorp.com/terraform/getting-started/install.html#install-terraform
