Kong for your microservices

The Kong API Gateway [1] has been around for some time, evolving from an API management solution into an industry-standard tool for cloud-native environments. People choose Kong because it is easy to set up and operate, and because it scales well and can be extended with plugins. I have been a Kong user and administrator for a while as a DevOps engineer, and I want to write down my experience with it before too many new things push it out of my head. This article focuses on a deployment strategy for Kong in a microservice architecture [2].
Contents: Introduction · Kong's architecture · Kong for microservices · Common issues & what we can improve

Introduction

A quick introduction to Kong and the microservice architecture is a great way to get started.

Kong

We can summarize Kong as follows:

"Open-source, cloud-native, fast, scalable, and distributed Microservice Abstraction Layer." It means that Kong can be deployed in containerized environments (e.g., Docke…

Delete all Kong targets using bash

Sometimes I just want to delete all the Kong targets and redeploy all the APIs to troubleshoot an issue, so I wrote the bash script below:
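The script body was cut from this excerpt, so here is a minimal sketch of the idea — not the original script. It assumes the Kong Admin API is reachable at http://localhost:8001 (override with KONG_ADMIN), that jq is installed, and the function name is my own:

```shell
# Sketch: loop over every upstream and delete each of its targets via the
# Kong Admin API (DELETE /upstreams/{upstream}/targets/{target}).
# Assumptions: Admin API at $KONG_ADMIN (default http://localhost:8001), jq installed.
delete_all_kong_targets() {
  local admin="${KONG_ADMIN:-http://localhost:8001}"
  local upstream target
  for upstream in $(curl -s "$admin/upstreams" | jq -r '.data[].id'); do
    for target in $(curl -s "$admin/upstreams/$upstream/targets" | jq -r '.data[].id'); do
      echo "deleting target $target from upstream $upstream"
      curl -s -X DELETE "$admin/upstreams/$upstream/targets/$target" > /dev/null
    done
  done
}
```

For brevity, the sketch ignores pagination on the Admin API list endpoints; on a large installation you would follow the `next` links.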


Scale multiple ECS services at once

You can use the following bash script I wrote to scale multiple ECS services at once. You will need:

- AWS CLI [1]
- An IAM account that has permission to update or scale ECS services
- A text file that contains all the ECS service names, one per line. For example:


./ <cluster name> <path to the ECS service names text file> <desired count, e.g., 0>
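The script body was also cut from this excerpt; a minimal sketch matching the usage above could look like this (the function name is mine, and it assumes the AWS CLI is configured with credentials allowed to call ecs:UpdateService):

```shell
# Sketch: scale every ECS service listed in a file (one name per line)
# to the same desired count.
scale_ecs_services() {
  local cluster="$1" services_file="$2" desired_count="$3" service
  while IFS= read -r service; do
    [ -z "$service" ] && continue   # skip blank lines
    echo "scaling $service to $desired_count"
    aws ecs update-service \
      --cluster "$cluster" \
      --service "$service" \
      --desired-count "$desired_count" > /dev/null
  done < "$services_file"
}
```

For example, `scale_ecs_services my-cluster services.txt 0` scales every listed service down to zero tasks.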



Create a SOCKS proxy to a private network

Last week, I wrote a bash script that creates a SOCKS proxy connecting my computer to a private network via a bastion server (a server sitting inside the private network that I can SSH into using a .pem key).


./ /path/to/sshkey.pem <bastion_username> <bastion_address>

It will output the SOCKS proxy address, for example: socks5://localhost:13000
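The script itself was cut from this excerpt; a minimal sketch of the idea, mirroring the usage above, could be the following (the function name and local port 13000 are my assumptions; `ssh -D` is what does the real work — dynamic, application-level SOCKS forwarding):

```shell
# Sketch: open a SOCKS proxy to a private network through a bastion host.
# Usage: socks_proxy /path/to/sshkey.pem <bastion_username> <bastion_address> [port]
socks_proxy() {
  local key="$1" user="$2" host="$3" port="${4:-13000}"
  echo "socks5://localhost:${port}"
  # -D: dynamic SOCKS forwarding on the local port; -N: no remote command
  ssh -i "$key" -D "$port" -N "$user@$host"
}
```

Point your browser or curl (`curl --proxy socks5://localhost:13000 …`) at the printed address while the ssh session stays open.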

Add MetalLB to MicroK8S

There is a question that pops up in my head every time I work with Kubernetes [1]: "Why the hell does it not implement a network load balancer?" It does have network load balancers, but they are tied to public cloud providers (e.g., AWS, GCP, etc.). What if I want to run Kubernetes clusters in my private clouds, or even on my bare-metal infrastructure? Fortunately, I found MetalLB [2].

"MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers." ~ MetalLB documentation.

So, whenever I spin up a new Kubernetes cluster on my bare-metal infrastructure (or for my MicroK8S [4] clusters), I normally deploy MetalLB with a Layer 2 configuration, as depicted in figure 1. There are also other configurations, such as BGP, automatic IP assign…
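For MicroK8S specifically, the MetalLB addon takes the Layer 2 address pool right in the enable command. The address range below is an illustrative assumption — pick IPs that are unused on your LAN:

```shell
# Enable the MetalLB addon on MicroK8s with a Layer 2 address pool
# (example range; adjust to a free block on your network).
microk8s enable metallb:192.168.1.240-192.168.1.250
```

After that, any Service of type LoadBalancer gets an external IP from this pool.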

Get a random available port on your *nix machine

For example, if I want to get a random available port from the 3000-3999 pool, I run the following command in the terminal:

comm -23 <(seq 3000 3999 | sort) <(ss -tan | awk '{print $4}' | awk -F':' '{print $NF}' | sort -u) | shuf | head -n 1

Here, comm -23 keeps only the lines unique to the first input, i.e., the candidate ports minus the in-use ports reported by ss (taking the last colon-separated field so IPv6 addresses are handled too), and shuf picks one of the remaining ports at random.

Manipulate your yaml file with yq

I like to play with the bash shell, especially when I have to manipulate template files of some sort on the go (dynamically). yq is a great tool I just discovered that can help me generate a SAM [1] template.yaml file based on some business logic. The great thing about yq is that, with the help of Docker, I don't even have to install it to run it. So, I add this to my bash script:

yq(){ docker run --rm -i -v "${PWD}":/workdir mikefarah/yq yq "$@"; }

and then I can use yq as if I had installed it locally (note: newer mikefarah/yq images set yq as the container entrypoint, so the extra yq argument may need to be dropped), for example: