Deploy services and load balance them with the Nginx Ingress Controller


This tutorial covers:

– Create an EKS cluster with 3 nodes in 2 regions with Terraform
– Create a StatefulSet and a headless PostgreSQL service in EKS
– Create 2 Nginx servers in EKS and connect them to PostgreSQL
– Create an Nginx Ingress Controller as a load balancer in front of both Nginx servers

Note: You can find the source code here:

1. Create an EKS cluster with 3 nodes in 2 regions with Terraform

I will skip some basic steps such as installing the AWS CLI and Terraform, running aws configure, etc.

1.1. Create Workspace

– Create a folder

$ mkdir <folder-name> && cd $_
$ mkdir terraform-practice02 && cd $_
~/terraform-practice02$

1.2. Create the .tf files that deploy EKS and its dependencies

The configuration is split across several files:

– One provisions a VPC, subnets and availability zones using the AWS VPC Module. A new VPC is created for this tutorial so it doesn’t impact your existing cloud environment and resources.
– One provisions the security groups used by the EKS cluster.
– One provisions all the resources (Auto Scaling groups, etc.) required to set up an EKS cluster using the AWS EKS Module.
– One defines the output configuration.
– One sets the Terraform version to at least 0.14, along with versions for the providers used in this sample.
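A minimal main.tf for this setup might look like the following. This is an illustrative sketch only: the module versions, CIDR ranges, instance type, and resource names are assumptions, not the exact files from this tutorial.

```hcl
# Sketch only -- versions, names, and CIDRs are assumptions.
provider "aws" {
  region = var.region # e.g. ap-southeast-1
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = "tutorial-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["ap-southeast-1a", "ap-southeast-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24"]

  enable_nat_gateway = true
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name = "tutorial-eks"
  subnets      = module.vpc.private_subnets
  vpc_id       = module.vpc.vpc_id

  worker_groups = [
    {
      instance_type        = "t2.small"
      asg_desired_capacity = 3 # the 3 nodes mentioned above
    },
  ]
}
```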

1.3. Initialize Terraform workspace

Once you have cloned the repository, initialize your Terraform workspace, which will download and configure the providers.
$ terraform init

1.4. Provision the EKS cluster

In your initialized directory, run terraform apply and review the planned actions. Your terminal output should indicate the plan is running and what resources will be created.
$ terraform apply --auto-approve

This terraform apply will provision a total of 54 resources (VPC, security groups, Auto Scaling groups, EKS cluster, etc.).

This process should take approximately 10 minutes. Upon success, your terminal prints the outputs defined in your configuration.
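The region and cluster_name values consumed by `terraform output -raw` in the next step are produced by output blocks along these lines (a sketch; the exact value expressions depend on your module version):

```hcl
# Sketch of the outputs used by the kubectl configuration step below.
output "region" {
  description = "AWS region of the cluster"
  value       = var.region
}

output "cluster_name" {
  description = "Name of the EKS cluster"
  value       = module.eks.cluster_id # attribute name is an assumption
}
```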

1.5. Configure kubectl

Now that you’ve provisioned your EKS cluster, you need to configure kubectl.
Run the following command to retrieve the access credentials for your cluster and automatically configure kubectl:
$ aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)
Added new context arn:aws:eks:ap-southeast-1:822594955320:cluster/nhutpm-eks-rILif to /home/nhutpm/.kube/config

The Kubernetes cluster name and region correspond to the output values shown after the successful Terraform run.

1.6. (Optional) Deploy and access Kubernetes Dashboard

If you want to verify via a UI that your cluster is configured correctly and running, you can deploy the Kubernetes dashboard and open it in your local browser.
While you can deploy the Kubernetes metrics server and dashboard using Terraform, kubectl is used in this tutorial so you don’t need to configure your Terraform Kubernetes Provider.

– Deploy Kubernetes Metrics Server

The Kubernetes Metrics Server, which gathers metrics such as cluster CPU and memory usage over time, is not deployed by default in EKS clusters.
Download and unzip the metrics server by running the following command:
$ wget -O v0.3.6.tar.gz && tar -xzf v0.3.6.tar.gz

Deploy the metrics server to the cluster by running the following command.
$ kubectl apply -f metrics-server-0.3.6/deploy/1.8+/

Verify that the metrics server has been deployed. If successful, you should see something like this.
$ kubectl get deployment metrics-server -n kube-system

– Deploy Kubernetes Dashboard

The following command schedules the resources necessary for the dashboard:
$ kubectl apply -f

Now, create a proxy server that will allow you to navigate to the dashboard from the browser on your local machine. This will continue running until you stop the process by pressing Ctrl+C.
$ kubectl proxy

You should be able to access the Kubernetes dashboard here
( ).

– Authenticate the dashboard

To use the Kubernetes dashboard, you need to create a ClusterRoleBinding and provide an authorization token. This grants the cluster-admin permission needed to access the dashboard. Authenticating using kubeconfig is not an option.
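The ClusterRoleBinding applied in the next command typically looks like the following. The service account name here is an assumption inferred from the token lookup below (which greps for service-controller-token); adjust it to match your manifest.

```yaml
# Sketch: binds cluster-admin to the service account used for dashboard login.
# The "service-controller" name is an assumption, not from this tutorial's repo.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: service-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: service-controller
    namespace: kube-system
```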

In another terminal (do not close the kubectl proxy process), create the ClusterRoleBinding resource
$ kubectl apply -f

Then, generate the authorization token
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep service-controller-token | awk '{print $1}')

Select “Token” on the Dashboard UI then copy and paste the entire token you receive into the dashboard authentication screen to sign in. You are now signed in to the dashboard for your Kubernetes cluster.

Navigate to the “Cluster” page by clicking on “Cluster” in the left navigation bar. You should see a list of nodes in your cluster.

2. Create Statefulset and Headless Postgresql service in EKS

2.1. Create a secret for Postgresql and Nginx

$ kubectl create secret generic <secret-name> --from-literal=<key>=<password>
$ kubectl create secret generic postgresql-secrets --from-literal=password=admin123
secret/postgresql-secrets created

2.2. Create statefulset and headless Postgresql service

$ kubectl apply -f postgres-headless-statefulset.yaml
service/postgresql-nginx created
statefulset.apps/postgresql-nginx created
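The manifest applied above pairs a headless Service (clusterIP: None) with a StatefulSet so that pods get stable DNS names. A minimal sketch, assuming a stock postgres image and the postgresql-secrets secret created in step 2.1 (container details are assumptions, only the resource names come from the output above):

```yaml
# Sketch of postgres-headless-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgresql-nginx
spec:
  clusterIP: None          # headless: DNS resolves directly to pod IPs
  selector:
    app: postgresql-nginx
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-nginx
spec:
  serviceName: postgresql-nginx
  replicas: 1
  selector:
    matchLabels:
      app: postgresql-nginx
  template:
    metadata:
      labels:
        app: postgresql-nginx
    spec:
      containers:
        - name: postgresql
          image: postgres:13   # assumed image/tag
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-secrets  # from step 2.1
                  key: password
```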

2.3. Create 2 Nginx servers linked to the PostgreSQL database

$ kubectl apply -f postgres-nginx.yaml
service/nginx01 created
deployment.apps/nginx01 created

$ kubectl apply -f postgres-nginx02.yaml
service/nginx02 created
deployment.apps/nginx02 created
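Each web server is a plain Deployment plus a ClusterIP Service. A sketch of postgres-nginx.yaml under the same assumptions (postgres-nginx02.yaml is identical with nginx02 substituted; the environment variable names are illustrative, only the resource names come from the output above):

```yaml
# Sketch of postgres-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx01
spec:
  selector:
    app: nginx01
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx01
  template:
    metadata:
      labels:
        app: nginx01
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          env:
            # Reach Postgres through the headless service's DNS name.
            - name: DB_HOST
              value: postgresql-nginx.default.svc.cluster.local
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-secrets  # from step 2.1
                  key: password
```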

– Verify:
$ kubectl get svc

2.4. Create Nginx Ingress Controller

$ kubectl apply -f nginx-ingress-controller.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created

– Verify:
$ kubectl get svc --namespace=ingress-nginx

2.5. Create ingress from Nginx Ingress Controller to both Nginx web servers

$ kubectl apply -f ingress.yml
Output: created
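An ingress.yml that fans traffic out to both services could look like the following. The paths and the ingress name are assumptions; the ingressClassName must match the controller deployed in step 2.4.

```yaml
# Sketch of ingress.yml -- routes two path prefixes to the two services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress   # assumed name
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /nginx01
            pathType: Prefix
            backend:
              service:
                name: nginx01
                port:
                  number: 80
          - path: /nginx02
            pathType: Prefix
            backend:
              service:
                name: nginx02
                port:
                  number: 80
```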

2.6. Access the EXTERNAL-IP from step 2.4

The End!