Setting Up an EKS Cluster with Terraform

Introduction to AWS EKS

Amazon Elastic Kubernetes Service (EKS) is a managed service that simplifies Kubernetes cluster management. It automates tasks like scaling, patching, and upgrades, allowing developers to focus on their applications. In this guide, we’ll use Terraform modules to efficiently create an EKS cluster.

Why Use Terraform for EKS Cluster Creation?

Terraform allows us to manage infrastructure as code (IaC), ensuring consistency, scalability, and automation. It simplifies complex tasks such as provisioning EKS clusters, configuring VPCs, and managing node groups. Here are some benefits of using Terraform:

  • Repeatable infrastructure setup across environments.
  • Seamless scaling and cost optimization.
  • Collaborative human-readable code for infrastructure management.

Prerequisites for EKS Setup

Before setting up an EKS cluster, you need an AWS account with credentials and the following tools installed:

  • Terraform
  • AWS CLI
  • kubectl

Step-by-Step Guide to Create an EKS Cluster with Terraform

1. Install Terraform

Download and install Terraform by visiting the Terraform website. Once installed, check the version by running:

terraform -v

2. Install AWS CLI

Install the AWS CLI following the official guide. After installation, configure it with your AWS credentials:

aws configure
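aws configure prompts for an access key ID, secret access key, default region, and output format. To confirm the credentials are valid before proceeding, you can query the caller identity:

```shell
# Returns the account ID, user ID, and ARN for the active credentials
aws sts get-caller-identity
```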

3. Install kubectl

kubectl is the command-line tool for managing Kubernetes clusters. Install a version compatible with your cluster, then verify the client version:

kubectl version --client
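For example, on Linux x86_64 the latest stable client can be installed as follows (see the official Kubernetes documentation for other platforms):

```shell
# Download the latest stable kubectl release for Linux x86_64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install the binary into a directory on PATH
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```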

4. Create the AWS VPC Using Terraform

Next, define the AWS Virtual Private Cloud (VPC) using the Terraform VPC module:


module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.0.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-west-2a", "us-west-2b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.3.0/24", "10.0.4.0/24"]
}
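The VPC and EKS modules assume an AWS provider is configured in the root module. A minimal sketch, with the region chosen to match the availability zones used above:

```hcl
provider "aws" {
  # Must match the region of the availability zones used in the VPC module
  region = "us-west-2"
}
```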

5. Create the EKS Cluster Using Terraform

After provisioning the VPC, use the Terraform EKS module to create the cluster:


module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  # Pin to v17.x: later releases renamed several inputs (e.g. subnets -> subnet_ids)
  version = "~> 17.0"

  cluster_name    = "my-cluster"
  cluster_version = "1.21"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  enable_irsa     = true
  manage_aws_auth = true
}

6. Deploy Managed Node Groups

With the terraform-aws-modules/eks module (v17.x), managed node groups are declared through the module's node_groups input rather than a standalone module call, so extend the eks module from the previous step:


module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.0"

  cluster_name    = "my-cluster"
  cluster_version = "1.21"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  enable_irsa     = true
  manage_aws_auth = true

  node_groups = {
    worker_group = {
      desired_capacity = 3
      max_capacity     = 5
      min_capacity     = 1
      instance_types   = ["t3.medium"]
    }
  }
}

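With the VPC, cluster, and node group configuration in place, provision everything with the standard Terraform workflow (this requires valid AWS credentials; cluster creation typically takes 15–20 minutes):

```shell
# Download the AWS provider and the referenced modules
terraform init

# Review the execution plan before making changes
terraform plan

# Create the VPC, EKS cluster, and node groups
terraform apply
```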
Connecting to EKS with kubectl

1. Update kubeconfig

After the cluster is created, update your kubeconfig so kubectl can reach it, using the region and cluster name from the Terraform configuration:

aws eks --region us-west-2 update-kubeconfig --name my-cluster

2. Verify Cluster Nodes

Ensure your nodes are up and running:

kubectl get nodes

Deploying Applications on EKS

To demonstrate the deployment process, we’ll deploy a sample Nginx app on the EKS cluster.

1. Nginx Deployment YAML

Create the following Kubernetes manifest for Nginx:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

2. Expose the Deployment

Create a LoadBalancer service to expose Nginx to the internet:


apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Apply the configurations:

kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml
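Once both manifests are applied, AWS provisions an Elastic Load Balancer for the service; its DNS name appears in the EXTERNAL-IP column after a minute or two:

```shell
# Watch until EXTERNAL-IP changes from <pending> to the ELB DNS name
kubectl get service nginx-service --watch
```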

Scaling the EKS Cluster

To scale the EKS cluster dynamically, use the Kubernetes Cluster Autoscaler, which adjusts node group size based on pending pods. Install it via Helm:
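A common installation looks like the following; the release name is arbitrary, and the autoscaler also needs IAM permissions to modify the node groups' Auto Scaling groups (typically granted via IRSA, which is enabled on this cluster):

```shell
# Add the official Cluster Autoscaler chart repository
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update

# Install Cluster Autoscaler into kube-system, pointing it at the cluster by name
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=my-cluster \
  --set awsRegion=us-west-2
```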
