How to Create and Manage an AWS EKS Cluster with an AWS CI/CD Pipeline
Introduction
In modern software development, automating the release process is crucial for delivering applications efficiently and reliably. AWS CodePipeline, integrated with AWS DevTools, offers a seamless way to manage CI/CD pipelines tailored for Kubernetes workloads running on Amazon EKS (Elastic Kubernetes Service).
This guide walks you through the complete lifecycle of an AWS CodePipeline setup for an EKS cluster: configuring the source stage with CodeCommit, building and testing with CodeBuild, deploying with Kubernetes manifests, and monitoring applications using CloudWatch Container Insights. With a focus on automation, scalability, and security, you'll learn how to streamline the deployment process for containerized applications.
Prior reading:
1. CI-CD on AWS — Part 1: Introduction to CI/CD
2. CI-CD on AWS — Part 2: AWS CodeCommit
3. CI-CD on AWS — Part 3: AWS CodeBuild
4. How to Build a Complete CI/CD Pipeline using AWS DevTools
5. How to Automate Terraform Deployments with AWS CodePipeline
6. How to Build AWS Cross Account CI-CD Deployment using AWS Developer Tools
Stages in Release Process
In the source stage, we check in source code, review new code, and process pull requests. In the build stage, we compile the code, build artifacts such as WAR files, JAR files, container images, or even Kubernetes manifest files, and run unit tests. In the test phase, we perform integration testing with other systems, along with load testing, UI testing, and security testing, across environments such as dev, QA, and staging. Finally, in the production phase, we deploy to production environments and monitor the running code to detect errors quickly.
AWS DevTools for Pipeline
AWS offers CodePipeline, which integrates with the other AWS developer tools. For source control it uses AWS CodeCommit, a managed Git repository service similar to GitHub or Bitbucket. For builds, where you would typically see Jenkins, AWS provides CodeBuild, which builds our artifacts and pushes them to S3 buckets. For testing, we can use CodeBuild together with third-party tools. For the deployment stage in our use case, we deploy to Kubernetes using CodeBuild in combination with kubectl. The final stage is monitoring: with CloudWatch Container Insights we get end-to-end monitoring of the containerized applications deployed on the Kubernetes cluster as well as the cluster components themselves.
Pre-requisite check
- We are going to deploy an application that will also have an ALB Ingress Service and will register its DNS name in Route53 using External DNS.
- This means both related pods (the ALB ingress controller and external-dns) should already be running in our cluster.
# Verify alb-ingress-controller pod running in namespace kube-system
kubectl get pods -n kube-system
# Verify external-dns pod running in default namespace
kubectl get pods
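The exact pod names depend on how the two controllers were installed; as a rough check (the grep patterns below are assumptions, adjust them to match your actual pod names), something like this should return a running pod for each:
# Filter for the ALB ingress / load balancer controller pod
kubectl get pods -n kube-system | grep -iE 'alb-ingress|aws-load-balancer'
# Filter for the external-dns pod
kubectl get pods | grep external-dns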
Create an ECR Repository for our Application Docker Images
Steps:
- Go to Services -> Elastic Container Registry -> Create Repository
- Name: eks-devops
- Tag Immutability: Enable
- Scan On Push: Enable
- Click on Create Repository
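The console steps above can also be done with a single AWS CLI call; a minimal sketch assuming the same repository name and settings:
aws ecr create-repository \
  --repository-name eks-devops \
  --image-tag-mutability IMMUTABLE \
  --image-scanning-configuration scanOnPush=true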
Create a CodeCommit repository and Generate Git Credentials
- Create git credentials from IAM Service (HTTPS Credentials)
- Clone the git repository from CodeCommit to your local machine; when prompted, provide the Git credentials you generated to log in to the repo
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/eks-devops
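If you prefer the CLI over the console, the CodeCommit repository itself can be created with a command like this (repository description is an example):
aws codecommit create-repository \
  --repository-name eks-devops \
  --repository-description "EKS DevOps demo application"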
You need to add the files below to the newly created git repo.
buildspec.yml
version: 0.2
phases:
  install:
    commands:
      - echo "Install Phase - Nothing to do using latest Amazon Linux Docker Image for CodeBuild which has all AWS Tools - https://github.com/aws/aws-codebuild-docker-images/blob/master/al2/x86_64/standard/3.0/Dockerfile"
  pre_build:
    commands:
      # Docker Image Tag with Date Time & CodeBuild Resolved Source Version
      - TAG="$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      # Update Image tag in our Kubernetes Deployment Manifest
      - echo "Update Image tag in kube-manifest..."
      - sed -i 's@CONTAINER_IMAGE@'"$REPOSITORY_URI:$TAG"'@' kube-manifests/01-DEVOPS-Nginx-Deployment.yml
      # Verify AWS CLI Version
      - echo "Verify AWS CLI Version..."
      - aws --version
      # Login to ECR Registry for docker to push the image to ECR Repository
      - echo "Login in to Amazon ECR..."
      - $(aws ecr get-login --no-include-email)
      # Update Kube config Home Directory
      - export KUBECONFIG=$HOME/.kube/config
  build:
    commands:
      # Build Docker Image
      - echo "Build started on `date`"
      - echo "Building the Docker image..."
      - docker build --tag $REPOSITORY_URI:$TAG .
  post_build:
    commands:
      # Push Docker Image to ECR Repository
      - echo "Build completed on `date`"
      - echo "Pushing the Docker image to ECR Repository"
      - docker push $REPOSITORY_URI:$TAG
      - echo "Docker Image Push to ECR Completed - $REPOSITORY_URI:$TAG"
      # Extracting AWS Credential Information using STS Assume Role for kubectl
      - echo "Setting Environment Variables related to AWS CLI for Kube Config Setup"
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      # Setup kubectl with our EKS Cluster
      - echo "Update Kube Config"
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      # Apply changes to our Application using kubectl
      - echo "Apply changes to kube manifests"
      - kubectl apply -f kube-manifests/
      - echo "Completed applying changes to Kubernetes Objects"
      # Create Artifacts which we can use if we want to continue our pipeline for other stages
      - printf '[{"name":"01-DEVOPS-Nginx-Deployment.yml","imageUri":"%s"}]' $REPOSITORY_URI:$TAG > build.json
      # Additional commands to view your credentials (note: these print temporary credentials to the CodeBuild logs)
      - echo "Credentials Value is.. ${CREDENTIALS}"
      - echo "AWS_ACCESS_KEY_ID... ${AWS_ACCESS_KEY_ID}"
      - echo "AWS_SECRET_ACCESS_KEY... ${AWS_SECRET_ACCESS_KEY}"
      - echo "AWS_SESSION_TOKEN... ${AWS_SESSION_TOKEN}"
      - echo "AWS_EXPIRATION... $AWS_EXPIRATION"
      - echo "EKS_CLUSTER_NAME... $EKS_CLUSTER_NAME"
artifacts:
  files:
    - build.json
    - kube-manifests/*
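The pre_build sed command is what wires the freshly built image into the deployment manifest: it replaces the CONTAINER_IMAGE placeholder with the ECR URI and tag. You can dry-run the substitution locally with made-up values (the URI and tag below are placeholders, not real):
# Hypothetical values purely for a local dry run
REPOSITORY_URI=123456789012.dkr.ecr.us-east-1.amazonaws.com/eks-devops
TAG=2024-01-01.10.30.00.abcd1234
# Print only the image line to confirm the placeholder gets replaced
sed 's@CONTAINER_IMAGE@'"$REPOSITORY_URI:$TAG"'@' kube-manifests/01-DEVOPS-Nginx-Deployment.yml | grep "image:"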
index.html (placed inside the app1/ folder that the Dockerfile copies)
<!DOCTYPE html>
<html>
  <body style="background-color:rgb(210, 250, 220);">
    <h1>Welcome to AWS DevTools with Achintha Bandaranaike - V1</h1>
    <p>Application Name: App1</p>
  </body>
</html>
Dockerfile:
FROM nginx
COPY app1 /usr/share/nginx/html/app1
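Before wiring this into the pipeline, you can sanity-check the image locally; a quick sketch (the local tag, container name, and host port are arbitrary examples):
# Build and run the image locally, then hit the app1 page
docker build -t eks-devops:local .
docker run -d -p 8080:80 --name eks-devops-test eks-devops:local
curl http://localhost:8080/app1/index.html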
kube-manifest files:
nginx-deployment.yml (saved in the repo as kube-manifests/01-DEVOPS-Nginx-Deployment.yml, the filename that the sed command in buildspec.yml expects)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-devops-deployment
  labels:
    app: eks-devops
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eks-devops
  template:
    metadata:
      labels:
        app: eks-devops
    spec:
      containers:
        - name: eks-devops
          image: CONTAINER_IMAGE
          ports:
            - containerPort: 80
nginx-nodeportservice.yml
apiVersion: v1
kind: Service
metadata:
  name: eks-devops-nodeport-service
  labels:
    app: eks-devops
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html
spec:
  type: NodePort
  selector:
    app: eks-devops
  ports:
    - port: 80
      targetPort: 80
nginx-alb-ingressservice.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: eks-devops-ingress-service
  labels:
    app: eks-devops
  annotations:
    # Ingress Core Settings
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    ## SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/certificate-arn: <ARN>
    #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 # Optional (picks default if not used)
    # SSL Redirect Setting
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # External DNS - For creating a Record Set in Route53
    external-dns.alpha.kubernetes.io/hostname: cloudylk.com
spec:
  ingressClassName: my-aws-ingress-class # Ingress Class
  rules:
    - http:
        paths:
          - path: /* # SSL Redirect Setting
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: eks-devops-nodeport-service
              servicePort: 80
File Hierarchy:
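A rough sketch of the repository layout implied by the files above (you can name the service and ingress manifests however you like; only the deployment manifest filename is hard-coded in buildspec.yml):
eks-devops/
├── buildspec.yml
├── Dockerfile
├── app1/
│   └── index.html
└── kube-manifests/
    ├── 01-DEVOPS-Nginx-Deployment.yml
    ├── nginx-nodeportservice.yml
    └── nginx-alb-ingressservice.yml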
Create STS Assume Role for CodeBuild to Interact with EKS Cluster
CodeBuild is the service that builds our Docker image and also performs the deployment to our EKS cluster, so it needs some additional permissions.
In AWS CodePipeline we use CodeBuild to apply changes to our Kubernetes manifests, which requires an IAM role that is allowed to interact with the EKS cluster.
In this step, we create an IAM role with an inline eks:Describe* policy; CodeBuild will assume this role in its build stage to interact with the EKS cluster via kubectl.
# Export your Account ID
export ACCOUNT_ID=<AWS_AccountID>
# Set Trust Policy
TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" }, \"Action\": \"sts:AssumeRole\" } ] }"
# Verify that your account id got replaced inside the Trust policy
echo $TRUST
# Create IAM Role for CodeBuild to Interact with EKS
aws iam create-role --role-name EksCodeBuildKubectlRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn'
# Define Inline Policy with eks Describe permission in a file iam-eks-describe-policy
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "eks:Describe*", "Resource": "*" } ] }' > /tmp/iam-eks-describe-policy
# Associate Inline Policy to our newly created IAM Role
aws iam put-role-policy --role-name EksCodeBuildKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-eks-describe-policy
Verify in the AWS Console.
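You can also verify from the CLI (same role and policy names as above):
# Confirm the role exists and show its ARN and trust policy
aws iam get-role --role-name EksCodeBuildKubectlRole --query 'Role.{Arn:Arn,Trust:AssumeRolePolicyDocument}'
# Confirm the inline policy is attached
aws iam list-role-policies --role-name EksCodeBuildKubectlRole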
Update EKS Cluster aws-auth ConfigMap with the new role created in the previous step
The EKS cluster has a ConfigMap named aws-auth (in the kube-system namespace) that maps IAM roles to Kubernetes users and groups. We need to add the role we just created to it so that CodeBuild's kubectl calls are authorized inside the cluster.
Verify what is present in the aws-auth config map before the change.
kubectl get configmap aws-auth -o yaml -n kube-system
Set role value
ROLE=" - rolearn: arn:aws:iam::$ACCOUNT_ID:role/EksCodeBuildKubectlRole\n username: build\n groups:\n - system:masters"
Get current aws-auth configMap data and attach new role info to it
kubectl get -n kube-system configmap/aws-auth -o yaml | awk "/mapRoles: \|/{print;print \"$ROLE\";next}1" > /tmp/aws-auth-patch.yml
Patch the aws-auth config map with a new role
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"
Verify what is updated in the aws-auth config map after the change
kubectl get configmap aws-auth -o yaml -n kube-system
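After the patch, the data.mapRoles section of aws-auth should contain an entry like this (excerpt only, account ID shown as a placeholder):
mapRoles: |
  - rolearn: arn:aws:iam::<ACCOUNT_ID>:role/EksCodeBuildKubectlRole
    username: build
    groups:
      - system:masters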
Create Pipeline:
Steps:
- Go to Services -> CodePipeline -> Create Pipeline
- The source provider is CodeCommit
- In the Build stage, select CodeBuild
- Create new project
Put the below settings in the Environment section
Environment Image: Managed Image
Operating System: Amazon Linux 2
Runtimes: Standard
Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0
Image Version: Always use the latest version for this runtime
Environment Type: Linux
Privileged: Enable
Role Name: keep the service role AWS generates by default
Additional Configurations: Keep Default
Add Environment Variables
REPOSITORY_URI = <Your Repository URI>
EKS_KUBECTL_ROLE_ARN = arn:aws:iam::<account_ID>:role/EksCodeBuildKubectlRole
EKS_CLUSTER_NAME = <eks cluster name>
Buildspec: select "Use a buildspec file"
Logs:
- Group Name: eks-devops-cicd-logs
Click on Continue to CodePipeline
Click on Skip Deploy Stage
Review and click on Create Pipeline
ECR Access for the CodeBuild IAM Role:
The CodeBuild IAM role does not have access to ECR by default.
Go to the CodeBuild IAM role and attach the IAM policy below.
Policy Name: AmazonEC2ContainerRegistryFullAccess
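The same attachment can be done from the CLI; a sketch, where <codebuild-service-role-name> is a placeholder for the service role created for your build project:
aws iam attach-role-policy \
  --role-name <codebuild-service-role-name> \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess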
Update the CodeBuild Role to Allow Assuming the STS Role We Created
- At this point the build will fail because CodeBuild does not have permission to perform updates on the EKS cluster.
- It cannot even assume the IAM role (EksCodeBuildKubectlRole) we created earlier.
- To fix this, create an STS assume-role policy and associate it with the CodeBuild role.
Create STS Assume Role Policy
- Go to Services IAM -> Policies -> Create Policy
- In the Visual Editor Tab
- Service: STS
- Actions: Under Write, select AssumeRole
- Resources: Specific
- Add ARN
- Specify ARN for Role: arn:aws:iam::<Account_ID>:role/EksCodeBuildKubectlRole
- Click Add
- Click on Review Policy
- Name: eks-codebuild-sts-assume-role
- Description: CodeBuild to interact with the EKS cluster to perform changes
- Click on Create Policy
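The console steps above produce a policy document like the following, which can also be created from the CLI (keep <Account_ID> as a placeholder for your own account):
# Policy document allowing CodeBuild to assume the kubectl role
cat > /tmp/eks-codebuild-sts-assume-role.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<Account_ID>:role/EksCodeBuildKubectlRole"
    }
  ]
}
EOF
aws iam create-policy \
  --policy-name eks-codebuild-sts-assume-role \
  --policy-document file:///tmp/eks-codebuild-sts-assume-role.json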
Associate Policy to CodeBuild Role
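In the IAM console, open the CodeBuild service role and attach the eks-codebuild-sts-assume-role policy to it. A CLI sketch of the same step (role name and account ID are placeholders as before):
aws iam attach-role-policy \
  --role-name <codebuild-service-role-name> \
  --policy-arn arn:aws:iam::<Account_ID>:policy/eks-codebuild-sts-assume-role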
Now Trigger the Pipeline:
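The pipeline triggers automatically when a commit lands on the tracked CodeCommit branch; you can also start an execution manually. A sketch (the branch and pipeline names below are examples, use whatever you configured):
# Push the application, Dockerfile, buildspec and manifests to CodeCommit to trigger the pipeline
git add .
git commit -m "Add app1, Dockerfile, buildspec and kube-manifests"
git push origin master

# Or start a run manually
aws codepipeline start-pipeline-execution --name eks-devops-pipeline

# After a successful run, check the rollout in the cluster
kubectl get deployments,pods,ingress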
Conclusion
Managing an AWS EKS cluster using AWS CodePipeline simplifies and streamlines the CI/CD process for Kubernetes workloads. By leveraging AWS DevTools like CodeCommit, CodeBuild, and CloudWatch, you can achieve an automated, reliable, and scalable pipeline for deploying containerized applications. This approach ensures continuous integration, thorough testing, and efficient deployment, reducing the time and effort required for manual interventions. With proper configuration and monitoring, you can maintain secure, high-performing applications in production environments.
Thanks for reading! See you in the next article. Don't forget to follow me on Medium, leave a 👏, and stay connected on LinkedIn:
https://www.linkedin.com/in/achintha-bandaranaike-676a82163/