Create EKS with auto-provisioning nodes effortlessly using Karpenter and Eksctl
With the new Eksctl and Karpenter versions, we can create EKS clusters with node auto-provisioning quickly and effortlessly. Furthermore, we no longer need to create extra AWS resources manually; Eksctl does that for us. This makes it easy to create and replicate clusters while scaling nodes on demand and reducing the data plane cost.
Note: This post is an updated and reduced version of an older blog post. If you want to know more about Karpenter, check https://nahuelhernandez.com/blog/karpenter_kubernetes_node_autoscaling/
Requirements:
- AWS Account
- Eksctl >= 0.99
- AWS cli >= 2.6
- Kubectl >= 1.23
Configuring cluster variables:
> export CLUSTER_NAME=eks-with-karpenter
> export VERSION=1.22
> export REGION=us-east-1
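Since these variables are substituted into the manifests below, it is worth failing fast if one of them is missing. A minimal pre-flight check (this helper loop is my own addition, not part of eksctl):

```shell
# Report any unset variable used by the manifests below.
missing=0
for var in CLUSTER_NAME VERSION REGION; do
  if [ -z "${!var}" ]; then
    echo "Warning: $var is not set"
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "All cluster variables are set"
fi
```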
Creating the cluster using Eksctl:
> cat <<EOF | eksctl create cluster -f -
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: $CLUSTER_NAME
  region: $REGION
  version: "$VERSION"
  tags:
    karpenter.sh/discovery: $CLUSTER_NAME
iam:
  withOIDC: true # required
karpenter:
  version: '0.9.0'
managedNodeGroups:
  - instanceType: t3.small
    name: managed-ng-1
    minSize: 1
    maxSize: 3
    desiredCapacity: 1
EOF
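Since eksctl drives everything through CloudFormation, one way to see what was created on the AWS side is to list the related stacks. These commands need live AWS access, and the exact stack names may differ slightly between eksctl versions:

```shell
# List the clusters eksctl knows about in the region.
eksctl get cluster --region "$REGION"

# Inspect the CloudFormation stacks eksctl created for this cluster
# (control plane, managed node group, and Karpenter IAM resources).
aws cloudformation list-stacks \
  --region "$REGION" \
  --query "StackSummaries[?contains(StackName, '$CLUSTER_NAME')].StackName"
```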
This command will create the EKS cluster with a managed node group and the Karpenter resources on Kubernetes. It takes approximately 20 minutes.
Check the deployed Karpenter resources on Kubernetes:
> kubectl get all -n karpenter
NAME READY STATUS RESTARTS AGE
pod/karpenter-768fc86b78-2zmv6 2/2 Running 0 39s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/karpenter ClusterIP 10.100.204.73 <none> 8080/TCP,443/TCP 39s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/karpenter 1/1 1 1 39s
NAME DESIRED CURRENT READY AGE
replicaset.apps/karpenter-768fc86b78 1 1 1 39s
Check the nodes:
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-14-42.ec2.internal Ready <none> 7m18s v1.22.6-eks-7d68063
Create the Karpenter provisioner:
> cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
  limits:
    resources:
      cpu: 1000
  provider:
    subnetSelector:
      karpenter.sh/discovery: ${CLUSTER_NAME}
    securityGroupSelector:
      karpenter.sh/discovery: ${CLUSTER_NAME}
    instanceProfile: eksctl-KarpenterNodeInstanceProfile-${CLUSTER_NAME}
  ttlSecondsAfterEmpty: 30
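After applying it, the Provisioner can be verified against the live cluster (assuming the current kubectl context points at the new cluster):

```shell
# Confirm the Provisioner object exists and review its spec.
kubectl get provisioner default -o yaml
```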
EOF
Testing Node Autoscaling (optional)
Create a deployment:
> cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 3
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
          resources:
            requests:
              cpu: 1
EOF
Check the pods and the nodes:
> kubectl get pod,nodes
NAME READY STATUS RESTARTS AGE
pod/inflate-6b88c9fb68-kvr78 0/1 Pending 0 17s
pod/inflate-6b88c9fb68-qckpd 0/1 Pending 0 17s
pod/inflate-6b88c9fb68-wk4wk 0/1 Pending 0 17s
NAME STATUS ROLES AGE VERSION
node/ip-192-168-14-42.ec2.internal Ready <none> 12m v1.22.6-eks-7d68063
node/ip-192-168-93-33.ec2.internal Unknown <none> 13s
After a couple of seconds, the new node is “Ready” and the pods are no longer pending; they are running.
> kubectl get pod,nodes
NAME READY STATUS RESTARTS AGE
pod/inflate-6b88c9fb68-kvr78 1/1 Running 0 64s
pod/inflate-6b88c9fb68-qckpd 1/1 Running 0 64s
pod/inflate-6b88c9fb68-wk4wk 1/1 Running 0 64s
NAME STATUS ROLES AGE VERSION
node/ip-192-168-14-42.ec2.internal Ready <none> 12m v1.22.6-eks-7d68063
node/ip-192-168-93-33.ec2.internal Ready <none> 61s v1.22.6-eks-7d68063
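Because the Provisioner sets ttlSecondsAfterEmpty: 30, deleting the test deployment should let Karpenter terminate the extra spot node roughly 30 seconds after it becomes empty. A cleanup sketch, assuming the variables from the beginning are still exported:

```shell
# Remove the test deployment; the spot node it triggered becomes empty
# and Karpenter terminates it after ttlSecondsAfterEmpty (30s).
kubectl delete deployment inflate

# Delete the Provisioner, then the whole cluster.
kubectl delete provisioner default
eksctl delete cluster --name "$CLUSTER_NAME" --region "$REGION"
```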