Nahuel Hernandez


Creating a K8S Cluster The AWS Way

I have created a lot of Kubernetes clusters using Terraform and Rancher. However, I prefer to use eksctl to create and manage K8s clusters on AWS. eksctl is the official CLI for Amazon EKS and simplifies many things: it makes it pretty easy to upgrade clusters, integrate other features such as the Cluster Autoscaler, or configure secrets encryption with KMS.


EKSCTL

eksctl is written in Go and uses CloudFormation under the hood. It also lets us describe the cluster in a manifest, which makes it easy to replicate and lets us add it to our codebase just like almost any other IaC tool.

In my opinion, eksctl is the easiest and best way to create EKS clusters.

Main Features:

  • Create, get, list and delete clusters
  • Create, drain and delete nodegroups
  • Scale a nodegroup
  • Update a cluster
  • Configure VPC Networking
  • Configure access to API endpoints
  • Spot instances and mixed instances
  • IAM Management and Add-on Policies
  • Write kubeconfig file for a cluster
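Most of these features map to one-line commands. A few illustrative examples (the cluster and nodegroup names here are placeholders):

> eksctl get cluster                                    # list clusters in the current region
> eksctl scale nodegroup --cluster=my-cluster --name=ng-1 --nodes=4
> eksctl utils write-kubeconfig --cluster=my-cluster    # (re)write the kubeconfig entry for a cluster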

In this tutorial, you will learn how to:

  • Install eksctl
  • Create a custom EKS cluster
  • Deploy a test application
  • Upgrade the K8s cluster
  • Configure a KMS key for envelope encryption of secrets
  • Delete the cluster

Prerequisites:

  • Kubectl
  • AWS CLI
  • AWS Account
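A quick, optional sanity check that the prerequisites above are in place (this assumes the AWS CLI is already configured with credentials):

> kubectl version --client
> aws --version
> aws sts get-caller-identity   # shows which AWS account/role will be used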

Installing eksctl

First, we need to install eksctl; it's pretty easy.

> curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
> sudo mv /tmp/eksctl /usr/local/bin
> eksctl version
0.76.0
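Optionally, eksctl can generate shell completion, which helps with its long subcommand names; for the current bash session, something like:

> . <(eksctl completion bash)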

Creating a custom EKS Cluster

I will create an EKS cluster with EC2 worker nodes in a self-managed nodegroup (the manifest below uses nodeGroups; eksctl also supports managedNodeGroups and Fargate/serverless). Either way, after the cluster is deployed we can add other node types. As I said, I will use a manifest to create the cluster: cluster-sandbox.yaml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-sandbox
  region: us-east-1
  version: "1.20"

nodeGroups:
  - name: test-sandbox-1
    instanceType: t3.medium
    desiredCapacity: 3

Now we can create the cluster.

> eksctl create cluster -f cluster-sandbox.yaml

Note: Launching EKS and all the dependencies will take approximately 15 minutes
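While it runs, we can follow the progress in the CloudFormation console, or ask eksctl to describe the stacks it manages (an optional check; the stack names follow the eksctl-<cluster>-* pattern):

> eksctl utils describe-stacks --region us-east-1 --cluster test-sandbox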

When the task finishes, we can test the cluster:

> kubectl get nodes
NAME                             STATUS   ROLES    AGE     VERSION
ip-192-168-20-116.ec2.internal   Ready    <none>   3m57s   v1.20.11-eks-f17b81
ip-192-168-20-170.ec2.internal   Ready    <none>   3m59s   v1.20.11-eks-f17b81
ip-192-168-41-91.ec2.internal    Ready    <none>   3m58s   v1.20.11-eks-f17b81

> kubectl get namespaces
NAME              STATUS   AGE
default           Active   16m
kube-node-lease   Active   16m
kube-public       Active   16m
kube-system       Active   16m

Deploy a testing application

To test the cluster, we can deploy an application and try to access it:

> kubectl create deploy nginx --image nginx
> kubectl expose deployment nginx --port 80 --type=LoadBalancer
> kubectl get svc nginx
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
nginx   LoadBalancer   10.100.245.18   a9cf846c31d6f4f13bc8a4e4bbfc21bb-2049526190.us-east-1.elb.amazonaws.com   80:30358/TCP   23s

The EXTERNAL-IP value is the public ELB; now we can test it:

> curl a9cf846c31d6f4f13bc8a4e4bbfc21bb-2049526190.us-east-1.elb.amazonaws.com  
<!DOCTYPE html>                                                                         
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

It shows the Nginx default welcome page, so everything is OK.
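Since the LoadBalancer service keeps an ELB running (and billing), we may want to remove the test app once the check is done:

> kubectl delete service nginx
> kubectl delete deployment nginx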

Upgrade the K8S Cluster

I created the cluster with version 1.20 on purpose, so that we can now upgrade it. Let's upgrade to version 1.21. The upgrade is done in three steps:

  • Upgrading control plane
  • Replacing each of the nodegroups by creating a new one and deleting the old one
  • Updating default add-ons (kube-proxy, aws-node, coredns)

Upgrading EKS Control Plane

First we need to change metadata.version from "1.20" to "1.21" in our cluster-sandbox.yaml file. After that we can execute:

> eksctl upgrade cluster -f cluster-sandbox.yaml
2022-01-11 03:38:52 [ℹ]  eksctl version 0.76.0
2022-01-11 03:38:52 [ℹ]  using region us-east-1
2022-01-11 03:38:52 [!]  NOTE: cluster VPC (subnets, routing & NAT Gateway) configuration changes are not yet implemented
2022-01-11 03:38:55 [ℹ]  (plan) would upgrade cluster "test-sandbox" control plane from current version "1.20" to "1.21"
2022-01-11 03:38:56 [ℹ]  re-building cluster stack "eksctl-test-sandbox-cluster"
2022-01-11 03:38:56 [✔]  all resources in cluster stack "eksctl-test-sandbox-cluster" are up-to-date
2022-01-11 03:38:57 [ℹ]  checking security group configuration for all nodegroups
2022-01-11 03:38:57 [ℹ]  all nodegroups have up-to-date cloudformation templates
2022-01-11 04:17:03 [✔]  cluster "test-sandbox" control plane has been upgraded to version "1.21"
2022-01-11 04:17:03 [ℹ]  you will need to follow the upgrade procedure for all of nodegroups and add-ons
2022-01-11 04:17:05 [ℹ]  re-building cluster stack "eksctl-test-sandbox-cluster"
2022-01-11 04:17:05 [✔]  all resources in cluster stack "eksctl-test-sandbox-cluster" are up-to-date
2022-01-11 04:17:06 [ℹ]  checking security group configuration for all nodegroups
2022-01-11 04:17:06 [ℹ]  all nodegroups have up-to-date cloudformation templates

Note: The only values allowed for the --version and metadata.version arguments are the current version of the cluster or one version higher.
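For reference, the same control-plane upgrade can also be driven without a config file; a sketch of the flag-based form (the same version constraints apply):

> eksctl upgrade cluster --name test-sandbox --version 1.21 --approve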

Check the control plane version

> eksctl get cluster test-sandbox                                                                                                                                   
2022-01-11 04:32:30 [ℹ]  eksctl version 0.76.0
2022-01-11 04:32:30 [ℹ]  using region us-east-1
NAME		VERSION	STATUS	CREATED			VPC			SUBNETS													SECURITYGROUPS		PROVIDER
test-sandbox	1.21	ACTIVE	2022-01-11T05:54:33Z	vpc-0c578f91bdd2e36f7	subnet-06c39ff554f66b92c,subnet-097097f76aa1eee3c,subnet-0be0d7d07d0948b1b,subnet-0be774ba91b34061d	sg-073219af01132957d	EKS

Now we have the control plane on version 1.21.
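We can also cross-check the version from the AWS side (assuming a default region is configured or passed with --region):

> aws eks describe-cluster --name test-sandbox --query "cluster.version" --output text
1.21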

Replacing each of the nodegroups by creating a new one and deleting the old one

We can create a new nodegroup by editing cluster-sandbox.yaml and changing the nodegroup name. Later on we may also want minSize and maxSize values for the Cluster Autoscaler; see the sketch after the manifest.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-sandbox
  region: us-east-1
  version: "1.21"

nodeGroups:
  - name: test-sandbox-2
    instanceType: t3.medium
    desiredCapacity: 3
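As an aside, if we were preparing for the Cluster Autoscaler, the nodegroup entry could also carry scaling bounds and the autoscaler add-on policy. A rough sketch (these extra fields are for illustration only and are not used in the rest of this walkthrough):

nodeGroups:
  - name: test-sandbox-2
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 3           # lower bound for the autoscaler
    maxSize: 6           # upper bound for the autoscaler
    iam:
      withAddonPolicies:
        autoScaler: true   # attach the IAM policy the Cluster Autoscaler needs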

And now we can create the new nodegroup with version 1.21.

> eksctl create nodegroup -f cluster-sandbox.yaml                                       
2022-01-11 04:46:50 [ℹ]  eksctl version 0.76.0
2022-01-11 04:46:50 [ℹ]  using region us-east-1
2022-01-11 04:46:56 [ℹ]  nodegroup "test-sandbox-2" will use "ami-019904275ee6b71a3" [AmazonLinux2/1.21]
2022-01-11 04:46:58 [ℹ]  1 existing nodegroup(s) (test-sandbox-1) will be excluded
2022-01-11 04:46:58 [ℹ]  1 nodegroup (test-sandbox-2) was included (based on the include/exclude rules)
2022-01-11 04:46:58 [ℹ]  will create a CloudFormation stack for each of 1 nodegroups in cluster "test-sandbox"
2022-01-11 04:46:59 [ℹ]  2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create nodegroup "test-sandbox-2" } } }
2022-01-11 04:46:59 [ℹ]  checking cluster stack for missing resources
2022-01-11 04:47:00 [ℹ]  cluster stack has all required resources
2022-01-11 04:47:00 [ℹ]  building nodegroup stack "eksctl-test-sandbox-nodegroup-test-sandbox-2"
2022-01-11 04:47:00 [ℹ]  deploying stack "eksctl-test-sandbox-nodegroup-test-sandbox-2"
2022-01-11 04:47:00 [ℹ]  waiting for CloudFormation stack "eksctl-test-sandbox-nodegroup-test-sandbox-2"
2022-01-11 04:50:59 [ℹ]  no tasks
2022-01-11 04:50:59 [ℹ]  adding identity "arn:aws:iam::452699334572:role/eksctl-test-sandbox-nodegroup-test-sa-NodeInstanceRole-1ST0X462TQB83" to auth ConfigMap
2022-01-11 04:51:00 [ℹ]  nodegroup "test-sandbox-2" has 0 node(s)
2022-01-11 04:51:00 [ℹ]  waiting for at least 3 node(s) to become ready in "test-sandbox-2"
2022-01-11 04:51:42 [ℹ]  nodegroup "test-sandbox-2" has 3 node(s)
2022-01-11 04:51:42 [ℹ]  node "ip-192-168-15-18.ec2.internal" is ready
2022-01-11 04:51:42 [ℹ]  node "ip-192-168-57-175.ec2.internal" is ready
2022-01-11 04:51:42 [ℹ]  node "ip-192-168-7-160.ec2.internal" is ready
2022-01-11 04:51:42 [✔]  created 1 nodegroup(s) in cluster "test-sandbox"
2022-01-11 04:51:42 [✔]  created 0 managed nodegroup(s) in cluster "test-sandbox"
2022-01-11 04:51:45 [ℹ]  checking security group configuration for all nodegroups
2022-01-11 04:51:45 [ℹ]  all nodegroups have up-to-date cloudformation templates

We can check the nodes version:

> kubectl get nodes                                                                                                                                               
NAME                             STATUS   ROLES    AGE    VERSION
ip-192-168-15-18.ec2.internal    Ready    <none>   107s   v1.21.5-eks-bc4871b
ip-192-168-20-116.ec2.internal   Ready    <none>   97m    v1.20.11-eks-f17b81
ip-192-168-20-170.ec2.internal   Ready    <none>   97m    v1.20.11-eks-f17b81
ip-192-168-41-91.ec2.internal    Ready    <none>   97m    v1.20.11-eks-f17b81
ip-192-168-57-175.ec2.internal   Ready    <none>   107s   v1.21.5-eks-bc4871b
ip-192-168-7-160.ec2.internal    Ready    <none>   111s   v1.21.5-eks-bc4871b

We now have 3 nodes with v1.21 and 3 nodes with v1.20, so we need to clean up the old ones. Once the new nodegroup is in place, we can delete the old one:

> eksctl delete nodegroup -f cluster-sandbox.yaml --only-missing --approve                                               
2022-01-11 04:56:06 [ℹ]  eksctl version 0.76.0
2022-01-11 04:56:06 [ℹ]  using region us-east-1
2022-01-11 04:56:06 [ℹ]  comparing 1 nodegroups defined in the given config ("cluster-sandbox.yaml") against remote state
2022-01-11 04:56:08 [ℹ]  1 nodegroup(s) present in the config file (test-sandbox-2) will be excluded
2022-01-11 04:56:08 [ℹ]  1 nodegroup (test-sandbox-1) was included (based on the include/exclude rules)
2022-01-11 04:56:09 [ℹ]  will drain 1 nodegroup(s) in cluster "test-sandbox"
2022-01-11 04:56:10 [ℹ]  cordon node "ip-192-168-20-116.ec2.internal"
2022-01-11 04:56:11 [ℹ]  cordon node "ip-192-168-20-170.ec2.internal"
2022-01-11 04:56:11 [ℹ]  cordon node "ip-192-168-41-91.ec2.internal"
2022-01-11 04:56:11 [!]  ignoring DaemonSet-managed Pods: kube-system/aws-node-s224p, kube-system/kube-proxy-rv7zt
2022-01-11 04:56:31 [✔]  drained all nodes: [ip-192-168-20-116.ec2.internal ip-192-168-20-170.ec2.internal ip-192-168-41-91.ec2.internal]
2022-01-11 04:56:31 [ℹ]  will delete 1 nodegroups from cluster "test-sandbox"
2022-01-11 04:56:32 [ℹ]  1 task: { 1 task: { delete nodegroup "test-sandbox-1" [async] } }
2022-01-11 04:56:32 [ℹ]  will delete stack "eksctl-test-sandbox-nodegroup-test-sandbox-1"
2022-01-11 04:56:32 [ℹ]  will delete 1 nodegroups from auth ConfigMap in cluster "test-sandbox"
2022-01-11 04:56:33 [ℹ]  removing identity "arn:aws:iam::452699334572:role/eksctl-test-sandbox-nodegroup-test-sa-NodeInstanceRole-7192J2N06D4K" from auth ConfigMap (username = "system:node:{{EC2PrivateDNSName}}", groups = ["system:bootstrappers" "system:nodes"])
2022-01-11 04:56:33 [✔]  deleted 1 nodegroup(s) from cluster "test-sandbox"

Now we can check the nodes again; only the 3 nodes with v1.21 remain:

> kubectl get nodes
NAME                             STATUS   ROLES    AGE     VERSION
ip-192-168-15-18.ec2.internal    Ready    <none>   6m48s   v1.21.5-eks-bc4871b
ip-192-168-57-175.ec2.internal   Ready    <none>   6m48s   v1.21.5-eks-bc4871b
ip-192-168-7-160.ec2.internal    Ready    <none>   6m52s   v1.21.5-eks-bc4871b

Updating default add-ons

There are 3 default add-ons included in every EKS cluster. The process for updating each of them is different, so there are 3 distinct commands we need to run:

> eksctl utils update-kube-proxy --config-file cluster-sandbox.yaml --approve
> eksctl utils update-aws-node --config-file cluster-sandbox.yaml --approve
> eksctl utils update-coredns --config-file cluster-sandbox.yaml --approve
2022-01-11 05:03:10 [ℹ]  eksctl version 0.76.0
2022-01-11 05:03:10 [ℹ]  using region us-east-1
2022-01-11 05:03:12 [ℹ]  "kube-proxy" is now up-to-date
2022-01-11 05:03:14 [ℹ]  eksctl version 0.76.0
2022-01-11 05:03:14 [ℹ]  using region us-east-1
2022-01-11 05:03:15 [ℹ]  skipped existing "kube-system:ServiceAccount/aws-node"
2022-01-11 05:03:16 [ℹ]  replaced "ClusterRoleBinding.rbac.authorization.k8s.io/aws-node"
2022-01-11 05:03:16 [ℹ]  replaced "ClusterRole.rbac.authorization.k8s.io/aws-node"
2022-01-11 05:03:17 [ℹ]  replaced "CustomResourceDefinition.apiextensions.k8s.io/eniconfigs.crd.k8s.amazonaws.com"
2022-01-11 05:03:18 [ℹ]  replaced "kube-system:DaemonSet.apps/aws-node"
2022-01-11 05:03:18 [ℹ]  "aws-node" is now up-to-date
2022-01-11 05:03:19 [ℹ]  eksctl version 0.76.0
2022-01-11 05:03:19 [ℹ]  using region us-east-1
2022-01-11 05:03:21 [ℹ]  replaced "kube-system:Service/kube-dns"
2022-01-11 05:03:22 [ℹ]  replaced "kube-system:ServiceAccount/coredns"
2022-01-11 05:03:23 [ℹ]  replaced "kube-system:ConfigMap/coredns"
2022-01-11 05:03:23 [ℹ]  replaced "kube-system:Deployment.apps/coredns"
2022-01-11 05:03:24 [ℹ]  replaced "ClusterRole.rbac.authorization.k8s.io/system:coredns"
2022-01-11 05:03:24 [ℹ]  replaced "ClusterRoleBinding.rbac.authorization.k8s.io/system:coredns"
2022-01-11 05:03:24 [ℹ]  "coredns" is now up-to-date

Check the add-ons

> kubectl get pod -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-8k49f             1/1     Running   0          2m
aws-node-9tcmb             1/1     Running   0          1m
aws-node-z5x2m             1/1     Running   0          1m
coredns-69f6f95558-dh9xd   1/1     Running   0          2m
coredns-69f6f95558-wb79g   1/1     Running   0          2m
kube-proxy-47f2n           1/1     Running   0          2m
kube-proxy-q9t9p           1/1     Running   0          2m
kube-proxy-wb42l           1/1     Running   0          2m
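If we want to confirm the add-on versions themselves (not just that the pods are healthy), we can inspect the images they run; for example, for kube-proxy:

> kubectl get daemonset kube-proxy -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'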

And with that, we have finished the upgrade from v1.20 to v1.21 using eksctl.

Configuring KMS key for envelope Secrets encryption

EKS supports using AWS KMS keys to provide envelope encryption of Kubernetes secrets stored in EKS. Implementing envelope encryption is considered a security best practice for applications that store sensitive data and is part of a defense in depth security strategy. Once KMS encryption is enabled, it cannot be disabled or updated to use a different KMS key.

First we need to create a KMS key

Creating one with the AWS CLI is pretty easy:

> aws kms create-key --tags TagKey=Environment,TagValue=Sandbox --description "EKS Secrets key"
{
    "KeyMetadata": {
        "AWSAccountId": "452699334572",
        "KeyId": "9679fc2f-fe23-4143-91f9-6b50dc7e0ec4",
        "Arn": "arn:aws:kms:us-east-1:452699334572:key/9679fc2f-fe23-4143-91f9-6b50dc7e0ec4",
        "CreationDate": 1641880101.527,
        "Enabled": true,
        "Description": "EKS Secrets key",
        "KeyUsage": "ENCRYPT_DECRYPT",
        "KeyState": "Enabled",
        "Origin": "AWS_KMS",
        "KeyManager": "CUSTOMER",
        "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
        "KeySpec": "SYMMETRIC_DEFAULT",
        "EncryptionAlgorithms": [
            "SYMMETRIC_DEFAULT"
        ],
        "MultiRegion": false
    }
}

Now we can edit our config file and add the secretsEncryption parameter:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-sandbox
  region: us-east-1
  version: "1.21"

nodeGroups:
  - name: test-sandbox-2
    instanceType: t3.medium
    desiredCapacity: 3

secretsEncryption:
  keyARN: arn:aws:kms:us-east-1:452699334572:key/9679fc2f-fe23-4143-91f9-6b50dc7e0ec4

To enable KMS encryption on a cluster that doesn't already have it enabled, run:

> eksctl utils enable-secrets-encryption -f cluster-sandbox.yaml

Note: In addition to enabling KMS encryption on the EKS cluster, eksctl also re-encrypts all existing Kubernetes secrets using the new KMS key by updating them with the annotation eksctl.io/kms-encryption-timestamp
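We can verify from the AWS side that encryption is now configured on the cluster:

> aws eks describe-cluster --name test-sandbox --query "cluster.encryptionConfig"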

Now we can create a secret and try to decode it.
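The encryption happens transparently at the etcd layer, so nothing changes from the kubectl side. A quick check (the secret name and value below are just examples):

> kubectl create secret generic test-secret --from-literal=password=S3cr3t
> kubectl get secret test-secret -o jsonpath='{.data.password}' | base64 --decode

The second command should print the original value back, exactly as it would on a cluster without KMS encryption.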

Delete the Cluster

Finally, we can clean up by deleting the cluster:

> eksctl delete cluster --name=test-sandbox
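Since everything else in this post was driven from the manifest, the config-file form is equivalent here (assuming the file still describes the cluster):

> eksctl delete cluster -f cluster-sandbox.yaml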

