Creating a Kubernetes Cluster on AWS EC2 Instances Using Ansible Roles

Ishika Mandloi
5 min read · Mar 27, 2021


Task Description:

Ansible Role to Configure K8S MultiNode Cluster over AWS Cloud.

Before we start the actual task…

  • We need a key pair, already present on the Ansible controller node, to launch the EC2 instances with Ansible.
  • We need an IAM user so that the controller node can authenticate to AWS.

Steps to complete the task:

1. Create Ansible Playbook to launch 3 AWS EC2 instances.

  • Here I have used a dynamic inventory to add the EC2 instances as hosts for the later steps of creating the Kubernetes cluster.
  • Download the dynamic inventory files using:

“wget https://raw.githubusercontent.com/vimallinuxworld13/ansible_dynamic_inventory/master/hosts.py”

  • Move these files to a folder, for example “/etc/mydinv”.
  • Make the files executable using:

“chmod +x ec2.py” and “chmod +x ec2.ini”

  • Now export your IAM credentials on the controller node.
  • In the Ansible config file, set the path of the dynamic inventory, set ec2-user as the remote user, add the private key path, and enable privilege escalation (a sketch of both follows below).
  • Now we need to create a role for the EC2 instances.
  • Create a separate workspace for the whole task.

Here, I have created “kubernetescluster”.
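For reference, a minimal sketch of the credential export and the config file might look like this (the inventory path matches the folder above; the key path and placeholder keys are my own illustrative values):

    # Export the IAM user's credentials on the controller node
    # (placeholder values - substitute your own IAM user's keys)
    export AWS_ACCESS_KEY_ID='AKIAXXXXXXXXXXXXXXXX'
    export AWS_SECRET_ACCESS_KEY='xxxxxxxxxxxxxxxxxxxxxxxxxxxx'

    # /etc/ansible/ansible.cfg
    [defaults]
    # path to the dynamic inventory scripts
    inventory = /etc/mydinv
    # default login user for Amazon Linux instances
    remote_user = ec2-user
    # key pair used to launch the instances
    private_key_file = /root/mykey.pem
    host_key_checking = False

    [privilege_escalation]
    become = True
    become_method = sudo
    become_user = root
    become_ask_pass = False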

  • Create a role called ec2setup for launching the EC2 instances.
  • In the tasks file of ec2setup, I have created two tasks: one for the master node and one for the worker nodes.

Here, Jinja templating is used to pull in the values of the EC2 variables, which I have defined in the vars folder.
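As a rough sketch, the vars and tasks files could look like the following. I am using the classic ec2 module that shipped with Ansible 2.9; the AMI ID, subnet, security group, key name, and variable names are my own placeholders:

    # vars/main.yml (placeholder values)
    key_name: mykey
    instance_type: t2.micro
    ami_id: ami-0xxxxxxxxxxxxxxxx    # Amazon Linux 2 AMI for your region
    region: ap-south-1
    subnet_id: subnet-xxxxxxxx
    sg_name: k8s-sg

    # tasks/main.yml
    - name: Launch the K8s master node
      ec2:
        key_name: "{{ key_name }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet_id }}"
        group: "{{ sg_name }}"
        assign_public_ip: yes
        wait: yes
        count: 1
        instance_tags:
          Name: masternode

    - name: Launch the K8s worker nodes
      ec2:
        key_name: "{{ key_name }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet_id }}"
        group: "{{ sg_name }}"
        assign_public_ip: yes
        wait: yes
        count: 2
        instance_tags:
          Name: workernode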

  • This is the main file that calls the ec2setup role to launch the instances for the cluster.
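A minimal version of that file could be:

    # setup.yml - run against localhost to launch the instances
    - hosts: localhost
      roles:
        - ec2setup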

Let's see if the playbook runs well…

Instances are launched…

2. Create Ansible Playbook to configure Docker, K8S Master, and K8S Worker Nodes on the above-created EC2 instances using kubeadm.

I. Master Node

  • Create a role in the same workspace for the master node.
  • Tasks file for the master node:

1. First, we need to install and configure the Docker service on the nodes.
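A sketch of this step, assuming an Amazon Linux 2 AMI where the docker package is available from the default repos:

    - name: Install Docker
      package:
        name: docker
        state: present

    - name: Start and enable the Docker service
      service:
        name: docker
        state: started
        enabled: yes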

2. We need to configure a yum repository for Kubernetes.
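Using the yum_repository module, with the package repo the Kubernetes project documented at the time of writing:

    - name: Configure the yum repository for Kubernetes
      yum_repository:
        name: kubernetes
        description: Kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        enabled: yes
        gpgcheck: yes
        repo_gpgcheck: yes
        gpgkey:
          - https://packages.cloud.google.com/yum/doc/yum-key.gpg
          - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg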

3. Install kubeadm, kubelet, and kubectl, which we need for creating the Kubernetes cluster.

Kubeadm is a tool built to provide kubeadm init and kubeadm join, “fast paths” for creating Kubernetes clusters.

Kubelet is the primary “node agent” that runs on each node and is used to register the nodes in the Kubernetes cluster through the API server. After a successful registration, the primary role of kubelet is to create pods and listen to the API server for instructions.

The kubectl command line tool lets you control Kubernetes clusters.
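The install itself is a single yum task:

    - name: Install kubeadm, kubelet and kubectl
      yum:
        name:
          - kubeadm
          - kubelet
          - kubectl
        state: present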

4. Enable the kubelet service

5. Pull the kubeadm config images
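Steps 4 and 5 as tasks:

    - name: Enable the kubelet service
      service:
        name: kubelet
        enabled: yes

    - name: Pull the images kubeadm needs
      command: kubeadm config images pull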

6. We need to create a daemon.json in /etc/docker to set the cgroup driver to systemd, because kubelet requires systemd as the cgroup driver. So I have created a template of the file, and in the task I have added the source and destination for copying the template into /etc/docker.
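A sketch of the template and the task that places it (the template filename is my assumption):

    # templates/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

    # in tasks/main.yml
    - name: Set Docker's cgroup driver to systemd
      template:
        src: daemon.json
        dest: /etc/docker/daemon.json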

7. Restart the Docker service, since we have changed its configuration

8. Install iproute-tc

9. Update the Kubernetes sysctl conf so that bridged traffic passes through iptables
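Steps 7 to 9 as tasks; the sysctl file and its contents follow the standard kubeadm setup docs:

    - name: Restart Docker to pick up the new cgroup driver
      service:
        name: docker
        state: restarted

    - name: Install iproute-tc
      package:
        name: iproute-tc
        state: present

    - name: Let iptables see bridged traffic
      copy:
        dest: /etc/sysctl.d/k8s.conf
        content: |
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1

    - name: Reload sysctl settings
      command: sysctl --system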

10. Run kubeadm init to initialize the cluster
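Roughly like this; the preflight overrides are my assumption for small t2.micro instances, which have less CPU and RAM than kubeadm expects, and the pod CIDR matches Flannel's default:

    - name: Initialize the cluster with kubeadm
      command: >
        kubeadm init --pod-network-cidr=10.244.0.0/16
        --ignore-preflight-errors=NumCPU
        --ignore-preflight-errors=Mem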

11. Copy the kubeconfig file into the HOME directory

12. Create the overlay network
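Steps 11 and 12, assuming Flannel as the overlay network (the manifest URL is the one the Flannel project published at the time):

    - name: Copy the kubeconfig into the HOME directory
      shell: |
        mkdir -p $HOME/.kube
        cp /etc/kubernetes/admin.conf $HOME/.kube/config
        chown $(id -u):$(id -g) $HOME/.kube/config

    - name: Create the overlay network with Flannel
      command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml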

Here we have finished the tasks for the master node.

II. Worker node

  • Create a role in the same workspace for the worker nodes.
  • Tasks file for the worker nodes:

We have to do the same steps as for the master node, except steps 10 onward (initializing the cluster, copying the kubeconfig, and creating the overlay network), because those steps are not needed on the worker nodes.

This is the main file, where we have called both roles, Masternode and Workernode; a full sketch of it appears after the list below.

  • After the Master role runs, we have to add a task for creating a token on the master node. Here I have given a command to create the token and registered its output in the variable “token”.
  • Now we have one issue: the token is saved in a variable, but we cannot directly pass a variable from the play of one host to the play of another host. So we use the concept of adding a host to the in-memory inventory: use variables to create new hosts and groups in the inventory for use later in the same playbook.
  • Here I have created a dummy host called linkforjoining and set a variable “link” to the variable in which we registered the token, i.e. “token”.
  • For the standard output, use token[‘stdout’].
  • Now, after the worker node role's tasks are done, we need to join the cluster using the token, which is now stored in hostvars.
  • Register the results to see the output.
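Putting the pieces together, the main playbook might look roughly like this. I am assuming kubeadm token create --print-join-command for the token task (it prints the full join command), and tag-based group names, since ec2.py groups instances by their tags; yours may differ:

    # main.yml
    - hosts: tag_Name_masternode
      roles:
        - Masternode
      tasks:
        - name: Create a join command on the master
          command: kubeadm token create --print-join-command
          register: token

        - name: Pass the join command on via a dummy in-memory host
          add_host:
            name: linkforjoining
            link: "{{ token['stdout'] }}"

    - hosts: tag_Name_workernode
      roles:
        - Workernode
      tasks:
        - name: Join the cluster using the command stored in hostvars
          command: "{{ hostvars['linkforjoining']['link'] }}"
          register: joined

        - name: Show the join output
          debug:
            var: joined.stdout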

Now, running the main playbook, let's see if it works…

Token created…

Worker nodes joined…

Playbook ran successfully…

Now check for nodes in the master node by logging in to the master node.

Run: “kubectl get nodes”

Successfully created Kubernetes cluster over EC2 instances.

THANK YOU FOR READING!!!

KEEP LEARNING✌✌😊
