Showing posts with label AWS. Show all posts

Tuesday, September 12, 2023

Deploying an application in a Kubernetes cluster with Amazon EKS

Description: Here I have explained what Amazon EKS is and how to deploy a Kubernetes cluster with Amazon EKS.

What is Amazon EKS?
Amazon EKS (Elastic Kubernetes Service) is a managed Kubernetes service that allows you to run Kubernetes on AWS. In the cloud, Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes, which are responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks.

Prerequisites
Below are the prerequisites to set up Amazon EKS:

  • AWS CLI: You will need at least version 1.16. Follow the URL for setup and version details.
  • kubectl: This command-line utility is used for communicating with the cluster API server. Refer to the URL for setup and version details.
  • AWS IAM Authenticator: To allow authentication with the Kubernetes cluster, you need to set up an IAM user for Kubernetes.
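Before moving on, it is worth confirming that each prerequisite tool is actually installed and on the PATH. A quick sanity check (assuming the binaries are already installed) might look like:

```shell
# Verify the AWS CLI version (it needs to be at least 1.16)
aws --version

# Verify kubectl runs locally (client version only, no cluster required yet)
kubectl version --client

# Verify the IAM authenticator is available
aws-iam-authenticator version
```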


AWS EKS Role: First we need to set up an EKS role in IAM. To create the IAM role, perform the steps below.

  • Open the IAM service in AWS and navigate to Roles.
  • Click Create role.


  • Fill in all the required details in the form: select AWS Service, choose EKS as the service use case, and select EKS - Cluster as the specified service, as in the screenshot.


  • You will see the permission list on the review page.

  • Name the role "eksClusterRole", validate the trusted entities as in the screenshot, and create the role.




AWS IAM User: After creating the EKS role named eksClusterRole, we now need to create an IAM user and set it up on the local machine to run AWS commands.

  • To create the user, navigate to IAM --> Users --> Create User.
  • Fill in all the required details and create the user with Administrator access and EKS service privileges.


     
  • In this example I created a user with the name AWSEKSUSER.
  • After creating the user, add an access key so the AWS APIs can be accessed via the CLI: navigate to the user's properties and click Create access key. Make sure to download the CSV file of credentials.
  • Once the user is created, set it up in the local AWS CLI using the aws configure command-line utility.




kubectl: The kubectl utility is a command-line tool used to communicate with the Kubernetes API server. Here is the URL to set up the kubectl utility.

You can verify the version of kubectl using the kubectl version command.


IAM Role for EKS Node: We also need an IAM role for the EKS worker nodes. To create it, navigate to IAM again and create the role with the required permissions.

Role name: AmazonEKSNodeRole




Set up the Amazon EKS cluster: After fulfilling all the prerequisites, we are going to set up the AWS EKS cluster.

  • Navigate to the EKS service in AWS and click Add cluster.
  • A form will pop up to create the cluster. Fill in all the required details and select the eksClusterRole we created previously.





  • Select the VPC and subnets. In this example we used the default VPC in the us-east-1 region.



  • Select the public endpoint option for cluster access.



  • Now select the components for which you want to enable logging. In this example I am not enabling anything.



  • Select the add-ons to include in the setup. I am taking the default add-ons.



  • Select the version for each add-on. In this example I used the default versions.



  • Review the details and click Create. It will take around 20 minutes, so wait until the process completes.


  • Once the process completes, the cluster will show an Active status.


Node Group
  • To create the node group, navigate to the newly created cluster --> Compute tab.
  • Under the Compute tab there is an option to "Add node group".


  • Once you click Add node group, you will see a form for the node group. Fill in the node group name and the IAM role created for the node, then click Next.



  • The next step is to set the configuration for the node group. Fill in all required details such as AMI, instance type, scaling configuration, etc. In this example I used the Amazon Linux AMI, the t3.medium instance type, a 20 GB disk, and a scaling configuration of 2 for each value.




  • Next, select the subnets included in the node group. The required subnets are selected automatically.
  • Review all the details and click Create. It also takes a few minutes to complete.



  • Once the node group is created, it shows an Active status.



You will see the nodes under the cluster as follows.



Configure the EKS cluster in the AWS CLI: After performing all the above steps, we need to configure and manage the EKS cluster from the AWS CLI. In the previous steps we configured the AWS CLI on the local Linux machine, so we use the same IAM user to manage the EKS cluster.

The first step is to configure the EKS cluster in the AWS CLI; below is the command to do so:

$ aws eks --region us-east-1 update-kubeconfig  --name example-cluster

Note: In the above command, change the region and the name of the cluster. In this example we created a cluster named example-cluster. Once the command is executed, the configuration is exported to the ~/.kube/config file.
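As a quick sanity check (assuming the update-kubeconfig command succeeded), you can confirm that kubectl is now pointed at the new cluster:

```shell
# Show the context that update-kubeconfig added and made current
kubectl config current-context

# List all contexts stored in ~/.kube/config
kubectl config get-contexts
```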


Once the account is configured, you can get node details from the cluster using the command line:

$ kubectl get nodes



Set up the K8s application

Once all the configuration is done, we need to check out the K8s application from the GitHub repository. To check out the application, create a folder and check out the master branch from the GitHub repository given below:

https://github.com/harpal1990/k8-application.git

Once the repository is checked out, navigate to the k8s-specifications directory under the repo. You will find the different files for the Kubernetes application.


Now I am going to run each YAML file one by one in the following sequence:


$ kubectl create -f voting-app-deploy.yaml
$ kubectl create -f voting-app-service.yaml
$ kubectl create -f redis-deploy.yaml
$ kubectl create -f redis-service.yaml
$ kubectl create -f postgres-deploy.yaml
$ kubectl create -f postgres-service.yaml
$ kubectl create -f worker-app-deploy.yaml
$ kubectl create -f result-app-deploy.yaml
$ kubectl create -f result-app-service.yaml
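As an alternative to creating each manifest individually, kubectl can apply every file in the directory in one command. A shorter equivalent of the sequence above (assuming you are in the repository root) would be:

```shell
# Apply all manifests in the k8s-specifications directory at once
kubectl apply -f k8s-specifications/
```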





You can check the result of the deployments and services using the command below:

$ kubectl get deployments,svc


In the above screenshot we can see all the service pods up and running, along with two LoadBalancer-type services.

First I open the public URL of the voting service and select cats to vote:

http://a25ca28a3348a407e8bdd3f912145b48-1964993370.us-east-1.elb.amazonaws.com/




Now, after voting, open the result application using the URL below:

http://a8a7f7a6a4e4044dd89c1f08ecb5707c-548992896.us-east-1.elb.amazonaws.com/






In this way we can deploy a Kubernetes application to Amazon EKS.



Monday, October 31, 2022

Rotate Keys on multiple EC2 instances using Ansible

Description: Here I have explained how to rotate/replace key pairs on EC2 instances using Ansible.

Setup:

  • 2 instances with the same key pair
  • 2 key pairs [one existing and one new]
  • IAM user with Administrator privileges

Procedure:

After creating 2 virtual machines with the same key pair, create one more new key pair file for the replacement.
The key names are as follows:
  • Old-key.pem [current key]
  • New-Key.pem [new key]
Now I am creating one IAM user with Administrator privileges from the AWS console, as follows.

After creating the user, download the CSV file; it will be referenced in the variable file for authentication.

First, create the variable file key.vars as follows:

access_key: "XXXXXX5UIGMCGDXXXXXX"
secret_key: "XXXXXXVxsLGhbdrqz+I2IhnnrG+XXXXXXX"
region: "us-west-2"       # Example: "ap-south-1"
old_key: "Old-key"        # Upload this .pem file in the same directory with 400 permission
new_key: "New-Key"
system_user: "ubuntu"
ssh_port: 22

Note:
  • access_key = IAM user access key
  • secret_key = IAM user secret key
  • region = infrastructure host region
  • old_key = current/existing key name
  • new_key = new key which will replace the old one
  • system_user = ubuntu [I have used Ubuntu as the operating system, so the default user is ubuntu]
  • ssh_port = SSH port of the instances
After creating the variable file, change both key files' permissions to 0400 using the chmod command.
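The permission change can be sketched as below; SSH (and therefore Ansible) will refuse a private key that is readable by other users:

```shell
# Restrict both key files so only the owner can read them
chmod 400 Old-key.pem New-Key.pem

# Confirm the permissions (should print 400 for each file)
stat -c '%a' Old-key.pem New-Key.pem
```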

Now I am going to create main.yml for the key replacement, as follows:



---
- name: "Creation of the Ansible Inventory Of EC2 Instances in which Key To Be Rotated"
  hosts: localhost
  vars_files:
    - key.vars
  tasks:
    # ---------------------------------------------------------------
    # Getting information about the EC2 instances whose key is to be rotated
    # ---------------------------------------------------------------
    - name: "Fetching Details About EC2 Instance"
      ec2_instance_info:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        region: "{{ region }}"
        filters:
          "key-name": "{{ old_key }}"
          instance-state-name: ["running"]
      register: ec2

    # ------------------------------------------------------------
    # Creating an inventory of EC2 instances with the old SSH key pair
    # ------------------------------------------------------------
    - name: "Creating Inventory"
      add_host:
        name: "{{ item.public_ip_address }}"
        groups: "aws"
        ansible_host: "{{ item.public_ip_address }}"
        ansible_port: "{{ ssh_port }}"
        ansible_user: "{{ system_user }}"
        ansible_ssh_private_key_file: "{{ old_key }}.pem"
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
      with_items:
        - "{{ ec2.instances }}"
      no_log: true

- name: "Updating SSH-Key Material"
  hosts: aws
  become: true
  gather_facts: false
  vars_files:
    - key.vars
  tasks:
    - name: "Register current SSH authorized_keys file of the system user"
      shell: cat /home/"{{ system_user }}"/.ssh/authorized_keys
      register: oldauth

    - name: "Creating New SSH-Key Material"
      delegate_to: localhost
      run_once: true
      openssh_keypair:
        path: "{{ new_key }}"
        type: rsa
        size: 4096
        state: present

    - name: "Adding New SSH-Key Material"
      authorized_key:
        user: "{{ system_user }}"
        state: present
        key: "{{ lookup('file', '{{ new_key }}.pub') }}"

    - name: "Creating SSH Connection Command"
      set_fact:
        ssh_connection: "ssh -o StrictHostKeyChecking=no -i {{ new_key }} {{ ansible_user }}@{{ ansible_host }} -p {{ ansible_port }} 'uptime'"

    - name: "Checking Connectivity To EC2 Using Newly Added Key"
      ignore_errors: true
      delegate_to: localhost
      shell: "{{ ssh_connection }}"

    - name: "Executing the uptime command on remote servers"
      command: "uptime"
      register: uptimeoutput

    - debug:
        var: uptimeoutput.stdout_lines

    - name: "Removing Old SSH Public Key and adding New SSH Public Key to authorized_keys"
      authorized_key:
        user: "{{ system_user }}"
        state: present
        key: "{{ lookup('file', '{{ new_key }}.pub') }}"
        exclusive: true

    - name: "Print Old authorized_keys file"
      debug:
        msg: "SSH Public Keys in old authorized_keys file are '{{ oldauth.stdout }}'"

    - name: "Print New authorized_keys file"
      shell: cat /home/"{{ system_user }}"/.ssh/authorized_keys
      register: newauth

    - debug:
        msg: "SSH Public Keys in new authorized_keys file are '{{ newauth.stdout }}'"

    - name: "Renaming new Private Key Locally"
      delegate_to: localhost
      run_once: true
      shell: |
        mv {{ new_key }} {{ new_key }}.pem
        chmod 400 {{ new_key }}.pem

    - name: "Removing Old SSH public key From AWS Account"
      delegate_to: localhost
      run_once: true
      ec2_key:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        region: "{{ region }}"
        name: "{{ old_key }}"
        state: absent

    - name: "Adding New SSH public key to AWS Account"
      delegate_to: localhost
      run_once: true
      ec2_key:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        region: "{{ region }}"
        name: "{{ new_key }}"
        key_material: "{{ lookup('file', '{{ new_key }}.pub') }}"
        state: present

After saving the above file, open a terminal and run the Ansible playbook using the command below. Kindly note that the ansible-playbook command should be run as the root user or with sudo rights.

# ansible-playbook main.yml
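If you want to preview what the playbook would do before it touches the instances, ansible-playbook's standard flags can help. This is only a sketch: --check runs in dry-run mode, and some tasks (such as the shell-based ones above) may not fully simulate in that mode:

```shell
# Dry-run the playbook without modifying the instances
ansible-playbook main.yml --check

# Run for real with verbose output for troubleshooting
ansible-playbook main.yml -v
```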

Now verify the SSH connection with the new key.

Tuesday, December 28, 2021

Create ECS cluster using Amazon ECS CLI

Description: In the previous topic, I explained how to create containers using ECS in the AWS console. In this topic, I am going to explain how to create ECS clusters using Amazon ECS CLI.

Pre-requisites: 

  • IAM user with AdministratorAccess
  • Keypair for accessing the container machine
  • Install the Amazon ECS CLI. We can install it by referencing this link
  • Install and configure the AWS CLI. We can configure it by referencing this link
Configure the Amazon ECS CLI

After installing the Amazon ECS CLI and the AWS CLI, configure the Amazon account either via this link or manually using the aws configure command, as follows.

After user configuration, I check the ecs-cli version for verification, as follows.

Creating cluster configuration

ecs-cli configure --cluster HTTPDCluster --default-launch-type EC2 --config-name HTTPDConfig --region us-east-1

Here

HTTPDCluster = cluster name
HTTPDConfig  = ECS configuration name
us-east-1    = region name

Create a profile using your access key and secret key

ecs-cli configure profile --access-key AWS_ACCESS_KEY_ID --secret-key AWS_SECRET_ACCESS_KEY --profile-name HTTP-Profile

Create the cluster in ECS: After setting up the cluster configuration, it's time to create the cluster in ECS using the ecs-cli up command. There are many options to configure different aspects, such as the number of instances, security group, open port, key pair for SSH access to the machine, etc.

Here I have defined the VPC, subnets, and security group because I have already created them. If you don't define them, they will be created automatically.

ecs-cli up --keypair docker --capability-iam --size 1 --vpc vpc-64ff7119 --subnets subnet-ba16908b,subnet-658e8c28 --instance-type t2.micro --cluster-config HTTPDConfig --ecs-profile HTTP-Profile --security-group sg-0e3d5d8b98b008d77 --port 80

You will find the cluster under ECS.

Create the Docker Compose file: After creating the cluster, I am going to create a Docker Compose file to run an Apache container from the custom Docker Hub image we created in the previous blog.

Here is the docker-compose file (version 3), docker-compose.yml. It publishes port 80 and uses a custom Docker Hub repository.


version: '3'
services:
  web:
    image: harpalgohilon/opensource:httpd
    ports:
      - "80:80"
    logging:
      driver: awslogs
      options:
        awslogs-group: HTTPD-tutorial
        awslogs-region: us-east-1
        awslogs-stream-prefix: web

In version '3', CPU and memory need to be defined in a separate configuration file, so I created a new file for this named ecs-config.yml, as follows:

version: 1
task_definition:
  services:
    web:
      cpu_shares: 100
      mem_limit: 524288000

Deploy the Compose file to the cluster

ecs-cli compose up --create-log-groups --cluster-config HTTPDConfig --ecs-profile HTTP-Profile
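When you are done with the tutorial, the same cluster config and profile can be used to tear everything down again. A possible cleanup sequence (note that the second command deletes the whole cluster, including its EC2 instance) is:

```shell
# Stop the running Compose tasks
ecs-cli compose down --cluster-config HTTPDConfig --ecs-profile HTTP-Profile

# Delete the cluster and its CloudFormation resources
ecs-cli down --force --cluster-config HTTPDConfig --ecs-profile HTTP-Profile
```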

You can find the task details, including the URL.

Browse this URL and you will get the custom page that I used in the Docker image.



Thursday, December 23, 2021

Create an ECS container with an EC2 instance using a custom Docker Hub image

Description: Here I have explained how to create a container with an EC2 instance using a custom Docker Hub image.

In the previous topic, I explained how to create containers in ECS serverless. In this topic, I will create the container with an EC2 instance.

Open the ECS console and click Create Cluster.

Click EC2 Linux + Networking and click Next Step.

Fill in all the required details and click Create.



After some time the container will be created, as follows.
It can also be found under ECS Instances in the cluster.



Create a task definition for the cluster

Select EC2 and click Next.

Add container 

Fill in all the required details and click Add.

Add storage: to add an additional volume, map an EFS file system to a mount point.

For the additional volume, I created one EFS file system and mounted it to the container, as follows.

After completion, you will find the EC2 instance with the container.