Thursday, November 28, 2024

Azure Kubernetes Service (AKS): Creating and Connecting AKS Cluster

Description: In this blog, we are going to discuss the below points:

 

  • Azure Kubernetes Service (AKS)
  • AKS features and benefits
  • Steps to Create Kubernetes Cluster in Azure
  • Connect to the Azure Kubernetes Cluster


Azure Kubernetes Service (AKS)

Azure Kubernetes Service (AKS) is a managed Kubernetes service where Azure manages the master node, while end users are responsible for managing the worker nodes.

With AKS, users can deploy, scale, and manage Docker containers and container-based applications across a cluster of container hosts. A key advantage of AKS is its cost-effectiveness—you only pay for the worker nodes in your clusters, not for the master nodes.

Clusters can be created using various methods, including:

  • The Azure portal
  • The Azure CLI
  • Template-driven deployment options, such as Azure Resource Manager templates and Terraform

AKS Features and Benefits

  • Managed Kubernetes: AKS is a fully managed Kubernetes service provided by Microsoft Azure. It eliminates the need for manual setup, configuration, and maintenance of Kubernetes clusters. Microsoft takes care of the underlying infrastructure, including control plane management, security patches, and updates, allowing you to focus on deploying and managing your applications.
  • Scalability and Elasticity: AKS enables horizontal scaling of applications by automatically adjusting the number of pod replicas based on workload demands. It supports dynamic scaling to handle increased traffic or resource requirements, ensuring optimal performance and resource utilisation.
  • Integrated Developer Tools: AKS seamlessly integrates with Azure DevOps and other popular development tools, facilitating continuous integration and continuous deployment (CI/CD) workflows. It provides integrations with Azure Container Registry (ACR) for easy container image storage and deployment.
  • High Availability and Reliability: AKS provides built-in high availability features, such as multiple availability zones (in supported regions), cluster auto-repair, and automatic upgrades. These features help ensure that your applications are resilient and available even in the event of infrastructure failures or planned maintenance.
  • Security and Compliance: AKS incorporates Azure security features, including Azure Active Directory integration, role-based access control (RBAC), and network security groups. It helps secure your containerised applications and data, ensuring compliance with regulatory requirements.
  • Monitoring and Diagnostics: AKS integrates with Azure Monitor, Azure Log Analytics, and other monitoring tools, providing visibility into your cluster’s health, performance, and logs. You can monitor container metrics, view logs, and set up alerts for proactive issue detection and troubleshooting.
  • Hybrid and Multi-Cloud Support: AKS enables hybrid and multi-cloud deployments by integrating with Azure Arc. This allows you to manage and govern AKS clusters across multiple environments, including on-premises and other cloud providers.


Steps to Create Kubernetes Cluster in Azure

There are multiple ways to create a Kubernetes cluster in Azure:

  • Azure Portal
  • Azure CLI
  • Azure PowerShell
  • Using template-driven deployment options, like Azure Resource Manager templates and Terraform


In this blog, we are going to create the cluster using the Azure portal.
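For reference, a comparable cluster can also be created from the Azure CLI. Below is a minimal sketch; the resource group name, cluster name, region, node count, and VM size are placeholder values, so adjust them to your requirements:

$ az group create --name my-aks-rg --location eastus
$ az aks create \
    --resource-group my-aks-rg \
    --name my-aks-cluster \
    --node-count 2 \
    --node-vm-size Standard_D2s_v3 \
    --generate-ssh-keys

The rest of this post walks through the same configuration in the portal.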

The first step is to search for Kubernetes in the Azure portal and click on Kubernetes Services.



Click on + Create ==> Kubernetes Cluster 


Update all the required details 


  • Give the Resource Group name as per your requirement.
  • Specify a name for your cluster in the Kubernetes cluster name field.
  • Choose a Region in which you want to create your AKS cluster. The control plane will be created in the specified region.
  • Based on the region, select the availability zones.
  • Select the Kubernetes version. Here I am choosing the default, i.e., 1.30.6.





Select the node pool configuration and instance type



Change the node size to D2s v3 (or as per your requirement) and set the min and max node count.
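For reference, the min and max node counts map to the cluster autoscaler, which can also be configured on an existing node pool from the Azure CLI. A sketch using the placeholder names from above (nodepool1 stands in for your pool name):

$ az aks nodepool update \
    --resource-group my-aks-rg \
    --cluster-name my-aks-cluster \
    --name nodepool1 \
    --enable-cluster-autoscaler \
    --min-count 2 --max-count 3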





Change the network configuration as per your requirements




Set up integration with a container registry






Set up monitoring






Security 



Advanced




Review + Create 



Once it is created, you will get the below screen



Click on Go to resource, which takes you to the Kubernetes service



Click on Connect to get the instructions for connecting to the service. In this demo, I am going to use the Azure CLI.
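The typical sequence from the Connect instructions looks like the following (the subscription ID, resource group, and cluster name are placeholders for your own values):

$ az login
$ az account set --subscription <subscription-id>
$ az aks get-credentials --resource-group my-aks-rg --name my-aks-cluster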





After configuring the connection with the above commands, run the kubectl command below and you will get the results:

$ kubectl get nodes




For test purposes, deploy an nginx pod, as shown below.
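A minimal way to do this (the pod name is arbitrary):

$ kubectl run nginx --image=nginx
$ kubectl get pods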


You can also list the node pools, for example with the Azure CLI command below.
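Using the placeholder names from earlier:

$ az aks nodepool list --resource-group my-aks-rg --cluster-name my-aks-cluster -o table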


Congratulations, your AKS cluster is ready to use.

Tuesday, November 26, 2024

Setup Kubernetes cluster with EC2 instance (Ubuntu 22)



Description: In this blog, we are going to set up a Kubernetes cluster with EC2 instances.


Below is the diagram for the setup 



There are many ways to set up a Kubernetes cluster:

  1. Install Kubernetes using Minikube
  2. Install Kubernetes using Kubeadm
  3. Install Kubernetes using Terraform
  4. Install Kubernetes using Kubernetes Operations (kops)
  5. Use a managed Kubernetes service, such as:
     • AWS EKS
     • Google K8s Engine
     • Azure K8s Service

In this example, we are going to set up the K8s cluster with Kubeadm [option 2].

Kubeadm is a tool designed to bootstrap a full-scale Kubernetes cluster. It takes care of all the heavy lifting related to cluster provisioning and automates the process completely.


In the deployment of Kubernetes clusters, two server types are used:


Master:

A Kubernetes Master is responsible for managing the Kubernetes cluster. It handles API calls related to cluster components like pods, replication controllers, services, and nodes. Key components of the master include:

  • Kube-API Server
  • Kube-Controller-Manager
  • Etcd
  • Kube-Scheduler

Node:

A Node provides the run-time environment for containers. It is a worker machine where the actual workloads run. A Kubernetes cluster typically has multiple nodes, and a collection of container pods can span across these nodes.


Server Specification

Server type      Hostname                    Specification
Master           k8s-ubuntu-master-node      t2.medium [4 GB RAM, 2 CPU, 30 GB Disk]
Worker-node-1    k8s-ubuntu-worker-node-1    t2.medium [4 GB RAM, 2 CPU, 30 GB Disk]
Worker-node-2    k8s-ubuntu-worker-node-2    t2.medium [4 GB RAM, 2 CPU, 30 GB Disk]


In order to create a K8s cluster, the following minimum requirements must be met:

Memory:

  • 2 GiB or more of RAM per instance

CPUs:

  • At least 2 CPUs on the control plane instance
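A quick way to check an instance against these requirements once you are logged in to it:

nproc     # should report 2 or more CPUs
free -h   # total memory should be at least 2 GiB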
Launch AWS instances: In this example, I have launched 3 instances with the above specification and the Ubuntu 22 image.

Below are the security groups for the master and worker instances.

Master:






Worker:







Install K8s cluster on Ubuntu 22


Set up the master and worker nodes: Run the below shell script on both the master and worker nodes to set up the prerequisites and kubeadm. Copy the bash script below onto the master and worker machines.


Ref. GitHub URL: https://github.com/harpal1990/setup-k8-Ec2


# Download kubectl and verify its checksum
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check

# Install kubectl system-wide
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Alternative (no sudo required): install to ~/.local/bin instead
# chmod +x kubectl
# mkdir -p ~/.local/bin
# mv ./kubectl ~/.local/bin/kubectl
# and then append (or prepend) ~/.local/bin to $PATH

kubectl version --client

# disable swap
sudo swapoff -a

# Create the .conf file to load the modules at bootup
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

## Install CRIO Runtime
sudo apt-get update -y
sudo apt-get install -y software-properties-common curl apt-transport-https ca-certificates gpg

curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/ /" | sudo tee /etc/apt/sources.list.d/cri-o.list

sudo apt-get update -y
sudo apt-get install -y cri-o

sudo systemctl daemon-reload
sudo systemctl enable crio --now
sudo systemctl start crio.service

echo "CRI runtime installed successfully"

# Add Kubernetes APT repository and install required packages
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update -y
sudo apt-get install -y kubelet="1.29.0-*" kubectl="1.29.0-*" kubeadm="1.29.0-*"
sudo apt-get update -y
sudo apt-get install -y jq

sudo systemctl enable --now kubelet
sudo systemctl start kubelet


Setup Master Node [Only]: 

Initialise the Kubernetes master node: copy the script below and run it on the master node.

# ./k8-master-setup.sh


sudo kubeadm config images pull
sudo kubeadm init

mkdir -p "$HOME"/.kube
sudo cp -i /etc/kubernetes/admin.conf "$HOME"/.kube/config
sudo chown "$(id -u)":"$(id -g)" "$HOME"/.kube/config

# Network Plugin = calico
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml





Generate a token for worker nodes to join: Run the below command on the master node to get the join command for the worker nodes.

# kubeadm token create --print-join-command
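The output is a single join command similar to the one below; the IP address, token, and hash here are placeholders, so use the exact command printed on your master node:

sudo kubeadm join <master-private-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>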



Run the printed join command on both worker nodes to join them to the Kubernetes cluster.



Run the below command on the master node to get the node details after the workers have joined the cluster:

# kubectl get nodes
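The output should look similar to the following (ages are illustrative):

NAME                       STATUS   ROLES           AGE   VERSION
k8s-ubuntu-master-node     Ready    control-plane   15m   v1.29.0
k8s-ubuntu-worker-node-1   Ready    <none>          3m    v1.29.0
k8s-ubuntu-worker-node-2   Ready    <none>          3m    v1.29.0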



Congratulations, K8s is ready; now you can set up your microservice infrastructure.

Saturday, July 13, 2024

How to Set Up Blue Green Deployment on an EC2 Instance for an Angular Application Using AWS CodeDeploy Pipeline with GitHub [Part - 2 ]

Description: In the previous blog, we walked through setting up an in-place deployment on a standalone EC2 instance. In this blog, we are going to design a deployment strategy for an Angular application hosted on Amazon EC2 instances with auto-scaling and a load balancer, covering both in-place and blue-green deployments.

 

  

 

 

Step 1: Set up an AMI image for auto-scaling

The first step is to set up an image for auto-scaling, so we are going to use the same machine that we set up in the previous blog for the standalone deployment.

On the existing machine, I am going to remove all the content from the web root [/var/www/my-angular], as shown below.
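For example (assuming the web root path from the previous blog):

$ sudo rm -rf /var/www/my-angular/*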



 

After cleaning the web root directory, create the AMI image from the AWS console.


Fill in the details and click on Create.


Once the image is created, it appears in the image list.



Step 2: Set up the Auto Scaling group and launch template

Navigate to Auto-Scaling Group  --> Create Auto Scaling Group



Create a launch template and select the image which we created earlier. Also define the instance type, key pair, and security group.


Once all the details are filled in, click on Create launch template; it shows the output as follows.


Select the launch template from the list 


Select the VPC and subnets.


Create a load balancer and attach it to the Auto Scaling group. The load balancer is internet-facing.



Define the scaling capacity; I have defined it as follows:

Min: 2
Desired: 2
Max: 3
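The same capacity can also be set from the AWS CLI; a sketch where the group name is a placeholder:

$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name <asg-name> \
    --min-size 2 --desired-capacity 2 --max-size 3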



Review and create the auto-scaling group


We forgot to add the IAM role, so create a new version of the launch template and add it.


After creating the new version, edit the Auto Scaling group's template setting to use it.

 
 
 
Once the version is updated, delete the existing machines and wait for the new instances to launch.


Step 3: Create the application and deployment group with auto-scaling

Navigate to CodeDeploy --> Deploy --> Application --> Create Application



 



After creating the application, create the deployment group.

Fill in all the required details.



Select the load balancer and click on Create deployment group.


Step 4: Change the deployment group in the pipeline


 


Save the pipeline and release the change. After the pipeline completes, you get the result.


 

Browse the load balancer URL and you will get the output.
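You can also verify from a terminal (the DNS name is a placeholder for your load balancer's DNS name):

$ curl -I http://<load-balancer-dns-name>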


Step 5: Deploy the application using blue-green deployment

For blue-green deployment, create an application.
 


 
Create a deployment group for the blue-green deployment.




Set environment configuration


Change Deployment Settings and create deployment group


Once the deployment group is created, edit the deployment stage, change it to blue-green deployment, and save it.




After the change, release the change. We can see the list of additional instances in the deployment.



After the instances are updated, the new instances replace the old ones.


Once everything is validated, we can terminate the old instances.