Showing posts with label AWS. Show all posts

Thursday, December 23, 2021

Create Container in AWS ECS using custom Docker repository without EC2 instance [serverless]

Description: Here I explain how to create a container in AWS ECS from the console, using a custom Docker image hosted on Docker Hub.

In the previous blog post, I explained how to create a custom image and upload it to Docker Hub. In this post, I will use that custom image to create a container in AWS ECS.

Open the ECS service and click on Get Started

Create a custom container definition and edit the task definition [if you want more resources]




Once you click on Configure, a popup asks for the details that need to be filled in, such as the public repository and image. In this example, I have used harpalgohilon/opensource:httpd, where:

harpalgohilon = Docker Hub user ID
opensource = public repository name
httpd = tag name
Port = 80
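Before pointing ECS at the image, it can help to sanity-check it locally. A minimal sketch, assuming Docker is installed and using the repository/tag above (the container name and host port are arbitrary choices for the test):

```shell
# Pull the custom image from Docker Hub (same repository/tag as above)
docker pull harpalgohilon/opensource:httpd

# Run it locally, mapping container port 80 to host port 8080
docker run -d --name httpd-test -p 8080:80 harpalgohilon/opensource:httpd

# Fetch the page to confirm the web server responds, then clean up
curl -s http://localhost:8080/ | head
docker rm -f httpd-test
```

If the page renders here, the same image should serve correctly once ECS starts it.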

You can also define settings in the Advanced configuration tab, such as CPU, RAM, health checks, etc.
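The same container settings can also be expressed as a task definition JSON. Below is a hypothetical fragment (the family name, CPU/memory sizes, and health check command are illustrative choices, not what the console generates for you):

```json
{
  "family": "httpd-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "httpd",
      "image": "harpalgohilon/opensource:httpd",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3
      }
    }
  ]
}
```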

Now create the service and cluster, fill in all the details, and save

Finally, click on the Create button

After you click Create, it takes some time to provision; after a few moments the container will be created.

To check the container, click on View service; it redirects to the service dashboard.

Now click on Tasks; you will get the list of tasks and their status [Running or Stopped]

Click on a task to see all its details, such as the public IP, private IP, status, etc.

Now browse the public IP in a browser; it will show the custom page I created in the last blog post.



Saturday, May 8, 2021

Create VPC [Virtual Private Cloud] on AWS using Terraform

Description: Here I explain how to create a VPC, along with a subnet and network ACL, on AWS using Terraform.

Below are the Terraform project file and variables file; using these we create the VPC with a subnet and network ACL.

  • Below is the VPC Terraform project file

# vi VPC.tf
# Create VPC/Subnet/Security Group/Network ACL

provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}

# create the VPC
resource "aws_vpc" "Tech_VPC" {
  cidr_block           = var.vpcCIDRblock
  instance_tenancy     = var.instanceTenancy
  enable_dns_support   = var.dnsSupport
  enable_dns_hostnames = var.dnsHostNames
  tags = {
    Name = "Tech VPC"
  }
} # end resource

# create the Subnet
resource "aws_subnet" "Tech_VPC_Subnet" {
  vpc_id                  = aws_vpc.Tech_VPC.id
  cidr_block              = var.subnetCIDRblock
  map_public_ip_on_launch = var.mapPublicIP
  availability_zone       = var.availabilityZone
  tags = {
    Name = "Tech VPC Subnet"
  }
} # end resource

# Create the Security Group
resource "aws_security_group" "Tech_VPC_Security_Group" {
  vpc_id      = aws_vpc.Tech_VPC.id
  name        = "Tech VPC Security Group"
  description = "Tech VPC Security Group"
  # allow ingress of port 22
  ingress {
    cidr_blocks = var.ingressCIDRblock
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
  }
  # allow egress of all ports
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name        = "Tech VPC Security Group"
    Description = "Tech VPC Security Group"
  }
} # end resource

# create VPC Network access control list
resource "aws_network_acl" "Tech_VPC_Security_ACL" {
  vpc_id     = aws_vpc.Tech_VPC.id
  subnet_ids = [aws_subnet.Tech_VPC_Subnet.id]
  # allow ingress port 22
  ingress {
    protocol   = "tcp"
    rule_no    = 100
    action     = "allow"
    cidr_block = var.destinationCIDRblock
    from_port  = 22
    to_port    = 22
  }
  # allow ingress port 80
  ingress {
    protocol   = "tcp"
    rule_no    = 200
    action     = "allow"
    cidr_block = var.destinationCIDRblock
    from_port  = 80
    to_port    = 80
  }
  # allow ingress ephemeral ports
  ingress {
    protocol   = "tcp"
    rule_no    = 300
    action     = "allow"
    cidr_block = var.destinationCIDRblock
    from_port  = 1024
    to_port    = 65535
  }
  # allow egress port 22
  egress {
    protocol   = "tcp"
    rule_no    = 100
    action     = "allow"
    cidr_block = var.destinationCIDRblock
    from_port  = 22
    to_port    = 22
  }
  # allow egress port 80
  egress {
    protocol   = "tcp"
    rule_no    = 200
    action     = "allow"
    cidr_block = var.destinationCIDRblock
    from_port  = 80
    to_port    = 80
  }
  # allow egress ephemeral ports
  egress {
    protocol   = "tcp"
    rule_no    = 300
    action     = "allow"
    cidr_block = var.destinationCIDRblock
    from_port  = 1024
    to_port    = 65535
  }
  tags = {
    Name = "Tech VPC ACL"
  }
} # end resource

# Create the Internet Gateway
resource "aws_internet_gateway" "Tech_VPC_GW" {
  vpc_id = aws_vpc.Tech_VPC.id
  tags = {
    Name = "Tech VPC Internet Gateway"
  }
} # end resource

# Create the Route Table
resource "aws_route_table" "Tech_VPC_route_table" {
  vpc_id = aws_vpc.Tech_VPC.id
  tags = {
    Name = "Tech VPC Route Table"
  }
} # end resource

# Create the Internet Access
resource "aws_route" "Tech_VPC_internet_access" {
  route_table_id         = aws_route_table.Tech_VPC_route_table.id
  destination_cidr_block = var.destinationCIDRblock
  gateway_id             = aws_internet_gateway.Tech_VPC_GW.id
} # end resource

# Associate the Route Table with the Subnet
resource "aws_route_table_association" "Tech_VPC_association" {
  subnet_id      = aws_subnet.Tech_VPC_Subnet.id
  route_table_id = aws_route_table.Tech_VPC_route_table.id
} # end resource
# end vpc.tf

  • Below is the variables file for Terraform. In it, I have used the 172.16.0.0/16 CIDR block
# variables.tf
variable "access_key" { default = "XXXXXXXXX" }
variable "secret_key" { default = "XXXXXXXXXXXXXXXXXXXX" }
variable "region" { default = "us-east-1" }
variable "availabilityZone" { default = "us-east-1a" }
variable "instanceTenancy" { default = "default" }
variable "dnsSupport" { default = true }
variable "dnsHostNames" { default = true }
variable "vpcCIDRblock" { default = "172.16.0.0/16" }
variable "subnetCIDRblock" { default = "172.16.1.0/24" }
variable "destinationCIDRblock" { default = "0.0.0.0/0" }
variable "ingressCIDRblock" {
  type    = list
  default = ["0.0.0.0/0"]
}
variable "egressCIDRblock" {
  type    = list
  default = ["0.0.0.0/0"]
}
variable "mapPublicIP" { default = true }
# end of variables.tf
  • After creating both files, initialize the Terraform project and apply it using the below commands
# terraform init

# terraform apply 

  • Once the process completes, you will see the VPC in the dashboard
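You can also verify from the CLI, assuming the AWS CLI is configured with the same credentials, by filtering VPCs on the Name tag Terraform applied:

```shell
# List the VPC created by Terraform, matching on its Name tag
aws ec2 describe-vpcs \
  --filters "Name=tag:Name,Values=Tech VPC" \
  --query "Vpcs[].{Id:VpcId,CIDR:CidrBlock}" \
  --output table
```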


Wednesday, May 5, 2021

Create S3 Bucket in AWS using Ansible

Description: Here I explain how to create an S3 bucket in AWS using Ansible

Create IAM user from AWS:

  • An IAM user is needed to authorize the Ansible playbook to manage the S3 bucket
  • Open the AWS console and navigate to the IAM service

  • Give S3 full access to the newly created IAM user

  • Once the user is created, download the user details .csv file, which contains the Access Key and Secret Key

Install the Required Ansible Packages on the Ansible Server
  • boto
    # pip install boto
  • boto3
    # pip install boto3
  • python version >= 2.6
    # yum install python 
Create an SSH key for localhost authorization

# ssh-keygen

  • Copy the generated SSH public key into authorized_keys
    # vi /root/.ssh/authorized_keys
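The two key steps above can be sketched end to end. This demonstration uses a throwaway directory instead of /root/.ssh, so it is safe to run anywhere; on the real server the files live under /root/.ssh/:

```shell
# Work in a temporary directory instead of /root/.ssh for the demo
KEYDIR=$(mktemp -d)

# Generate a key pair non-interactively (no passphrase)
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q

# Append the public key to authorized_keys and lock down permissions
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 600 "$KEYDIR/authorized_keys"

# The public key should now be present in authorized_keys
grep -c "ssh-rsa" "$KEYDIR/authorized_keys"
```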


Prepare a playbook to create an S3 bucket named "techblogalbucket" in the "us-east-1" region with public access

# vi Create_Bucket.yml

---
- hosts: localhost
  tasks:
    - name: Create an S3 bucket
      become: true
      aws_s3:
        aws_access_key: XXXXXXXX
        aws_secret_key: XXXXXXXXXXXXXXX
        bucket: techblogalbucket
        mode: create
        permission: public-read
        region: us-east-1

  • Run the playbook using the ansible-playbook command
# ansible-playbook Create_Bucket.yml

  • After the playbook runs successfully, verify the S3 bucket in the AWS console
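If the AWS CLI is configured with the same credentials, the bucket can also be verified without opening the console:

```shell
# List buckets and check that the new one is present
aws s3 ls | grep techblogalbucket

# Or query the bucket's region directly
aws s3api get-bucket-location --bucket techblogalbucket
```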


Monday, December 28, 2020

How to create EC2 instance using Ansible

Description: Here I explain how to create an EC2 instance using Ansible

Create an IAM user from the AWS console

  • Open AWS console and navigate to IAM service 


Install Required Packages on the Ansible Machine

  • Once the user is created successfully, install the below required packages on the Ansible machine
Ansible

# yum install ansible -y

Python 

# yum install python python-devel python-pip

Boto [the Python package that provides an interface to AWS], installed using pip

#  pip install boto


Create Ansible Playbook to Create EC2 instance

  • Add localhost to the Ansible hosts file to create the connection to the AWS console

[webserver]
localhost


  • Create an SSH key for localhost and copy it to authorized_keys
    # ssh-keygen -t rsa


  • Once the key file is created, copy it into authorized_keys as follows
  • Create a playbook for the EC2 instance and paste the below content into the YAML file
# vi ec2.yaml

---
- name: Launching the AWS instance
  hosts: localhost
  tasks:
    - name: Launching the AWS instance
      ec2:
        key_name: ansible
        region: us-east-1
        instance_type: t2.micro
        image: ami-0c582118883b46f4f
        group: Ansible
        count: 2
        aws_access_key: XXXXXXXXXXXXXXXXXXXX
        aws_secret_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Description of the playbook:
  key_name: the key pair created in EC2
  region: the region in which to create the new instance
  instance_type: the EC2 instance type
  image: the image to use for the new instance; you can get the image ID from the EC2 launch console
  group: the security group name to attach to the VM
  count: the number of EC2 instances to create
  aws_access_key: the access key of the IAM user we created at the beginning
  aws_secret_key: the secret key of the IAM user

  • You will get both keys from the AWS IAM console

  • Test the playbook content using the below command (check mode)
# ansible-playbook -C ec2.yaml


  • Once the result shows OK, run the playbook using the ansible-playbook command

# ansible-playbook ec2.yaml


  • You can see the 2 new instances in the list as follows
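To confirm from the CLI as well (assuming aws configure has been run with the same credentials), the running instances can be listed like this:

```shell
# List running instances with their IDs, types, and public IPs
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].{Id:InstanceId,Type:InstanceType,IP:PublicIpAddress}" \
  --output table
```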



Tuesday, December 8, 2020

Sync Up EC2 instance With S3 bucket

Description: Here I explain how to sync an EC2 instance with an S3 bucket

Create an IAM User: First, I am creating an IAM user for authentication and authorization


  • Select the option "Attach existing policy" and search for "AdministratorAccess" and "AmazonEC2FullAccess"
  • Review the policy and click on Create User
  • Download the .csv file and save it to a local path

  • Create an EC2 instance and log on to the machine once the instance is up
  • Run the aws configure command to configure the user credentials; it will ask for the Access Key and Secret Key from the .csv file downloaded in the Create IAM user step, as follows


  • Now install the Apache HTTP server (httpd) using the below command
# yum install httpd



  • Create default.php file in /var/www/html 
Create S3 Bucket 
  • To create S3 Bucket open Amazon S3 console and choose "Create Bucket" 
  • In "Create a Bucket" type a bucket name in the Bucket Name field



  • Create an empty bucket; it looks as follows

Sync EC2 instance with S3 bucket 
  • To sync EC2 content to the S3 bucket, access the SSH console and run the below command
# aws s3 sync /var/www/html/ s3://testsynch/datasync

Description of the above command:

aws s3 sync              -  the sync command
/var/www/html/           -  source path in EC2 where the actual PHP file is placed
s3://testsynch/datasync  -  destination path in the S3 bucket
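Two sync flags are worth knowing before running against real data: --dryrun previews what would be copied without transferring anything, and --delete removes objects in the bucket that no longer exist in the source directory:

```shell
# Preview the sync without copying anything
aws s3 sync /var/www/html/ s3://testsynch/datasync --dryrun

# Mirror the directory, deleting bucket objects removed locally
aws s3 sync /var/www/html/ s3://testsynch/datasync --delete
```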



  • Now you can see default.php under the S3 bucket




Saturday, October 17, 2020

AWS Code Deploy using GitHub

Description: Here I explain what AWS CodeDeploy is and how to implement it with GitHub.

What is AWS Code Deploy?

It is a service that automates code deployments to any instance, whether an EC2 instance or an instance running on-premises. It helps you rapidly release new features and avoid downtime during deployment.

Architecture



Prerequisites:
  1. GitHub repository with a web application [in my case, harpal1990/AppTest]
  2. AWS account

IAM Roles: Two roles are required for AWS CodeDeploy, one for CodeDeploy and another for the EC2 instance.
  • To create role open AWS console, navigate to IAM -- Roles -- Create Role 
  • Create the following IAM roles and attach the policies
Role1:
Name : EC2Role
Permission : AmazonEC2FullAccess, AmazonS3FullAccess
[Allow EC2 instance and AmazonS3 access]

Role2:
Name :  CodeDeployRole
Permission: AWSCodeDeployRole, AmazonS3FullAccess

  • To create an IAM role, navigate to IAM -- Roles -- Create role; once you click on Create role, select the EC2 service
  • Here I have selected the following 2 permissions to allow S3 bucket access; save the role with the name EC2Role and click on Create
AmazonEC2FullAccess, AmazonS3FullAccess


  • Now create another role named CodeDeployServiceRole for CodeDeploy and assign "AWSCodeDeployRole"
AWSCodeDeployRole, AmazonS3FullAccess


  • Once you create the role, edit the policy and modify the trust relationship
  • Update the service to "codedeploy.region.amazonaws.com", substituting your region
  • Attach the IAM role EC2Role to the EC2 instance
Prepare EC2 instance with Code Deploy Agent
  • To create the instance, open the AWS console and navigate to the EC2 service. Use the Amazon Linux 2 AMI


  • Create the EC2 instance with pre-installed packages like Ruby, Python, and aws-cli by passing the below user data script
#!/bin/bash
sudo yum -y update
sudo yum -y install ruby
sudo yum -y install httpd
sudo service httpd start
sudo yum -y install wget
cd /home/ec2-user
wget https://aws-codedeploy-ap-south-1.s3.ap-south-1.amazonaws.com/latest/install
sudo chmod +x ./install
sudo ./install auto
sudo yum install -y python-pip
sudo pip install awscli
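After the instance boots, you can confirm that the agent installed by the script above is actually running:

```shell
# Check that the CodeDeploy agent service is up
sudo service codedeploy-agent status
```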

  • Allow port 80 in the security group for Apache
  • Assign a common tag to the EC2 instance; here I have assigned the tag Name with the value "CodeDeploy"
  • Review and launch the EC2 instance; it will take some time to create.

Create Application 
  • Before creating the application, I am going to create a directory called script in the GitHub repository and put the service start and stop scripts there

  • To create the application, open the AWS console and navigate to Developer Tools -- CodeDeploy -- Applications

  • Once you click on Create application, you will be asked for the application name and compute platform. You have 3 options for the compute platform [EC2/On-premises, AWS Lambda, and Amazon ECS]; for this tutorial I am using EC2/On-premises

  • Create a deployment group and fill in all the required details, such as the name and the service role created for CodeDeploy



  • Enter the tag assigned to the EC2 instance; here I have given Name as the tag key and CodeDeploy as the value, so you can see 2 unique matches found



  • Currently no load balancer is required, so I haven't used one. Finally, click on Create deployment group


  • Create a pipeline for the same application project

  • Once you click on Next, it asks you to connect the source stage. In this demo I have used GitHub as the repository; once you click on GitHub, it asks for credentials and then the repository name


  • The next step is to select a build provider; here I skip it, but you can choose Jenkins or AWS CodeBuild
  • Finally, click on Review and Create.
  • Once it is created successfully, it will upload an appspec.yml file to the repository. Now modify appspec.yml as per your requirements
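A minimal appspec.yml sketch matching the scripts directory created above (the file paths and script names are illustrative; adjust them to your repository):

```yaml
version: 0.0
os: linux
files:
  # Copy the repository's index page into Apache's web root
  - source: /index.html
    destination: /var/www/html/
hooks:
  ApplicationStop:
    - location: script/stop_server.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: script/start_server.sh
      timeout: 300
      runas: root
```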

  • Add the deploy stage to deploy the application, and fill in all the required details

  • Review all settings and click on Create pipeline. It will take some time to create.
  • Once the pipeline is created, create a deployment and run the pipeline. Once it runs successfully, it will upload the content to the location on each EC2 instance defined in the appspec.yml file.
  • To verify, browse to one of the EC2 instance IP addresses; it shows the index page as follows
  • Now try modifying the index page and re-run the pipeline; you will see the changes.