Infrastructure on AWS (using CloudFront, EC2, EFS, S3) with Terraform

Sandesh Jain
Sep 5, 2020 · 7 min read

Namaste to all my dear visitors!!

In this article, our main emphasis is on EFS (Amazon Elastic File System, the elastic file service by Amazon):

We will learn about the EFS concepts using a task!!

TASK 2

  1. Create a Security Group that allows port 80.
  2. Launch an EC2 instance.
  3. In this EC2 instance, use an existing key or the key we create, along with the Security Group created in step 1.
  4. Launch one volume using the EFS service and attach it to your VPC, then mount that volume onto /var/www/html.
  5. The developer has uploaded the code into a GitHub repo, which also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

To know about AWS EBS, check out my earlier article.

We will accomplish the EFS task step-by-step!

EFS

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. EBS is also used for storage, but an EBS volume can only be attached to a single OS. Suppose we have 100 OSes: we would have to create, attach and mount a separate volume on every single OS. To overcome this and get the benefit of a scalable, easy-to-use service, AWS EFS can be used. EFS allows you to mount the same file system across multiple Availability Zones and instances, i.e. we can mount the EFS on a directory of as many different OSes as we want.

Prerequisite:

AWS account, AWS CLI, Terraform, and an IAM user in AWS so that we can log in with those credentials.

Download the AWS CLI and add it to the environment variables (PATH).

We will build the infrastructure using Terraform, an open-source infrastructure-as-code tool created by HashiCorp. The configuration is written in HCL (HashiCorp Configuration Language).

In Command Prompt, log in with the IAM user's credentials using aws configure.
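Terraform also needs an AWS provider block to know which account and region to use. Here is a minimal sketch, assuming the CLI profile created with aws configure is named "myprofile" and the region is ap-south-1 (both are assumptions, change them to match your own setup):

# AWS provider configuration (assumed values: profile "myprofile", region "ap-south-1")
provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}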

Creating a private key

resource "tls_private_key" "site_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "local_file" "private_key" {
content = tls_private_key.site_key.private_key_pem
filename = "my_key.pem"
file_permission = 0400
}
resource "aws_key_pair" "site_key" {
key_name = "mykey"
public_key = tls_private_key.site_key.public_key_openssh
}

Output in AWS

Creating a Security Group that allows port 22 (SSH), port 80 (HTTP over TCP) and port 2049 (NFS protocol)

resource "aws_security_group" "firewall_security" {
name = "secured"
description = "https, ssh Protocols"
ingress {
description = "http-permit"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "ssh-permit"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "NFS"
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = [ "0.0.0.0/0" ]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "secured"
}
}

NFS protocol: This works the way a distributed OS works; it is a client/server protocol that lets a computer store and update files on a remote computer as though they were on the user's own machine.

Creating an instance in AWS and provisioning the OS with httpd (for the web server), git (for cloning the code from GitHub) and amazon-efs-utils (for EFS)

resource "aws_instance" "myin" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = aws_key_pair.site_key.key_name
security_groups = [ "secured" ]
depends_on = [
aws_key_pair.site_key,
aws_security_group.firewall_security,
]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.site_key.private_key_pem
host = aws_instance.myin.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
"sudo yum install amazon-efs-utils -y" ,
]
}
tags = {
Name = "Divergent"
}
}

So our OS named Divergent is created with the httpd service enabled and the required packages installed.

Creating a volume using the EFS service

resource "aws_efs_file_system" "efs" {
depends_on = [
aws_instance.myin,
]
creation_token = "efs"
tags = {
Name = "efs"
}
}
output "efs" {
value = aws_efs_file_system.efs
}

Attach the EFS to our VPC in the instance's subnet, then mount that volume onto /var/www/html.

resource "aws_efs_mount_target" "mount-efs" {
depends_on = [
aws_efs_file_system.efs,
]
file_system_id = aws_efs_file_system.efs.id
subnet_id = aws_instance.myin.subnet_id
security_groups = ["${aws_security_group.firewall_security.id}"]
}

Here we have created a mount target for the EFS in the subnet of the OS we launched, attached along with the security group we created for NFS.

Mounting the EFS on /var/www/html and cloning the code we have written in our GitHub repository

resource "null_resource" "nullremote3"  {
depends_on = [
aws_efs_file_system.efs,
aws_efs_mount_target.mount-efs
]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.site_key.private_key_pem
host = aws_instance.myin.public_ip
}
provisioner "remote-exec" {
inline = [
"efs_id=${aws_efs_file_system.efs.id} " ,
"sudo mount -t efs $efs_id:/ /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/sandesh2000/multicloud-formation.git /var/www/html/"
]
}
}

Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

Amazon S3 (Simple Storage Service), or object storage as a service, is highly secure and durable cloud storage: an object store where images, files, videos and software can be kept. Amazon provides an 11 nines durability guarantee that data stored in S3 is safe. In S3, store things that do not require frequent editing, as we might have to download a multi-TB file just to edit even the smallest part of it.

resource "aws_s3_bucket" "jainsandesh2704" {
bucket= "jainsandesh2704"
acl = "public-read"
force_destroy=true
tags = {
Name="databucket"
}
}
resource "aws_s3_bucket_object" "object1" {
depends_on = [
aws_s3_bucket.jainsandesh2704,
]
key = "COVID_IMAGE"
bucket = aws_s3_bucket.jainsandesh2704.bucket
acl = "public-read"
source="C:\\Program Files\\Desktop\\covid.jpg"
etag = filemd5("C:\\Program Files\\Desktop\\covid.jpg")
}
locals {
s3_origin_id = "myS3"
}

Note: Always give a unique name (trick: use some complex name), as the bucket name must be unique across all buckets in all AWS data centres.

We have given public access to the object (the image uploaded from our local system) in S3 by setting acl = "public-read".

Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html

AWS CloudFront is a web service that speeds up delivery of content to the user. AWS has many edge locations; the first time a particular piece of data is requested, a copy of that data is made and stored in the edge location nearest to that user. The next time any user asks for the same data, it is served from that edge location, so latency is reduced to the lowest point and the user experience improves.
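The distribution points at the S3 bucket as its origin. A minimal sketch of the aws_cloudfront_distribution resource for this step, reusing the s3_origin_id local defined earlier, could look like the block below; the cache-behaviour values are assumptions and can be tuned to your needs:

resource "aws_cloudfront_distribution" "my_s3_distribution" {
  depends_on = [
    aws_s3_bucket_object.object1,
  ]

  # The S3 bucket holding the image is the origin of this distribution.
  origin {
    domain_name = aws_s3_bucket.jainsandesh2704.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      # Empty OAI, since the object itself is public-read.
      origin_access_identity = ""
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Once created, the distribution exposes a domain_name attribute, and that is the URL we use to serve the image from the nearest edge location.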

After the CloudFront distribution is successfully built, the CloudFront URL is updated in the code in /var/www/html on our OS, using a remote-exec provisioner like the ones used above. This adds the URL so that the S3 object is served through the CloudFront distribution.

http://${self.domain_name}/${aws_s3_bucket_object.object1.key}');</style>\" >> /var/www/html/covid.html
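The line above is only the tail end of that command. A hedged reconstruction of the full provisioner, assumed to sit inside the aws_cloudfront_distribution resource so that self.domain_name resolves to the distribution's domain, could look like this (the exact HTML/CSS appended to covid.html is an assumption based on the fragment above):

# Assumed to live inside resource "aws_cloudfront_distribution" "my_s3_distribution"
connection {
  type        = "ssh"
  user        = "ec2-user"
  private_key = tls_private_key.site_key.private_key_pem
  host        = aws_instance.myin.public_ip
}

provisioner "remote-exec" {
  # Append a <style> block that loads the S3 image through the CloudFront domain.
  inline = [
    "echo \"<style>body{background-image: url('http://${self.domain_name}/${aws_s3_bucket_object.object1.key}');}</style>\" | sudo tee -a /var/www/html/covid.html",
  ]
}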

Now we can add some optional things that will print everything we want to see and help us conclude our output.

output  "instance_ip" {
value = aws_instance.myin.public_ip
}
resource "null_resource" "null_local" {depends_on = [
aws_cloudfront_distribution.my_s3_distribution,
]
provisioner "local-exec" {
command = "start chrome ${aws_instance.myin.public_ip}/covid.html"
}
}

That was the explanation of the code. Write all of it in a file with the .tf extension to run it with Terraform.

All this is a one-time investment; now we can reuse this code as many times as we want, and it will deploy the full website in just a few commands.

terraform init: This installs all the backend plugins required for the provider we use (here, AWS).

terraform apply --auto-approve: This runs the full code, does all the pre-checks and deploys the full setup in a single command.

Once everything is checked and established, our site is deployed completely; since we have used start chrome with the URL, the site opens automatically in the browser.

terraform destroy --auto-approve: To completely destroy the setup in one command, we use this command.

Our complete setup has been destroyed from AWS. So we can now reuse this code as many times as we want, share it with our friends too, and run it with just a few commands.

Thank you everyone for spending your precious time to grasp something from this article!!

For any query feel free to bother me:) >> Sandesh Jain
