Deploying a webserver on AWS EC2, integrated with EFS, S3, and CloudFront, automated with Terraform

Anurag Mittal
11 min read · Dec 30, 2020

Problem Statement
1. Create a key pair and a security group that allows port 80.
2. Launch an EC2 instance.
3. On this EC2 instance, use the key and security group created in step 1.
4. Launch one volume using the EFS service, attach it to your VPC, then mount that volume on the “/var/www/html” folder.
5. The developer has uploaded the code into a GitHub repo, and the repo also has some images.
6. Copy the GitHub repo code into “/var/www/html”.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in “/var/www/html”.

EBS (block storage)

As we know, if the data lives only with the OS, then when the OS terminates we lose all of it. EBS gives us persistent block storage that acts like a pen drive, but even this persistent EBS has certain limitations:
1. A single volume can store a maximum of 16 TiB.
2. It is tied to one availability zone: if you launch the volume in Mumbai 1a and your OS is in Mumbai 1b, you cannot use this “pen drive”, because it is zonal.
3. You have to attach this pen drive to the OS, then create a partition and format it. If the hard disk or pen drive gets corrupted, you lose all the data.
4. At a time, you can attach this hard disk to only a single instance (see the sketch after this list).
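To make limitations 2 and 4 concrete, here is a minimal Terraform sketch. It is not part of this project; the volume size, availability zone, and instance name are hypothetical.

# Hypothetical sketch: an EBS volume lives in one availability zone
# and can be attached to only one instance at a time.
resource "aws_ebs_volume" "data" {
availability_zone = "ap-south-1a" # must match the instance's AZ
size              = 10            # GiB; the per-volume ceiling is 16 TiB
}
resource "aws_volume_attachment" "data_attach" {
device_name = "/dev/sdh"
volume_id   = aws_ebs_volume.data.id
instance_id = aws_instance.example.id # a single instance only
}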

Consider a situation in which a developer writes code and updates it on an EBS volume connected to one instance, but forgets to update the code on another EBS volume connected to a second instance running the same website. In this case clients will see two different versions of the site, which is not good.

Why do we use S3 (object storage)?

1. We can store as much data as we want; there is no fixed limit.

2. It is a global service: if you create an S3 bucket in, say, North Virginia and you want to use it from an instance running in Mumbai 1c, you can do that.
3. You don’t need to attach S3 to the instance; just use the URL produced by S3. There is no need to format it or create partitions, because the need is simple: we just want to store data so that clients can download or use it (a small sketch of this follows the list).
4. At a time, you can provide S3 storage to any number of instances; there is no limit.
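Here is a rough Terraform sketch of point 3, with hypothetical names (the real bucket for this project is created in Step 3): the object is simply addressed by a URL, with no attach, partition, or format step.

# Hypothetical sketch: store an object in S3 and hand out its URL.
resource "aws_s3_bucket" "assets" {
bucket = "example-assets-bucket" # bucket names are globally unique
acl    = "public-read"
}
resource "aws_s3_bucket_object" "logo" {
bucket = aws_s3_bucket.assets.bucket
key    = "logo.png"
source = "logo.png"
acl    = "public-read"
}
# Any number of instances (or browsers) can fetch this URL directly.
output "logo_url" {
value = "https://${aws_s3_bucket.assets.bucket_regional_domain_name}/${aws_s3_bucket_object.logo.key}"
}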

Limitations of S3 (object storage):
It does not have a file system.

File system:
Suppose you have a device with 1 GB of RAM and you want to watch a movie of 4 GB. It might look like you cannot watch the movie because of the limited RAM, but what actually happens is that only the small part of the movie you are currently watching is loaded into RAM; it is then removed and the next part is loaded, and this process continues. A file system is what makes this possible.
From S3 a client can upload or download a file in a bucket, but there is no option to edit the file in place. To edit a file, the client first has to download the whole file from the S3 bucket to the local system and edit it there, because the local system has a file system (a small Terraform sketch of this replace-the-whole-object behaviour follows).
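As a hypothetical sketch: since S3 objects are replaced whole rather than edited in place, Terraform’s etag argument (set to the file’s MD5) is one way to force a re-upload whenever the local copy changes. The bucket and file names here are assumptions.

# Sketch: re-upload the whole object whenever the local file changes.
resource "aws_s3_bucket_object" "page" {
bucket = aws_s3_bucket.assets.bucket # hypothetical bucket from the sketch above
key    = "index.html"
source = "index.html"
etag   = filemd5("index.html")
}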

EFS (Elastic File System)

With EFS we have a centralized storage in which we can store data or code. We can connect this centralized storage to multiple instances via the NFS protocol. When a developer changes the code stored there, every EC2 instance immediately sees the changed code from this centralized storage, which is not possible with S3.

This centralized storage acts as the NFS server and the EC2 instance is the NFS client, connecting to this server for storage via the NFS protocol (a hypothetical mount sketch follows).
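The sketch below is an assumption-heavy illustration of that client/server relationship; it mounts the file system created in Step 2 over plain NFSv4.1, whereas the real instance in Step 4 uses the amazon-efs-utils mount helper instead.

# Hypothetical sketch: an EC2 instance acting as an NFS client for EFS.
resource "aws_instance" "nfs_client" {
ami           = "ami-0e306788ff2473ccb" # Amazon Linux 2 (assumed)
instance_type = "t2.micro"
# Mount the shared EFS file system over NFSv4.1 at boot.
user_data = <<-EOF
#!/bin/bash
yum install -y nfs-utils
mkdir -p /var/www/html
mount -t nfs4 -o nfsvers=4.1 ${aws_efs_file_system.httpd_efs.dns_name}:/ /var/www/html
EOF
}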

The main difference between EBS and EFS is that an EBS volume is accessible from only a single EC2 instance in one availability zone, while EFS lets you mount the same file system from many instances across availability zones at the same time.

Note: Amazon provides block storage (EBS) over the network using the iSCSI or Fibre Channel protocol.
Amazon provides object storage (S3) over the network using the HTTP or HTTPS protocol.

CloudFront

Suppose a startup is running. This startup has a website on some webserver.

The website is running in Mumbai, and the S3 bucket is in Mumbai too. If clients in the US want to access the images in that S3 bucket, they can, but because of the long distance between the clients and S3, they will see some latency.

The client’s network packet first goes to Mumbai, and the response then travels back to the US; this round trip causes latency.

What we want is to deliver the content close to the client. The service that does this is a CDN: a Content Delivery Network.

To reduce or minimize this latency, AWS created a content delivery network.

Let me explain in detail…..

1. AWS has created edge locations, small data centers spread around the world. When a request comes from the client side for content that lives in an S3 bucket running remotely somewhere, the CDN connects the client to their nearest edge location and they get the content from there; the client never needs to go to the region where S3 is running.

In AWS, the CDN service is provided by CloudFront.

An edge location is meant only to deliver content.

2. The client first goes to the edge location. The edge location checks whether the file is on its own system; if it is, it serves it to the client. If not, the edge location goes to S3 on behalf of the client, gets the image or file from S3, first copies it into its own storage, and then serves it to the client.

This copy, or local copy, is called a cache.

TTL: Time To Live

This local copy, or cache, at the edge location remains only for a limited period of time; by default it is 1 day.

If the client finds the local copy, or cache, at the edge location, that is a hit.

If not, it is a miss; in this case the edge location forwards the request.

Wherever the edge location gets the content from is known as the origin.

When content is served through CloudFront, AWS provides a unique URL through which the client connects to the nearest edge location to get the content (see the sketch below).
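In Terraform terms, that unique URL is the distribution’s domain_name attribute. Here is a small sketch, referring to the CloudFrontAccess resource defined later in Step 3 and the am3.jpeg object uploaded there, of exposing the URL clients should use:

# Sketch: the CloudFront URL clients use instead of the S3 regional URL.
output "cdn_image_url" {
value = "https://${aws_cloudfront_distribution.CloudFrontAccess.domain_name}/am3.jpeg"
}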

AWS also builds fault tolerance into this:

If a client connects through this URL to the nearest edge location and that edge location has failed or is down, the same URL automatically redirects the client to the second-nearest edge location.

Let’s start the project.

Prerequisites

  1. AWS account
  2. AWS CLI V2 in your host
  3. Terraform installed in your host

Step 1. Create one file, with the .tf extension, for the AWS key pair, the security group for the instance, and the security group for EFS.

In this file we first have to declare who the provider is. Here the provider is AWS.

The security group for the instance contains two ingress rules, which allow incoming traffic on port 22 for SSH and on port 80 for HTTP connections.

provider "aws" {
region = "ap-south-1"
profile = "mittal"
}
data "aws_vpc" "default_vpc" {
default = true
}
data "aws_subnet_ids" "default_subnet" {
vpc_id = data.aws_vpc.default_vpc.id
}
// Creating RSA key
variable "EC2_Key" {
default = "httpdserverkey"
}
resource "tls_private_key" "httpdkey" {
algorithm = "RSA"
rsa_bits = 4096
}
// Creating AWS key-pair
resource "aws_key_pair" "generated_key" {
key_name = var.EC2_Key
public_key = tls_private_key.httpdkey.public_key_openssh
}
// Creating security group for Instance
resource "aws_security_group" "httpd_security" {
depends_on = [
aws_key_pair.generated_key,
]
name = "httpd-security"
description = "allow ssh and httpd"
vpc_id = data.aws_vpc.default_vpc.id

ingress {
description = "SSH Port"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTPD Port"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "httpdsecurity"
}
}
// Creating Security group for EFS
resource "aws_security_group" "efs_sg" {
depends_on = [
aws_security_group.httpd_security,
]
name = "httpd-efs-sg"
description = "Security group for efs storage"
vpc_id = data.aws_vpc.default_vpc.id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 0
protocol = "-1"
security_groups = [aws_security_group.httpd_security.id]
}
}

Step 2. Create an EFS cluster

// Creating EFS cluster

resource "aws_efs_file_system" "httpd_efs" {
depends_on = [
aws_security_group.efs_sg
]
creation_token = "efs"
tags = {
Name = "httpdstorage"
}
}
resource "aws_efs_mount_target" "efs_mount" {
depends_on = [
aws_efs_file_system.httpd_efs
]
for_each = data.aws_subnet_ids.default_subnet.ids
file_system_id = aws_efs_file_system.httpd_efs.id
subnet_id = each.value
security_groups = ["${aws_security_group.efs_sg.id}"]
}
  • Here I have used the “aws_efs_file_system” resource to provision EFS storage on AWS.
  • After provisioning is done, the file system has to be made reachable from the specific subnets where this cluster will work. That is why we used “aws_efs_mount_target” and told it to pick all the subnets under the default VPC.
  • Also, in the security groups field we picked the security group created previously for the EFS cluster (an optional output sketch follows).
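As an optional extra, not required by the project, exposing the file system id and DNS name as outputs makes the mount step easier to verify from the CLI later:

# Optional outputs for the EFS file system created above.
output "efs_id" {
value = aws_efs_file_system.httpd_efs.id
}
output "efs_dns_name" {
value = aws_efs_file_system.httpd_efs.dns_name
}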

Step 3. Launching S3 and CloudFront:

// Creating S3 bucket.
resource "aws_s3_bucket" "httpds3" {
bucket = "rg-web-bucket"
acl = "public-read"
}
// Putting Objects in S3 Bucket
resource "aws_s3_bucket_object" "s3_object" {
bucket = aws_s3_bucket.httpds3.bucket
key = "am3.jpeg"
source = "C:/Users/ANURAG MITTAL/Desktop/am3.jpeg"
acl = "public-read"
}
// Creating CloudFront Distribution.
locals {
s3_origin_id = aws_s3_bucket.httpds3.id
}
resource "aws_cloudfront_distribution" "CloudFrontAccess" {depends_on = [
aws_s3_bucket_object.s3_object,
]
origin {
domain_name = aws_s3_bucket.httpds3.bucket_regional_domain_name
origin_id = local.s3_origin_id
}
enabled = true
is_ipv6_enabled = true
comment = "s3bucket-access"
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
# Cache behavior with precedence 0
ordered_cache_behavior {
path_pattern = "/content/immutable/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD", "OPTIONS"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
headers = ["Origin"]
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 86400
max_ttl = 31536000
compress = true
viewer_protocol_policy = "redirect-to-https"
}
# Cache behavior with precedence 1
ordered_cache_behavior {
path_pattern = "/content/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
compress = true
viewer_protocol_policy = "redirect-to-https"
}
price_class = "PriceClass_200"
restrictions {
geo_restriction {
restriction_type = "blacklist"
locations = ["CA"]
}
}
tags = {
Environment = "production"
}
viewer_certificate {
cloudfront_default_certificate = true
}
retain_on_delete = true
}

Step 4. Launching the EC2 instance:

  • I used the Amazon Linux 2 AMI along with details like the instance type, key pair, security group, tags, etc.
  • Using the remote-exec provisioner, I installed the required software such as httpd and git and then started the httpd service.
  • I also installed “amazon-efs-utils”, as this package is needed to communicate with the EFS cluster, and used the mount command to mount the EFS cluster on the “/var/www/html” folder of the EC2 instance.
  • Finally, I copied the GitHub HTML files and deployed them on the webserver.
// creating the 1st EC2 Instance

resource "aws_instance" "HttpdInstance" {
depends_on = [
aws_efs_file_system.httpd_efs,
aws_efs_mount_target.efs_mount,
aws_cloudfront_distribution.CloudFrontAccess,
]
ami = "ami-0e306788ff2473ccb"
instance_type = "t2.micro"
key_name = var.EC2_Key
security_groups = [ "${aws_security_group.httpd_security.name}" ]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.httpdkey.private_key_pem
host = aws_instance.HttpdInstance.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd git -y",
"sudo systemctl restart httpd",
"sudo yum install -y amazon-efs-utils",
"sudo mount -t efs -o tls ${aws_efs_file_system.httpd_efs.id}:/ /var/www/html",
"sudo git clone https://github.com/anurag08-git/hmc-task2.git /var/www/html",
"echo '<img src='https://${aws_cloudfront_distribution.CloudFrontAccess.domain_name}/am3.jpeg' width='390' height='480'>' | sudo tee -a /var/www/html/anurag.html",
]
}
tags = {
Name = "HttpdServer"
}
}

Step 5. Opening the website in Chrome

This code will execute only once aws_instance.HttpdInstance has been created successfully. It opens our website in the Chrome browser.

// Finally, opening the browser on that particular html page to see how it's working.
resource "null_resource" "ChromeOpen" {
depends_on = [
aws_instance.HttpdInstance,
]
provisioner "local-exec" {
command = "chrome ${aws_instance.HttpdInstance.public_ip}/anurag.html "
}
}

OK, so let’s start building the whole infrastructure with just a few commands.

terraform init

This downloads all the required plugins for the providers and initializes the working directory.

terraform validate

This checks whether the configuration we want to set up is valid or not.

terraform apply --auto-approve

This sets up the whole infrastructure.

The S3 bucket, AWS key pair, S3 bucket object, security group for the instance, and security group for EFS have been created successfully.

EFS and CloudFront have been created successfully.

The Amazon Linux instance has been created, the necessary dependencies installed, and the code downloaded from GitHub successfully.

Hence, you can see that the local system has opened a browser and launched your webserver with the necessary files.

You can check from the AWS web console that CloudFront has been created successfully.

The EC2 instance is running fine.

The S3 bucket holds the image as an object.

EFS has been created successfully

Key pair has been created successfully

Security group for instance has been created successfully

Security group for EFS has been created successfully

terraform destroy  --auto-approve

This deletes or destroys the whole infrastructure built by Terraform.
