Deploying Flask Application on AWS EC2 using Load Balancer (ELB) and Auto Scaling Group (ASG)

Alejandro Cora González
8 min read · Jan 26, 2020


Sometimes a serverless approach using Lambda functions isn’t the best fit for a project’s needs. When you need to run your own EC2 instances, a recommended solution is to place them behind a Load Balancer (ELB) and an Auto Scaling Group (ASG) so they can respond effectively to increasing or decreasing workload.

In this demo project, we’ll learn:

  • How to create our Amazon Machine Image (AMI).
  • How to use Terraform to deploy our Infrastructure as code (IaC).
  • How to deploy a Flask application using Gunicorn and Nginx.
  • How to use Elastic Load Balancer, Auto Scaling Group and EC2 instances for a classic architecture solution.

First of all, we need to create a security group that will be used when launching the EC2 instance.

# security_groups.tf
resource "aws_security_group" "security_group_ec2_instances" {
  name        = "security-group-ec2-instances"
  description = "Security group for EC2 instances..."

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

We’ll use Terraform to manage the required resources, although you could also create them manually from the AWS console.

$ terraform init
$ terraform apply

Once the security group has been created, you need to launch an EC2 instance.

We’ll use the “Amazon Linux 2 AMI”. An AMI is a template that contains the software configuration (operating system and applications) required to launch your instance.

We’ll use “t2.micro” as the instance type because it is free tier eligible and is enough for this example project.

The “Configure Instance” and “Add Storage” steps can keep the default values. You can also add some tags to your instance.

In the “Configure Security Group” step, select the security group created earlier. Also, create and download a new key pair to connect to the EC2 instance via SSH.

When the instance is running, it’s time to connect via SSH using the key file downloaded when the EC2 instance was created. First, you need to restrict the permissions on the key file.

$ chmod 0400 /home/alejandro/Downloads/ec2-access-key.pem
$ ssh ec2-user@3.82.209.22 -i /home/alejandro/Downloads/ec2-access-key.pem

At this point we are ready to install and configure everything inside our EC2 instance, following these steps:

Install Python 3

$ sudo yum update          # First, perform the system update...
$ sudo yum install python3

Download the Flask application

You can download (clone) the example Flask application we provide here (the same code repository linked at the end of this post), or you can use your own.

$ git clone https://github.com/alekglez/flask_app_deploy_ec2_elb_asg.git
# Rename the folder...
$ mv flask_app_deploy_ec2_elb_asg flask-application

Install requirements

$ cd flask-application
# Install virtualenv and create the virtual environment...
$ sudo pip3 install virtualenv
$ virtualenv --python=python3 .venv
$ source .venv/bin/activate
# Install requirements...
$ pip install -r requirements.txt

Install Nginx

Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. It accelerates content and application delivery, improves security, and provides availability and scalability for some of the busiest websites on the Internet.

$ sudo amazon-linux-extras install nginx1.12

Configure Gunicorn as system service

At this point, Gunicorn has already been installed through the requirements file, so you only need to create a gunicorn.service file in the /etc/systemd/system directory.

$ sudo touch /etc/systemd/system/gunicorn.service
$ sudo nano /etc/systemd/system/gunicorn.service

The file must contain the following content:

#  /etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon serving the Flask application
After=network.target
[Service]
User=ec2-user
Group=nginx
WorkingDirectory=/home/ec2-user/flask-application
ExecStart=/home/ec2-user/flask-application/.venv/bin/gunicorn --workers 3 --bind unix:/home/ec2-user/flask-application/flask-application.sock wsgi:app
[Install]
WantedBy=multi-user.target
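
Note that the ExecStart line points Gunicorn at wsgi:app, i.e. a wsgi.py module exposing the Flask instance as app. As a minimal sketch, assuming the Flask instance lives in app.py as app (adjust the import to the repository’s actual layout), such a module could look like:

# wsgi.py -- entry point referenced by "wsgi:app" in the gunicorn unit file.
# Assumes the Flask instance is defined in app.py as "app"; adjust as needed.
from app import app

if __name__ == "__main__":
    # Only used when running the module directly for a quick local check.
    app.run()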

Then reload systemd, and enable and start the new gunicorn service:

$ sudo systemctl daemon-reload
$ sudo systemctl enable gunicorn
$ sudo systemctl start gunicorn

To check that the gunicorn service is running properly:

# Run this command...
$ sudo systemctl status gunicorn
# You should see something like this...
gunicorn.service - gunicorn daemon serving the Flask application
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: disabled)
Active: active (running) since sáb 2020-01-25 12:56:07 UTC; 9s ago
Main PID: 3914 (gunicorn)
CGroup: /system.slice/gunicorn.service
├─3914 /home/ec2-user/flask-application/.venv/bin/python3 /home/ec2-user/flask-application/.venv/bin/gunicorn --workers 3 --bind unix:/home/ec2-user/flask-application/flask-application.sock wsgi:app
├─3917 /home/ec2-user/flask-application/.venv/bin/python3 /home/ec2-user/flask-application/.venv/bin/gunicorn --workers 3 --bind unix:/home/ec2-user/flask-application/flask-application.sock wsgi:app
├─3918 /home/ec2-user/flask-application/.venv/bin/python3 /home/ec2-user/flask-application/.venv/bin/gunicorn --workers 3 --bind unix:/home/ec2-user/flask-application/flask-application.sock wsgi:app
└─3919 /home/ec2-user/flask-application/.venv/bin/python3 /home/ec2-user/flask-application/.venv/bin/gunicorn --workers 3 --bind unix:/home/ec2-user/flask-application/flask-application.sock wsgi:app

Configure Nginx

$ sudo nano /etc/nginx/nginx.conf

In /etc/nginx/nginx.conf, make the following modifications inside the server block:

server {
    ...

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/home/ec2-user/flask-application/flask-application.sock;
    }

    ...
}

Then add the nginx user to the ec2-user group and give the group execute permission on the home directory. This allows the Nginx process to enter the directory and reach the application socket:

$ sudo usermod -a -G ec2-user nginx
$ chmod 710 /home/ec2-user

Test the Nginx configuration file for syntax errors:

$ sudo nginx -t

Finally, enable and start the nginx service and test your application using the EC2 instance public IP:

$ sudo systemctl enable nginx
$ sudo systemctl start nginx
$ sudo systemctl status nginx

Oops, we forgot to explain what our Flask application does. It’s simple: it only returns the hostname, IP address, and application status.
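
For reference, here is a minimal sketch of an app with that behavior (the actual code in the repository may differ slightly):

# app.py -- minimal Flask app returning hostname, IP address, and status.
import socket

from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/")
def index():
    hostname = socket.gethostname()
    return jsonify(
        hostname=hostname,
        ip_address=socket.gethostbyname(hostname),
        status="OK",
    )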

At this point your EC2 instance is ready and you can create an AMI from it using Image → Create Image in the EC2 console.

Once the template image has been created, take note of the image ID (in this case “ami-08f3c04e8b4b32ce5”), because it will be referenced from our Terraform configuration files.
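
If you prefer to look up the image ID programmatically instead of copying it from the console, a quick boto3 sketch (not part of the original walkthrough; assumes AWS credentials and region are already configured locally):

# List the AMIs owned by this account and print their IDs and names.
import boto3

ec2 = boto3.client("ec2")
for image in ec2.describe_images(Owners=["self"])["Images"]:
    print(image["ImageId"], image.get("Name", ""))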

First, we need to configure the Auto Scaling Group (ASG), update the security groups (the EC2 instances will now only allow traffic from the Load Balancer), and define the other related resources.

// Security group for EC2 instances...
resource "aws_security_group" "security_group_ec2_instances" {
  name        = "security-group-ec2-instances"
  description = "Security group for EC2 instances..."

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.security_group_elb.id]
  }

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.security_group_elb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

// Security group for Load Balancer...
resource "aws_security_group" "security_group_elb" {
  name        = "security-group-elb"
  description = "Security group for Application Load Balancer..."

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

// AMI template image...
resource "aws_launch_template" "image_flask_app_ec2_instance" {
  name_prefix   = "flask-app-ec2-instance"
  image_id      = "ami-08f3c04e8b4b32ce5" // <-- Replace the Image ID!
  instance_type = "t2.micro"
}

// Target group...
resource "aws_alb_target_group" "flask_app_ec2_target_group" {
  name     = "flask-app-ec2-target-group"
  vpc_id   = data.aws_vpc.default.id
  protocol = "HTTP"
  port     = 80
}

// Auto Scaling Group...
resource "aws_autoscaling_group" "flask_app_autoscaling_group" {
  name               = "flask_app_autoscaling_group"
  availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
  target_group_arns  = [aws_alb_target_group.flask_app_ec2_target_group.arn]

  default_cooldown          = 60
  health_check_grace_period = 60
  health_check_type         = "ELB"

  desired_capacity = 1

  force_delete = true
  max_size     = 5
  min_size     = 1

  launch_template {
    id      = aws_launch_template.image_flask_app_ec2_instance.id
    version = "$Latest"
  }

  tag {
    key                 = "asg"
    value               = "flask_app_autoscaling_group"
    propagate_at_launch = true
  }
}

// Example policy for Auto Scaling Group...
resource "aws_autoscaling_policy" "autoscaling_policy_by_requests" {
  name                   = "autoscaling-policy-by-requests"
  autoscaling_group_name = aws_autoscaling_group.flask_app_autoscaling_group.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    target_value = 4000

    predefined_metric_specification {
      predefined_metric_type = "ASGAverageNetworkOut"
    }
  }
}

Then, define the Load Balancer and the other related resources.

provider "aws" {
region = "us-east-1"
}
// Default VPC...
data "aws_vpc" "default" {
default = true
}
// Default subnets...
data "aws_subnet_ids" "all" {
vpc_id = data.aws_vpc.default.id
}
// Default security group...
data "aws_security_group" "default" {
vpc_id = data.aws_vpc.default.id
name = "default"
}

data "aws_elb_service_account" "main" {}
// Bucket to save the logs...
resource "aws_s3_bucket" "lb_flask_app_access_logs" {
bucket = "lb-flask-app-access-logs"
force_destroy = true
acl = "private"

policy = <<POLICY
{
"Id": "Policy",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::lb-flask-app-access-logs/*",
"Principal": {
"AWS": [
"${data.aws_elb_service_account.main.arn}"
]
}
}
]
}
POLICY
}
// Application Load Balancer...
resource "aws_lb" "flask_app_load_balancer" {
  name               = "flask-app-load-balancer"
  load_balancer_type = "application"

  subnets         = data.aws_subnet_ids.all.ids
  security_groups = [aws_security_group.security_group_elb.id]

  enable_cross_zone_load_balancing = true
  enable_http2                     = true
  internal                         = false

  access_logs {
    bucket  = aws_s3_bucket.lb_flask_app_access_logs.bucket
    prefix  = "lb-flask-app"
    enabled = true
  }

  tags = {
    Environment = "production"
  }
}

// Listeners...
resource "aws_lb_listener" "lb_port_80_listener" {
  load_balancer_arn = aws_lb.flask_app_load_balancer.arn
  protocol          = "HTTP"
  port              = 80

  default_action {
    target_group_arn = aws_alb_target_group.flask_app_ec2_target_group.arn
    type             = "forward"
  }
}
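
After terraform apply finishes, you can smoke test the stack through the load balancer’s DNS name (shown in the AWS console, or exposed as a Terraform output if you add one). A quick check, using a placeholder DNS name you would need to replace:

# Simple smoke test against the ALB; replace the placeholder DNS name.
import requests

ALB_DNS_NAME = "flask-app-load-balancer-123456789.us-east-1.elb.amazonaws.com"  # placeholder

response = requests.get(f"http://{ALB_DNS_NAME}/")
print(response.status_code)
print(response.text)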

Our Auto Scaling Group was configured to scale based on network traffic. To increase the application’s inbound/outbound traffic, you could use Locust.

# On your local machine (inside your project folder)...
$ locust --port 8090
# Then access the web UI at:
http://localhost:8090
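
Locust reads its scenarios from a locustfile.py. A minimal sketch using the current HttpUser API (older Locust releases used HttpLocust/TaskSet instead), to be pointed at the load balancer’s address via --host:

# locustfile.py -- minimal load-test sketch; adjust the endpoints to your app.
from locust import HttpUser, task, between


class FlaskAppUser(HttpUser):
    # Wait 1-3 seconds between requests to simulate real users.
    wait_time = between(1, 3)

    @task
    def index(self):
        # Hit the root endpoint served by the Flask app behind the ALB.
        self.client.get("/")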

When the limit defined in the autoscaling policy (target_value = 4000) is exceeded, an alarm will be raised and the Auto Scaling Group will launch more EC2 instances.

After a while, when traffic slows down, the Auto Scaling Group will terminate the extra EC2 instances.

Don’t forget to delete your cloud resources:

$ terraform destroy

Well, that’s all. I hope this information is useful for your next project using EC2 instances on Amazon Web Services!

Code repository: https://github.com/alekglez/flask_app_deploy_ec2_elb_asg.git

Bye…
