Developing AWS-reliant services locally


Introduction

Today I would like to present LocalStack and show how it can be used for local development and testing without the need for real AWS services.


What is LocalStack?

LocalStack is an open-source tool that provides mimicked AWS services and can be run locally during development and testing. It lets you use the AWS cloud stack locally and eliminates the need for real AWS services while developing or testing an application. This allows developers to freely experiment with their applications without worrying about additional AWS usage costs. Detailed information about LocalStack can be found here.


Goal of the article

The goal of this project is to create a simple FastAPI application responsible for sending e-mails via SES based on user input. The full project can be found here.


Technological stack

The technology stack used in this project:

  • Python 3.12.1
    • pydantic - data validation
    • boto3 - AWS SDK
    • mypy-boto3-ses - Types for AWS SES client
    • FastAPI - REST framework
    • uvicorn - ASGI web server implementation
  • LocalStack 2.3.1

Installing the dependencies

Firstly, we need to create our virtual environment. We can do that by typing the following in the terminal:

python -m venv .venv

Now we have to activate the virtual environment. We can do that by running one of the commands below.

On macOS or UNIX:

source .venv/bin/activate

On Windows:

.venv\Scripts\activate.bat

Now that our virtual environment is activated, we can install the packages required for this project. We can do that by creating a requirements.txt file containing the packages below.

fastapi==0.86.0
pydantic==1.10.2
mypy-boto3-ses==1.34.0
boto3==1.34.19
uvicorn==0.25.0

Now we can install the required dependencies using our requirements.txt file.

pip install -r requirements.txt

Now we have everything prepared for development. Let’s get to it!


Preparing the service

The first step is to initialise the AWS SES client. We will create a file called aws.py and initialise a new SES client inside it. We will define a separate ENDPOINT_URL constant that determines whether the application is running inside Docker or locally, because the LocalStack connection URL differs between the two; the check is based on a DOCKER environment variable, which will be added to our Docker Compose file later. Let's create the aws.py file and put the code below in it.

import os

import boto3
from mypy_boto3_ses.client import SESClient

# Inside Docker the LocalStack container is reachable by its service name;
# locally it is exposed on 127.0.0.1.
ENDPOINT_URL = (
    "http://localstack:4566" if os.getenv("DOCKER") else "http://127.0.0.1:4566"
)

# LocalStack does not validate credentials, so dummy values are sufficient.
client: SESClient = boto3.client(
    service_name="ses",
    region_name="eu-west-1",
    endpoint_url=ENDPOINT_URL,
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)
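
To quickly confirm that this client talks to LocalStack rather than real AWS, we can list the verified sender identities. This is only a minimal sketch: it assumes LocalStack is already running on port 4566 (we set it up later in this article) and that aws.py lives in the src package.

from src.aws import client

# Against a fresh LocalStack instance this list is empty; once the init script
# described later has run, it should contain test@test.test.
print(client.list_verified_email_addresses()["VerifiedEmailAddresses"])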

Next, we have to create a function that will be responsible for actually sending e-mails using AWS SES. Let's add this code to the aws.py file.

from mypy_boto3_ses.type_defs import SendEmailResponseTypeDef
from .schemas import EmailSchema


def send_email(email: EmailSchema) -> SendEmailResponseTypeDef:
    """Send email using AWS SES.

    Args:
        email (EmailSchema): Email to be sent.

    Returns:
        SendEmailResponseTypeDef: AWS SES email response.
    """

    return client.send_email(
        Source="test@test.test",
        Destination={"ToAddresses": [email.address]},
        Message={
            "Subject": {"Data": email.subject, "Charset": "string"},
            "Body": {"Text": {"Data": email.message, "Charset": "string"}},
        },
    )

We also need a schema for the user's e-mail input. We will put it in a schemas.py file.

from pydantic import BaseModel


class EmailSchema(BaseModel):
    """Pydantic model schema for email data validation.

    Attributes:
        address (str): The recipient's email address
        subject (str): The subject line of the email
        message (str): The main body content of the email
    """

    address: str
    subject: str
    message: str
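
With the schema and the send_email function in place, the flow can already be exercised from a Python shell. This is just a sketch, assuming LocalStack is running and the sender identity test@test.test has been verified (both are covered later in this article).

from src.aws import send_email
from src.schemas import EmailSchema

# Pydantic validates the user input before it is handed to AWS SES.
email = EmailSchema(address="me@mydomain.com", subject="Hello", message="How are you?")

response = send_email(email)
print(response["MessageId"])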

Next, we have to create an endpoint that will be responsible for sending e-mails based on user input. Let's put it in a main.py file.

from fastapi import FastAPI

from .aws import send_email
from .schemas import EmailSchema

app = FastAPI()


@app.get("/api/v1/ping", status_code=200)
async def ping():
    """Liveness check endpoint."""

    return {"message": "Email service is running"}


@app.post("/api/v1/email/send", status_code=201)
async def email_route(email: EmailSchema):
    """Send email endpoint.

    Args:
        email (EmailSchema): Email to be sent.
    """

    response = send_email(email)

    return {
        "message": "Email sent successfully",
        "message_id": response["MessageId"],
    }

Now it is possible to run the FastAPI application using the command below from the root directory.

uvicorn src.main:app --port 80 --reload
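
Once the server is up, the liveness endpoint can be checked with any HTTP client; for example, a short snippet using only the Python standard library:

import json
import urllib.request

# Expecting {'message': 'Email service is running'} from the ping endpoint.
with urllib.request.urlopen("http://127.0.0.1:80/api/v1/ping") as response:
    print(json.loads(response.read()))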

The next step is to containerise our application.

FROM python:3.12.1-slim-bookworm

WORKDIR /app

COPY ./requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir -r /app/requirements.txt

COPY ./src /app/src

CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "80"]

Now the first part is done, but our application won’t work without LocalStack. Let’s fix that!


Spinning up LocalStack

LocalStack is distributed as a Docker container. By default, it does not create any resources by itself when it spins up. In order to create the AWS resources we want, we either have to create them by hand once the service is up, or write an init script that creates the required resources while LocalStack is starting. I will show how to create an init script.

Create init script

Firstly, a script file must be created. The name of this file does not matter, but its extension must be .sh. I named this file init.sh.

The next step is to create an AWS profile. This profile will be created inside LocalStack and used to create resources. The same profile can also be created on the local machine and added to the awscli configuration, which allows sending requests with awscli from the local machine to the LocalStack container.

echo "########### Creating profile ###########"

aws configure set aws_access_key_id "dummy" --profile test-profile
aws configure set aws_secret_access_key "dummy" --profile test-profile
aws configure set region "eu-west-1" --profile test-profile
aws configure set output "table" --profile test-profile

Lastly, we must verify the identity from which e-mails will be sent.

echo "########### Verifying identity ###########"

aws ses verify-email-identity \
    --endpoint-url=http://localhost:4566 \
    --region eu-west-1 \
    --profile test-profile \
    --email-address test@test.test
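
As a side note, the same verification could also be done from Python with boto3 instead of the AWS CLI, reusing the client from aws.py. This is only an illustrative alternative, not part of the init script.

from src.aws import client

# Mark test@test.test as a verified sender identity in LocalStack's SES.
client.verify_email_identity(EmailAddress="test@test.test")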

Init script is ready, let’s spin everything up!


Docker compose

The script is ready and the service is created. Now we need to create a Docker Compose file containing the setup for LocalStack and the service.

version: '3.2'

services:
  localstack:
    image: localstack/localstack:2.3.1
    ports:
      - '4566-4597:4566-4597'
    environment:
      - LOCALSTACK_HOST=localstack
      - SERVICES=ses
      - AWS_ACCESS_KEY_ID=dummy
      - AWS_SECRET_ACCESS_KEY=dummy
      - AWS_DEFAULT_REGION=eu-west-1
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "../resources/init.sh:/etc/localstack/init/ready.d/init.sh"
      - "/var/run/docker.sock:/var/run/docker.sock"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4566/_localstack/health"]
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 10s

  ses:
    build:
      context: ../
      dockerfile: docker/Dockerfile
    ports:
      - 80:80
    environment:
      - DOCKER=True
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/api/v1/ping"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 45s
    depends_on:
      localstack:
        condition: service_healthy

Running the service

Now that everything is ready we can try it!

Firstly, we need to run our Docker Compose setup. We can do that by typing the following in the terminal from the root directory:

docker compose -f ./docker/compose.yaml up -d --build

When our services have spun up, we can check all e-mails sent by our service at the http://localhost:4566/_localstack/ses/ URL. If we haven't sent any e-mails yet, the output should look like the one below.

{
  "messages": []
}
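
The same endpoint can also be queried from Python instead of a browser, for example with this standard-library-only snippet:

import json
import urllib.request

# LocalStack exposes every e-mail sent through its SES service on this endpoint.
with urllib.request.urlopen("http://localhost:4566/_localstack/ses/") as response:
    print(json.dumps(json.loads(response.read()), indent=2))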

We can now test it by sending a POST request to http://localhost:80/api/v1/email/send with the request body:

{
  "address": "me@mydomain.com",
  "subject": "Hello",
  "message": "How are you?"
}
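
For example, the request can be sent with a short Python snippet using only the standard library (any HTTP client or a tool such as Postman works just as well):

import json
import urllib.request

payload = json.dumps(
    {"address": "me@mydomain.com", "subject": "Hello", "message": "How are you?"}
).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:80/api/v1/email/send",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The service responds with the AWS SES MessageId of the sent e-mail.
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))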

The response should look like the one below, except that message_id will be different every time, as it is the unique identifier of an AWS SES message.

{
  "message": "Email sent successfully",
  "message_id": "klivnjymbiwvtita-spyvyfrj-eofl-qirf-fdlg-fvbhulaxbygt-dzzacx"
}

We can check whether our e-mail was successfully sent by visiting http://localhost:4566/_localstack/ses/ again. We should see our sent e-mail along with additional details about it.

{
  "messages": [
    {
      "Id": "klivnjymbiwvtita-spyvyfrj-eofl-qirf-fdlg-fvbhulaxbygt-dzzacx",
      "Region": "eu-west-1",
      "Destination": {
        "ToAddresses": ["me@mydomain.com"]
      },
      "Source": "test@test.test",
      "Subject": "Hello",
      "Body": {
        "text_part": "How are you?",
        "html_part": null
      },
      "Timestamp": "2024-01-14T16:56:56"
    }
  ]
}

That’s it! We can now use AWS SES locally and write additional code that will make use of it.