Building an ECS Cluster with Raspberry Pi 4 Nodes Using AWS ECS Anywhere
In this guide, we’ll explore how to set up an Amazon Elastic Container Service (ECS) cluster using three Raspberry Pi 4 devices as external instances, leveraging AWS ECS Anywhere. We’ll delve into the capabilities of ECS, provide the necessary Terraform code to create the cluster, and walk through the steps to generate activation keys. Additionally, we’ll develop a Python 3 application that listens to an Amazon Simple Queue Service (SQS) queue for batch processing messages. This process involves downloading an object from Amazon S3, processing it through a mocked API, and storing the result in a DynamoDB table. We’ll also cover the Terraform configurations required to set up the DynamoDB table, SQS queue, S3 event notifications, and an Amazon Elastic Container Registry (ECR) repository for our Docker images.

Considerations for AWS ECS Anywhere External Instances
Before integrating Raspberry Pi 4 devices into an ECS cluster using ECS Anywhere, it’s essential to understand the limitations and considerations associated with external instances:
- Networking Constraints: External instances are best suited to workloads that generate outbound traffic or process data. They don’t support Elastic Load Balancing (ELB), which makes them a poor fit for applications that must serve inbound traffic, such as web services.
- Supported Operating Systems and Architectures: ECS Anywhere supports various operating systems and CPU architectures. For Raspberry Pi 4, which utilizes an ARM64 architecture, ensure you’re using a compatible operating system like Ubuntu 20.04 or later.
- IAM Role Requirements: External instances require an IAM role to communicate with AWS APIs. This role, typically named ecsAnywhereRole, must be associated with each external instance.
- Feature Limitations: Certain ECS features aren’t supported with external instances, including:
  - Service load balancing
  - Service discovery
  - The awsvpc network mode
  - Amazon EFS volumes
  - AWS App Mesh integration
Understanding these limitations is crucial for designing and deploying applications effectively on ECS Anywhere.
IAM Roles for AWS API Interactions
To enable our ECS tasks to interact with AWS services like SQS and DynamoDB, we need to configure appropriate IAM roles:
- Amazon ECS Task Execution IAM Role: This role allows the ECS container agent to make AWS API calls on your behalf, such as pulling container images from Amazon ECR or sending logs to Amazon CloudWatch. It’s specified in your task definition.
- Amazon ECS Task Role: This role grants permissions to the application running inside the container to access AWS services. For instance, if your application needs to read from an SQS queue or write to a DynamoDB table, the necessary permissions should be attached to this role.
Properly setting up these roles ensures secure and efficient interactions between your ECS tasks and other AWS services.
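As a sketch, both roles can be expressed in Terraform. The role names and the scoped-down application policy below are illustrative assumptions (they reference the queue, bucket, and table defined later in this guide), not names prescribed by ECS:
resource "aws_iam_role" "task_execution_role" {
  name = "ecsTaskExecutionRole-rpi" # illustrative name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}
# The AWS-managed policy covers ECR pulls and CloudWatch Logs.
resource "aws_iam_role_policy_attachment" "task_execution" {
  role       = aws_iam_role.task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
resource "aws_iam_role" "task_role" {
  name               = "ecsTaskRole-rpi" # illustrative name
  assume_role_policy = aws_iam_role.task_execution_role.assume_role_policy
}
# Only the permissions the batch application actually needs:
# read and delete queue messages, fetch S3 objects, write results.
resource "aws_iam_role_policy" "task_app_access" {
  name = "batch-app-access"
  role = aws_iam_role.task_role.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"]
        Resource = aws_sqs_queue.task_queue.arn
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject"]
        Resource = "${aws_s3_bucket.data_bucket.arn}/*"
      },
      {
        Effect   = "Allow"
        Action   = ["dynamodb:PutItem"]
        Resource = aws_dynamodb_table.processed_data.arn
      }
    ]
  })
}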

Setting Up the ECS Cluster with Raspberry Pi 4 Nodes
To establish an ECS cluster with Raspberry Pi 4 devices as external instances, follow these steps:
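First, make sure the cluster itself exists; the install script in step 1 below registers each Pi into it by name. A minimal Terraform definition:
resource "aws_ecs_cluster" "rpi_cluster" {
  name = "rpi-ecs-cluster"
}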
1. Prepare the Raspberry Pi 4 Devices
Install a Supported Operating System: Use an ARM64-compatible OS like Ubuntu 20.04.
Update and Upgrade Packages:
sudo apt update && sudo apt upgrade -y
Install Docker:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
Install the SSM Agent and ECS Agent: the ECS Anywhere install script installs both agents and registers the device with your cluster. The activation ID and code come from step 3 below.
curl -o ecs-anywhere-install.sh https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install.sh
chmod +x ecs-anywhere-install.sh
sudo ./ecs-anywhere-install.sh --region <AWS_REGION> --cluster rpi-ecs-cluster --activation-id <ACTIVATION_ID> --activation-code <ACTIVATION_CODE>
To verify that everything works, run sudo systemctl status ecs; if the agent isn’t running, start it with sudo systemctl start ecs.
2. Create the ECS Anywhere IAM Role
This role allows external instances to communicate with AWS APIs.
Define the Trust Policy:
Create a file named ssm-trust-policy.json with the following content:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "ssm.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}
Create the IAM Role:
aws iam create-role --role-name ecsAnywhereRole --assume-role-policy-document file://ssm-trust-policy.json
Attach Necessary Policies:
aws iam attach-role-policy --role-name ecsAnywhereRole --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
aws iam attach-role-policy --role-name ecsAnywhereRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
These steps establish the ecsAnywhereRole with the required permissions.
3. Generate Activation Keys
Activation keys register external instances with your ECS cluster.
Create an Activation:
aws ssm create-activation --default-instance-name "raspberry-pi" --iam-role ecsAnywhereRole --registration-limit 3
This command outputs an activation code and ID. The activation itself expires after 24 hours by default (you can extend this up to 30 days with --expiration-date), and --registration-limit 3 allows up to three instances to register with it. Instances that register before the activation expires stay registered and don’t need a new activation.
Register Each Raspberry Pi:
On each device, run:
sudo amazon-ssm-agent -register -code "<ActivationCode>" -id "<ActivationId>" -region "<YourAWSRegion>"
Replace <ActivationCode>, <ActivationId>, and <YourAWSRegion> with your specific values. Note that if you ran ecs-anywhere-install.sh with --activation-id and --activation-code in step 1, the script already performs this registration for you.
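Once all three devices are registered, confirm they joined the cluster; each container instance should report an ACTIVE status:
aws ecs list-container-instances --cluster rpi-ecs-cluster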
Setting Up AWS Resources with Terraform
1. Create an Amazon ECR Repository
resource "aws_ecr_repository" "ecs_repo" {
name = "ecs-rpi4-repo"
}
To push an image to this repository:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
docker tag my-image:latest <aws_account_id>.dkr.ecr.<region>.amazonaws.com/ecs-rpi4-repo:latest
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/ecs-rpi4-repo:latest
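Keep in mind that the Raspberry Pi 4 nodes are ARM64: an image built on an x86 workstation won’t run on them unless it targets linux/arm64. Assuming Docker buildx is set up, a cross-build looks like:
docker buildx build --platform linux/arm64 -t my-image:latest .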
2. Create an Amazon SQS Queue
resource "aws_sqs_queue" "task_queue" {
name = "ecs-task-queue"
message_retention_seconds = 86400
}
3. Create an Amazon S3 Bucket and Event Notification
resource "aws_s3_bucket" "data_bucket" {
bucket = "ecs-rpi4-data-bucket"
}
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = aws_s3_bucket.data_bucket.id
queue {
queue_arn = aws_sqs_queue.task_queue.arn
events = ["s3:ObjectCreated:*"]
}
}
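One easy-to-miss requirement: S3 can only deliver events to the queue if a queue policy grants it sqs:SendMessage, and applying the notification fails without it. A minimal policy, scoped to this bucket:
resource "aws_sqs_queue_policy" "allow_s3" {
  queue_url = aws_sqs_queue.task_queue.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "s3.amazonaws.com" }
      Action    = "sqs:SendMessage"
      Resource  = aws_sqs_queue.task_queue.arn
      # Restrict delivery to events from our bucket only
      Condition = { ArnEquals = { "aws:SourceArn" = aws_s3_bucket.data_bucket.arn } }
    }]
  })
}
Adding depends_on = [aws_sqs_queue_policy.allow_s3] to the aws_s3_bucket_notification resource ensures Terraform creates the policy first.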
4. Create an Amazon DynamoDB Table
resource "aws_dynamodb_table" "processed_data" {
name = "ecs-processed-data"
billing_mode = "PAY_PER_REQUEST"
hash_key = "object_key"
attribute {
name = "object_key"
type = "S"
}
}
Python Application for Batch Processing
Below is the Python application that listens to the SQS queue, processes S3 objects, and stores the results in DynamoDB.
import json
import time
import urllib.parse

import boto3

dynamodb = boto3.resource('dynamodb')
s3 = boto3.client('s3')
sqs = boto3.client('sqs')

table = dynamodb.Table('ecs-processed-data')
queue_url = '<your_sqs_queue_url>'

def process_object(bucket, key):
    print(f"Processing {key} from {bucket}")
    response = s3.get_object(Bucket=bucket, Key=key)
    content = response['Body'].read().decode('utf-8')
    # Mock API processing
    result = f"Processed: {content.upper()}"
    # Store in DynamoDB
    table.put_item(Item={'object_key': key, 'processed_content': result})
    print(f"Stored result for {key}")

while True:
    # Long-poll the queue for up to 10 seconds per request
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for message in messages.get('Messages', []):
        body = json.loads(message['Body'])
        # S3 delivers events straight to SQS here (no SNS in between), so
        # the records sit directly in the message body. S3 also sends a
        # one-off s3:TestEvent without a 'Records' key, which we skip.
        for record in body.get('Records', []):
            bucket = record['s3']['bucket']['name']
            # Object keys arrive URL-encoded in event notifications
            key = urllib.parse.unquote_plus(record['s3']['object']['key'])
            process_object(bucket, key)
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])
    time.sleep(5)
This setup allows the ECS tasks to process files uploaded to S3, extract content, process it using a mocked API, and store the processed data in DynamoDB.
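To run this container on the Pi nodes, the task definition must declare the EXTERNAL launch type and avoid the unsupported awsvpc network mode. A sketch, reusing the cluster and IAM roles defined earlier and assuming the image was pushed with the latest tag:
resource "aws_ecs_task_definition" "batch_processor" {
  family                   = "batch-processor"
  requires_compatibilities = ["EXTERNAL"]
  network_mode             = "bridge"
  execution_role_arn       = aws_iam_role.task_execution_role.arn
  task_role_arn            = aws_iam_role.task_role.arn
  container_definitions = jsonencode([{
    name      = "batch-processor"
    image     = "${aws_ecr_repository.ecs_repo.repository_url}:latest"
    essential = true
    memory    = 256 # MiB; container-level memory so the task can be placed in bridge mode
  }])
}

resource "aws_ecs_service" "batch_processor" {
  name            = "batch-processor"
  cluster         = aws_ecs_cluster.rpi_cluster.id
  task_definition = aws_ecs_task_definition.batch_processor.arn
  launch_type     = "EXTERNAL"
  desired_count   = 3 # three tasks spread across the external instances
}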