AWS Free Tier, Docker and Jenkins: smart resources handling with CloudWatch Events and Slack



If you have an AWS account in the Free Tier, you have (updated: March 13th, 2018) 750 hours/month to run EC2 instances (small ones) in your VPC. You also have a lot of other resources, such as AWS Lambda functions (I wrote about them here and here) and CloudWatch Events. In this article, I talk about smart resource handling and a trick - actually, not so smart XD - I set up to get the best out of these services.


For this article, you will need the following:

- An AWS account (Free Tier is ok, but API Gateway is not included);
- AWS EC2;
- AWS Lambda;
- AWS CloudWatch Events;
- Slack - it's a plus.


As I already said in a previous post on AWS services, I recommend you pay a lot of attention. You always have to know exactly what you are doing, to avoid surprises in your bill at the end of the month. Fortunately, there is a lot of documentation on Amazon's official site, so you only have to read it.

What and when

Ok, let's start with AWS EC2. You can create really slow and inefficient machines of type t2-what? ssh-session expired. Just kidding.

The crucial part is that you have 750 free hours a month. Unless you can bend Amazon's clock, this implies that you can keep one instance up and running for the whole year: in fact, 24*31 = 744 < 750. That's a huge amount of time before closing your AWS account. Of course, you will not use all these hours of execution in a month. Let's suppose you will use 12 hours a day - it's too much, but let's keep it simple. This was my initial setup: in this case, you can run 2 instances for 12 hours a day for 1 year. Not so bad, even if the instances are really slow. You can scale the reasoning up to 6 instances for 4 hours a day… even 24 for 1 hour: if you're wondering what you can do with 24 instances with 1 vCPU and 1 GB of RAM each, ask those crazy guys on the web that built a 64-Raspberry Pi cluster. Got it?
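The back-of-the-envelope arithmetic can be sketched in a couple of lines (a toy calculation, nothing AWS-specific; the 750-hours figure is the Free Tier monthly budget mentioned above):

```python
FREE_TIER_HOURS = 750  # free t2.micro instance-hours per month

def max_concurrent_instances(hours_per_day, days_per_month=31):
    """How many instances, each running N hours a day, fit in the monthly budget."""
    return FREE_TIER_HOURS // (hours_per_day * days_per_month)

print(max_concurrent_instances(24))  # 1 instance running 24/7 (24*31 = 744 < 750)
print(max_concurrent_instances(12))  # 2 instances for 12 hours a day
print(max_concurrent_instances(4))   # 6 instances for 4 hours a day
```
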


I will talk about a really simple (also, not so efficient) scenario: 2 EC2 machines, the first one running Jenkins and the second one running the docker daemon. I will not go through the details of how to set up a Jenkins CI/CD pipeline, there are plenty of guides better than mine. Instead, I will show you the steps to automatically reach your instances after each reboot: this problem arises on AWS because each time you reboot an instance, if no Elastic IP (more here) is attached, another Public IP will be assigned and you don't know a priori which one. First, let's talk about scheduling starts and stops.

Step 1/5: Start and Stop your instances

There are many ways (maybe?) to start and stop your instances: I decided to use AWS Lambda because you can easily create and manage functions there. This time I decided to use Python to build my StartAndStop Lambda. Before going ahead with the code, please remember to assign the right policy to the Role used by the Lambda function (you can choose the role during the setup, after clicking the Create Function button). From the official AWS Doc, the Statement to add to the policy attached to the Role you choose is the following:

    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }

The code of the function is really simple:

import boto3, os

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=os.environ['region'])

    if len(event['instances']) == 0:
        return { "message" : "No instances passed" }

    action = event['action']
    if action != 'start' and action != 'stop':
        return { "message" : "Passed action not allowed: '" + action + "'" }

    if action == 'start':
        response = ec2.start_instances(InstanceIds=event['instances'])

    if action == 'stop':
        response = ec2.stop_instances(InstanceIds=event['instances'])

    return response

This AWS Lambda uses the boto3 library to start and stop instances: the expected request looks like the following:

  {
    "action": "stop",
    "instances": [
      "YourJenkinsInstanceID", "YourDockerInstanceID"
    ]
  }
Done? Let’s go with CloudWatch Events.

Step 2/5: Schedule AWS Lambda execution

Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. More in detail, I used the Scheduled Events feature to schedule my Lambda: with Scheduled Events, you can create rules that self-trigger on an automated schedule in CloudWatch Events using cron or rate expressions. All scheduled events use the UTC time zone, and the minimum precision for schedules is 1 minute.

| Field        | Values           | Wildcards     |
|--------------|------------------|---------------|
| Minutes      | 0-59             | , - * /       |
| Hours        | 0-23             | , - * /       |
| Day-of-month | 1-31             | , - * ? / L W |
| Month        | 1-12 or JAN-DEC  | , - * /       |
| Day-of-week  | 1-7 or SUN-SAT   | , - * ? L #   |
| Year         | 1970-2199        | , - * /       |

I want to start my docker-daemon and Jenkins server both at 9am and stop them at 9pm. To do that, create a CloudWatch Events rule with the following Schedule cron expression:

            0 8 * * ? *

Then set the StartAndStop Lambda as the target, passing a Constant (JSON text) input:

{ "action": "start", "instances": [ "YourJenkinsInstanceID", "YourDockerInstanceID" ] }

To stop instances, create another rule with 0 20 * * ? * - remember that it is UTC time zone!!! - and invoke the same AWS Lambda function with action value equal to “stop” in your Constant JSON input.
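Since Scheduled Events always run in UTC, it's worth double-checking the offset against your local time zone; a quick sketch (assuming Europe/Rome as the local zone, on a winter date - an assumption, adapt it to yours):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 9am local time on a date outside daylight saving time
local_start = datetime(2018, 1, 15, 9, 0, tzinfo=ZoneInfo("Europe/Rome"))
utc_start = local_start.astimezone(ZoneInfo("UTC"))
print(utc_start.strftime("%H:%M"))  # 08:00, hence the "0 8 * * ? *" expression
```

Keep in mind that a cron expression is fixed in UTC: when daylight saving time kicks in, your instances will start an hour off unless you update the rule.
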

Step 3/5: Always have an updated DNS

The best way to deal with Public DNS is by assigning an Elastic IP to your instances, or setting up a third-level-domain service inside each of your instances (like no-ip)… or using Lambda again (and Slack) to alert you whenever there is a change of status.

This time, the IAM Policy System requires something more from you (actually, from your Lambda function): this function should be able to ask for instance details. To do that, it needs - at least - read access to EC2 instance details (I think there is a policy for that). In any case, you don't have to deal with a super restrictive policy, because your Lambda is not exposed through API Gateway and is called only by a CloudWatch Event. Thus, you can add - at least, for this experiment - the AmazonEC2FullAccess managed policy (this is the description page, I guess) to your Role.
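If you prefer something narrower than AmazonEC2FullAccess, a statement like the following should be enough (a sketch: `ec2:DescribeInstances` is the only EC2 call the function below actually makes):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
```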

For this Lambda, I used Node.js (just to make things a little bit confusing for you). First, install slack-node in the folder you will upload to your AWS Lambda console, with the command

npm install slack-node --save

Then, create a index.js file in the folder and copy and paste the following.

const Slack = require("slack-node");
// Load the AWS SDK for Node.js
const AWS = require("aws-sdk");
// Set the region
AWS.config.update({region: process.env.REGION});

// Create EC2 service object
var ec2 = new AWS.EC2();

function composeMsg(infos) {

    var msg = "";

    for (var info of infos) {
        msg += "Instance: "+info["name"]+" (id: "+info["id"]+", ip: "+info["ip"]+") has status "+info["status"]+"\n"+"PublicDNS: "+info["dns"]+"\n\n";
    }

    return msg;
}

function slackNotifier(infos) {

    var msg = composeMsg(infos);

    var webhookUri = process.env.SLACK_AWS_WEBHOOK;

    var slack = new Slack();
    slack.setWebhook(webhookUri);

    slack.webhook({
        channel: process.env.SLACK_AWS_CHANNEL,
        username: process.env.SLACK_AWS_BOT_NAME,
        text: msg
    }, function(err, response) {
        if (err) {
            console.log(err);
        }
    });

    return msg;
}

exports.handler = (event, context, callback) => {

    var params = {};

    ec2.describeInstances(params, function(err, data) {

        if (err) {
            console.log(err, err.stack);
        } else {
            var count = 0;
            var infos = [];

            data = JSON.parse(JSON.stringify(data));

            // for each reservation, take the first instance
            for (var reservation of data["Reservations"]) {

                var instance = reservation["Instances"][0];
                infos[count] = {};

                // find id
                infos[count]["id"] = instance["InstanceId"];

                // find name
                for (var tag of instance["Tags"]) {
                    if (tag["Key"] == "Name") {
                        infos[count]["name"] = tag["Value"];
                    }
                }

                // save status
                infos[count]["status"] = instance["State"]["Name"];

                // save dns and ip
                infos[count]["dns"] = instance["PublicDnsName"];
                infos[count]["ip"] = instance["PublicIpAddress"];

                count = count + 1;
            }

            // send message
            var msg = slackNotifier(infos);
            callback(null, msg);
        }
    });
};
The code simply calls the AWS SDK to get information about your EC2 instances - in particular the instance name, id, status, public DNS and IP (even if the IP is included in the address, having it separate is faster to copy and paste) - then parses and formats it in a pretty format. Then, a message is sent to a private Slack channel (see later). So… zip, upload. Done. The environment variables needed are REGION (aws-region), SLACK_AWS_BOT_NAME (the name of your Slack bot), SLACK_AWS_CHANNEL (the name of your channel) and SLACK_AWS_WEBHOOK (the web… ok, you got it). You can complete the missing env vars later: keep them empty for now.

There are two things left to complete: 1) attach this Lambda to a CloudWatch Event (of course, the change of state of your EC2 instances) and 2) actually create a Slack channel, a Slack app and the corresponding webhook for the created channel. What do you want to do first? Don't worry, I'll decide for you: let's create a Slack channel.

Step 4/5: Create a Slack Channel (and App and so on)

Preamble: first of all, if you don't have a Slack account, sign up here. Then, create a workspace (it should be easy by following the instructions after the first access). Done? After that, you should be able to create a channel (I suggest a private one if this is an experiment). Done? Ok. If you have problems completing these steps, well… actually, I can't help you: I don't remember, you'll have to google them.

To create a Slack App, follow these steps: go to api.slack.com/apps and click Create New App, picking a name and the workspace you just created; then, under Features, enable Incoming Webhooks and click Add New Webhook to Workspace, selecting your channel; finally, copy the generated webhook URL - that's the value for the SLACK_AWS_WEBHOOK environment variable (the channel and bot names fill in the other two).

Step 5/5: CloudWatch Event to send a Slack message

The last step is to create a CloudWatch Rule - a new one, keep the previously created start and stop rules untouched - that triggers the notifier Lambda: this time, the trigger is not a Scheduled event at a fixed time, but an Event Pattern.
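As Event Pattern, something along these lines matches EC2 instance state changes (this is the standard CloudWatch Events pattern for state-change notifications; narrowing `state` to the values you care about is optional):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["running", "stopped"]
  }
}
```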

To test the setup, try to stop and start your instances, and you should see something appear on your slack channel!

Other possible scenario

I think I will work on Slack features like buttons, to handle my VPC with a sort of question-answer event-driven bot. But… what if you want to add more worker nodes to Jenkins? For instance, using a 9-17 setup, you can have three machines running at a time.

But I think this setup is more interesting :D. In the end, having a 4 node k8s cluster with a Jenkins for deploys, for 4 hours each day for one year… for free, is not so bad. I will try, even if I think performance will feel like 1980 :D

Thank you everybody for reading!
