Serverless and managed services are the best choices if you don’t want to deal with infrastructure (3 2 1: fight) buuuuut… even immutable things are not so bad for this purpose - at least, if they are immutable for real 🤣 Today I wanna talk about a useful way to run one (or more) instances of VSC server in AWS and code from everywhere (yes, even your iPad): let’s start!

This time I will go native: so no CDK, I’m sorry, but pure Cloudformation instead. If you are not interested in all the astonishing things I have to say, you can find the template here.


Ok, before going ahead, there are a few things you should have:

- an AWS account, obviously
- a key pair to attach to the instance
- optionally, a domain registered in Route53 and a valid certificate for it

The key pair is important: since anything can go wrong - starting with me - you may need to access your instance to better understand my mistakes. In other words, don’t trust me and, please, prepare a key pair to attach to the instance so you can access it.
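If you don’t have a key pair yet, a minimal sketch with the awscli could look like this (the key name is a placeholder I made up - use your own):

```shell
# "vsc-server-key" is a hypothetical name - pick your own.
# Create the key pair in AWS and save the private key locally.
aws ec2 create-key-pair \
    --key-name vsc-server-key \
    --query 'KeyMaterial' \
    --output text > vsc-server-key.pem

# SSH refuses private keys that are readable by others.
chmod 400 vsc-server-key.pem
```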

For what concerns the domain/certificate part, you can ignore it if you are not interested in having a custom domain and HTTPS - that is, if you just want to run VSC as it is. Keep in mind that you will have to modify the Cloudformation template accordingly to have it working properly.


The architecture I have in mind for this use case is pretty simple:

I don’t wanna discuss the Slack/Lambda action invoke part: there are plenty of guides to help you build it. Instead, I will focus this article and the second part of this series on the central core: the Cloudformation template to provision the VSC server instance as an immutable resource you can create and destroy by the end of the day. The notification part is included only as a concept to keep the stack creation process asynchronous and let you write code without worrying about anything, in a safe enough environment (at least, I hope 😂).

If you own a domain…

You would like to use it to expose your immutable IDE(s) over HTTPS. You can easily register a domain by following the AWS Route 53 guidelines1. After that, you can use the open CA Let’s Encrypt to get a valid certificate for your domain. Producing the certificates boils down to a few steps thanks to Docker!

If you use direct credentials to access your AWS account, you can run the following:

docker run -it --rm -e AWS_ACCESS_KEY_ID="<YOUR_ACCESS_KEY_ID>" -e AWS_SECRET_ACCESS_KEY="<YOUR_SECRET_ACCESS_KEY>" --entrypoint /bin/sh certbot/dns-route53

I don’t know exactly which permissions you need - most probably CRUD on Route53 is enough - but the container is safe and listed as the first option among the ACME v2 compatible clients on the Let’s Encrypt website.

If you don’t trust containers and Docker, or if you have an admin role to assume to operate in your account, then run the following:

docker run -it --rm $(aws --profile YOUR_PROFILE_TO_RUN_STS sts assume-role --role-arn YOUR_ARN_ROLE_TO_ASSUME --role-session-name "certbot" | jq '" -e AWS_ACCESS_KEY_ID=" + .Credentials.AccessKeyId + " -e AWS_SECRET_ACCESS_KEY=" + .Credentials.SecretAccessKey + " -e AWS_SESSION_TOKEN=" + .Credentials.SessionToken' | sed s/\"//g) --entrypoint /bin/sh certbot/dns-route53

Thanks to jq, this will propagate your temporary credentials inside the certbot container as environment variables. In both cases you will end up inside the container, ready to run the following command:
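If the quoting above looks obscure, here is the same jq trick run against a fake STS response (the credentials are invented for the demo; with jq’s -r flag you could even drop the sed step):

```shell
# Fake STS response, shaped like the output of `aws sts assume-role`
sts_json='{"Credentials":{"AccessKeyId":"AKIAFAKEKEY","SecretAccessKey":"fakeSecret","SessionToken":"fakeToken"}}'

# Build the `-e VAR=value` flags for docker run; -r emits the raw string (no quotes)
echo "$sts_json" | jq -r '" -e AWS_ACCESS_KEY_ID=" + .Credentials.AccessKeyId
  + " -e AWS_SECRET_ACCESS_KEY=" + .Credentials.SecretAccessKey
  + " -e AWS_SESSION_TOKEN=" + .Credentials.SessionToken'
```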

# the wildcard is explained later
certbot certonly \
  -d "*.YOUR_DOMAIN" \
  --dns-route53 \
  --agree-tos \
  --non-interactive

This will generate the certificates inside the container, under /etc/letsencrypt/archive/. Given the wildcard, you can attach as many records as you want to the domain. I decided to have this entire stack running under a third-level domain, with each instance running under [a-z] (I will talk about this later). This is the reason you need the *.YOUR_DOMAIN. For a single-instance setup, the wildcard may not be required.

You can copy the certificates and bring them outside of the container by running something like this:

docker cp YOUR_CERTBOT_CONTAINER_ID:/etc/letsencrypt/archive/FOLDER_OF_CERTS ./

Now, two of these files will be required by codeserver to run on HTTPS: they are named cert1.pem and privkey1.pem. Since I will use cfn-init (more info here) to set up everything, there’s a good way to safely store these files and it’s, of course, the AWS Parameter Store. In fact, by using the files configuration section, the cfn-init helper script is able to retrieve values from SSM automatically. Thus, store the content of these two files as two separate parameters in AWS Parameter Store with a simple String as value - I wasn’t able to retrieve them when stored as a SecureString, even if it should be possible as stated by this announcement. You can easily store the content of the files with the awscli by running:

aws ssm put-parameter \
    --name "<YOUR_CERT_PARAMETER_NAME>" \
    --type "String" \
    --value "$(cat /path/to/your/cert1.pem)"

and, respectively, for the private key:

aws ssm put-parameter \
    --name "<YOUR_PRIVATE_KEY_PARAMETER_NAME>" \
    --type "String" \
    --value "$(cat /path/to/your/privkey1.pem)"

Thus, you don’t need to look at the content of these files anymore - until the certificate expires XD. I’m already considering addressing this problem with automation, by storing these values inside Secrets Manager and enabling automatic rotation. I didn’t focus a lot on this topic, but feel free to try it and let me know!
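Until that automation exists, a quick way to know whether renewal is due is to ask openssl directly (the path below is an example - point it at your cert1.pem):

```shell
# Print the certificate's expiry date
openssl x509 -enddate -noout -in cert1.pem

# -checkend returns 0 if the cert is still valid N seconds from now
if openssl x509 -checkend $((30 * 24 * 3600)) -noout -in cert1.pem; then
  echo "certificate still valid for at least 30 days"
else
  echo "certificate expiring soon - renew it"
fi
```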

Let’s finally start with an overview of the setup.


The idea is pretty simple: the guys behind code-server have done a great job packaging Visual Studio Code to run as a simple old-fashioned binary - or even as a modern Docker container. As you can imagine, it’s pretty simple to configure the application and let it run in the cloud… this time, I discarded the idea of having it running in a container - at least, if you wanna run a single instance. So: you have an EC2 instance, and you decide the family type and a few other options. This instance doesn’t host anything else, because it’s intended to be as immutable as possible, just like the VSC server binary: it starts and exposes the service. When everything is up, you can run a git pull and start coding from your browser!


Let’s analyze the parameters of the Cloudformation you can find here:

- VpcId: Requires you to specify a valid VPC identifier. If you don’t have one created by you, the default will be fine. Just go in the console and retrieve the ID - or choose it from the menu in the Cloudformation console.
- SubnetId: Requires you to specify a valid Subnet identifier, with the subnet belonging, of course, to the VPC specified before. If you don’t have one created by you, one of the default public subnets will be fine. Again, retrieve the ID from the console - or choose it from the menu in the Cloudformation console.
- ExposedPort: A number defining the port used to expose the application. The default is 8443; I replaced it with 443.
- SourceIP: Lets you specify the IP or IP range to allow in the security group. By default, "0.0.0.0/0" will let you reach the running instance from everywhere: this could be really dangerous, thus it’s strongly suggested to restrict it.
- CertificateParameterName: If you own a domain in AWS and you generated the certificate by following the guideline above, you would like to use the domain to expose your immutable IDE and also use SSL for communication. This Cloudformation parameter contains the name of the AWS parameter that holds the certificate - you have to create the AWS parameter before the creation of the stack (in the example above, it’s the value you passed to --name).
- PrivateKeyParameterName: See CertificateParameterName.
- InstanceFamily: The family type of your instance (like t3.small). Consider that too-small instances will fail more frequently.
- RootVolumeDimension: The dimension of the root volume. In the beginning, I thought to attach a secondary EBS volume as a storage layer, but then I changed my mind because this solution is intended to be as immutable as possible. So just save your work or push it before deleting the stack, or feel free to add EBS volume handling.
- KeyPairName: The name of the key pair you want attached to the instance: you need to create this key before the creation of the stack, and it’s there for debugging purposes or if you want to access the instance (it truly depends on your setup).
- DomainName: The domain used to expose the service (if you read the certificate part above, this is YOUR_DOMAIN).
- AppPassword: The password that will be required to access the application.
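Putting the parameters into practice, a sketch of the stack creation with the awscli could look like this - every value below is a placeholder I invented, including the file names:

```shell
# Write the parameter values to a file (all values are placeholders - use yours)
cat > params.json <<'EOF'
[
  {"ParameterKey": "VpcId",       "ParameterValue": "vpc-0123456789abcdef0"},
  {"ParameterKey": "SubnetId",    "ParameterValue": "subnet-0123456789abcdef0"},
  {"ParameterKey": "ExposedPort", "ParameterValue": "443"},
  {"ParameterKey": "SourceIP",    "ParameterValue": "203.0.113.0/24"},
  {"ParameterKey": "KeyPairName", "ParameterValue": "my-key-pair"},
  {"ParameterKey": "AppPassword", "ParameterValue": "change-me-please"}
]
EOF

# Create the stack (add the remaining parameters from the list above as needed)
aws cloudformation create-stack \
    --stack-name vsc-server \
    --template-body file://template.yaml \
    --parameters file://params.json
```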


The first thing to notice is: cfn-init doesn’t run by itself. It has to be called (line 120 of the template), and this is done in the UserData section:

UserData:
  Fn::Base64:
    Fn::Join:
    - ""
    - - "#!/bin/bash -xe\n"
      - "/opt/aws/bin/cfn-init -v "
      - "         --stack "
      - Ref: AWS::StackName
      - "         --resource CodeServerPublicInstance"
      - "         --configsets end-to-end"
      - "         --region "
      - Ref: AWS::Region
      - "\n"

Then, the metadata is parsed: the Metadata section contains both the configSets definition and the steps inside every config. The Install config just uses the packages definition to install Docker, wget and gzip:

AWS::CloudFormation::Init:
  configSets:
    end-to-end:
      - Install
      - Setup
      - Service
  Install:
    packages:
      yum:
        docker: []
        wget: []
        gzip: []

In the Setup config, the certificate files are retrieved from the parameter store (files section) and safely stored inside the machine. A second section (commands) executes the commands required to retrieve the codeserver binary, unzip it and place it under the bin folder:

files:
  /etc/pki/CA/private/cert1.pem:
    content:
      Fn::Join:
        - ""
        - - ""
          - !Ref CertificateParameterName
    mode: "000600"
    owner: root
    group: root
  /etc/pki/CA/private/privkey1.pem:
    content:
      Fn::Join:
        - ""
        - - ""
          - !Ref PrivateKeyParameterName
    mode: "000600"
    owner: root
    group: root
commands:
  01_retrieve_codeserver:
    command: wget
    cwd: "~"
  02_unzip:
    command: tar -xvzf code-server1.1156-vsc1.33.1-linux-x64.tar.gz
    cwd: "~"
  03_copy_binary:
    command: cp code-server1.1156-vsc1.33.1-linux-x64/code-server /usr/bin/
    cwd: "~"
  04_make_executable:
    command: chmod +x /usr/bin/code-server
    cwd: "~"

Then, the Service config will set up systemd (buuuuuuu) to run the process. I mostly followed the instructions available here to build this config file: I’m not an expert in this kind of thing, so take it as it is, without too many warranties :)

commands:
  01_create_service_file:
    command: !Sub |
      cat <<EOF >> /etc/systemd/system/code-server.service
      [Unit]
      Description=Codeserver systemd service

      [Service]
      ExecStart=/usr/bin/code-server --disable-telemetry --password ${AppPassword} --port ${ExposedPort} --cert=/etc/pki/CA/private/cert1.pem --cert-key=/etc/pki/CA/private/privkey1.pem

      [Install]
      WantedBy=multi-user.target
      EOF
    cwd: "~"
  02_fix_permissions:
    command: "chmod 644 /etc/systemd/system/code-server.service"
    cwd: "~"
  03_reload_daemon:
    command: "systemctl --system daemon-reload"
    cwd: "~"
  04_enable_service:
    command: "systemctl enable code-server"
    cwd: "~"
  05_start_service:
    command: "systemctl start code-server"
    cwd: "~"
services:
  sysvinit:
    code-server:
      enabled: "true"
      ensureRunning: "true"

And this is pretty much the whole cfn-init setup process.


To let Cloudformation correctly create the Route53 record set that lets you reach your machine, there’s an AWS::Route53::RecordSet resource that creates the record for you, in this form:

  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneId:
      Ref: HostedZoneId
    Comment: Codeserver public instance domain name
    Name:
      Fn::Join:
      - ''
      - - Ref: CodeServerPublicInstance
        - "."
        - Ref: DomainName
        - "."
    Type: A
    TTL: '60'
    ResourceRecords:
    - Fn::GetAtt:
      - CodeServerPublicInstance
      - PublicIp

Finally, if you consider the architecture proposed at the beginning, you can even implement the notification part to provide (by email, for example) the temporary Route53 address to reach your machine. And, why not, maybe even generate the password to access the application at run time and provide it on a different channel (encrypted with a fixed key you define only once).
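For the run-time password idea, openssl can cover the generation part - a minimal sketch (the length is arbitrary):

```shell
# Generate a random password to pass as the AppPassword stack parameter
app_password=$(openssl rand -base64 24)
echo "$app_password"
```

You would then pass this value as the AppPassword parameter at stack creation and deliver it over your side channel.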


This was the first step to go live with your VSC instance and start coding from everywhere. To read about how to transform this stack into a multi-tenant solution sharing the same machine across multiple users, go ahead with the reading: My team run VSC in the browser and they are just fine - Part II 😉!

Thank you everybody for reading!

  1. If you have a domain registered somewhere else, I’m sorry, I don’t cover that scenario here because it’s out of the scope of this article, but you can google how to bring it inside Route53. ↩︎