Using Jenkins Configuration as Code to setup AWS slave agents automatically


Last Updated on August 4, 2022

Setting up a Jenkins cloud configuration allows you to run jobs on slave agents, offloading your job workload to a container orchestration framework such as AWS ECS. If you’ve ever done this, you’ll know that the manual configuration is complex and one small mistake means your Jenkins job won’t start.

In this article you’ll learn how to automate your Jenkins cloud configuration setup using Jenkins Configuration as Code (JCasC). This means your cloud configuration will be reproducible, version controlled, and dynamic based on your ever-changing infrastructure.

This is the third article in this three-part series about deploying Jenkins into AWS. Here are details of all three articles:

  • in Part 1 Deploy your own production-ready Jenkins in AWS ECS we explored how to set up a robust Jenkins master in AWS using CloudFormation
  • in Part 2 Running Jenkins jobs in AWS ECS with slave agents we got slave jobs running in ECS through a full worked example, doing all the cloud configuration manually for a full understanding of the process
  • in Part 3 Using Jenkins Configuration as Code to setup AWS slave agents (this article) we’ll improve what we had in part 2 by setting up our Jenkins master’s cloud configuration automatically using Jenkins Configuration as Code

Running Jenkins slaves in AWS ECS recap

In the previous article Running Jenkins jobs in AWS ECS with slave agents, we explored the advantages of running jobs in containers, outside of the Jenkins master. By the end of the article, we had a working example setup in AWS, with a Jenkins master able to run jobs within ECS.

It’s fair to say that the example had a few manual steps to get us there. OK, maybe more than a few then! 17 configuration parameters were required to:

  • setup a link between Jenkins and AWS ECS – this included telling Jenkins what ECS cluster and subnets to deploy the slave in
  • describe the type of Jenkins slave to be created – this included telling Jenkins what Docker image to use, IAM roles, security groups, and container CPU & memory requirements

Maybe you’re thinking this manual configuration is fine as a one-off?

In my experience though, it never is just a one-off. Jenkins has an annoying habit of losing the cloud configuration when its version is updated. Plus, what if you ever need to rebuild your Jenkins from scratch in a disaster recovery scenario?

Why not follow the same principles used to define AWS infrastructure with Jenkins? Version controlled template specifications using tools like AWS CloudFormation make infrastructure easily reproducible. Jenkins has been a bit slow to catch up, but fortunately now has the Jenkins Configuration as Code plugin to allow us to do just that.

Jenkins configuration as code project

Jenkins Configuration as Code (JCasC) currently exists as a plugin, which has been around since March 2019. It has the excellent goals of configuring Jenkins with:

  • no hands on keyboard
  • no click on UI

So rather than clicking about in the UI, you define your Jenkins configuration in a YAML file which then gets ingested and applied by the plugin. In fact, all we have to do is:

  1. apply the configuration-as-code plugin to Jenkins master
  2. specify a configuration file /var/jenkins_home/jenkins.yaml
  3. make sure that file contains a valid and relevant configuration for your Jenkins (the hard bit)
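
To see the mechanism in its simplest form, here’s a minimal hypothetical jenkins.yaml (not the one used in this article) that just sets a system message and the number of executors on the master — both real JCasC properties:

```yaml
jenkins:
  # free-text banner shown on the Jenkins dashboard
  systemMessage: Configured automatically by JCasC
  # 0 executors on the master forces all jobs onto agents
  numExecutors: 0
```

On startup the plugin looks for the configuration at the location given by the CASC_JENKINS_CONFIG environment variable, falling back to jenkins.yaml in the Jenkins home directory.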

Fortunately, this plugin works both ways. You can export an existing UI-generated Jenkins configuration to help write your jenkins.yaml file. This makes things a lot easier, since it can be difficult to figure out how to construct the template for a specific configuration.

Once you’ve applied the configuration-as-code plugin to your Jenkins instance, you’ll get an additional option under Manage Jenkins:

Clicking this allows you to download or view your current configuration as a YAML template.

The idea is that the exported template can then help you define the template to be imported. Do take note of the warning though, which suggests that the exported configuration cannot always be reimported without changes.

This article isn’t a full exploration into this plugin (see the plugin’s website for that). Instead, it’s an introduction to the plugin using a real use case of needing a Jenkins master that runs jobs in ECS. In fact, that’s what we’re going to do next. 👌

An example Jenkins project in AWS

Let’s jump into getting a Jenkins environment set up, with a Jenkins master running jobs in slave agents in AWS ECS. And remember, there must be no manual configuration. Got it?

Much of this example will build on top of the example from the previous article Running Jenkins jobs in AWS ECS with slave agents, so be sure to check it out first. We’ll be making the following improvements:

  1. add our own Docker image – rather than using the jenkins/jenkins:lts Docker image, we’ll build our own so we can specify all of our own custom configurations using the Jenkins Configuration as Code plugin
  2. make CloudFormation changes – our ECS task definition will need to reference the new Docker image, pass through various infrastructure-related environment variables, and define a default Jenkins password in AWS Secrets Manager

Jenkins master with automatic cloud configuration setup

To set up Jenkins with the configuration needed to run slaves in ECS, we need to create our own Docker image for Jenkins.

Using Docker, we’ll be able to:

  • install all the plugins we need
  • include the required Jenkins configuration files

As you probably know, to create a Docker image you need a Dockerfile. Ours looks like this:

FROM jenkins/jenkins:2.346.2-jdk11

COPY jenkins-resources/plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt

COPY jenkins-resources/initialConfig.groovy /usr/share/jenkins/ref/init.groovy.d/initialConfigs.groovy
COPY jenkins-resources/jenkins.yaml /usr/share/jenkins/ref/jenkins.yaml
COPY jenkins-resources/slaveTestJob.xml /usr/share/jenkins/ref/jobs/slave-test/config.xml

ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
  • we’re using the latest Jenkins JDK 11 image at the time of writing
  • we’re copying a file plugins.txt into the image, and running a script which installs the plugins. The file includes only the minimum set of plugins we require:
    • amazon-ecs – allows Jenkins to run jobs on slave agents in AWS ECS
    • configuration-as-code – applies Jenkins configuration from a template file
    • workflow-aggregator – allows the creation of pipeline jobs
  • a file initialConfig.groovy is copied into /usr/share/jenkins/ref/init.groovy.d. Any Groovy scripts Jenkins finds in this directory on startup will be executed. In this case, the script tells Jenkins its external-facing URL
  • a file jenkins.yaml is copied into the Jenkins home directory. This is the main Jenkins Configuration as Code file, and we’ll get into the nitty-gritty details in a second.
  • a file slaveTestJob.xml is copied into the Jenkins job directory. This is a pipeline job configured to run on a Jenkins agent with label ecs (more details shortly).
  • a system property is passed to Jenkins so it doesn’t show us the setup wizard on startup
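
For reference, the plugins.txt format used by jenkins-plugin-cli is simply one plugin ID per line. A minimal version covering just the three plugins above might look like this:

```
amazon-ecs
configuration-as-code
workflow-aggregator
```

Unversioned entries resolve to the latest release at build time; for reproducible image builds you can pin each entry with a version using the plugin-id:version syntax.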

Jenkins config locations – when the Jenkins Docker image starts, anything located in /usr/share/jenkins/ref will get copied to /var/jenkins_home, the default Jenkins home directory. This is required because we’re mounting an EFS volume at /var/jenkins_home. Any files that are at this location before the volume is mounted would disappear, so the files have to be copied after mounting.

Jenkins configuration as code template

Here’s the YAML file which represents the configuration which will be applied to Jenkins on startup:

jenkins:
  slaveAgentPort: 50000
  systemMessage: Jenkins with AWS ECS demo
  agentProtocols:
    - JNLP4-connect
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
  remotingSecurity:
    enabled: true
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: ${JENKINS_USERNAME}
          password: ${JENKINS_PASSWORD}
  clouds:
    - ecs:
        credentialsId: ''
        cluster: ${ECS_AGENT_CLUSTER}
        name: ecs-cloud
        regionName: ${AWS_REGION}
        jenkinsUrl: ${JENKINS_URL}
        tunnel: ${TUNNEL}
        templates:
          - assignPublicIp: true
            cpu: 1024
            executionRole: ${AGENT_EXECUTION_ROLE_ARN}
            image: jenkins/inbound-agent:alpine
            label: ecs
            launchType: FARGATE
            logDriver: awslogs
            logDriverOptions:
              - name: awslogs-group
                value: ECSLogGroup-jenkins-for-ecs-with-agents
              - name: awslogs-region
                value: ${AWS_REGION}
              - name: awslogs-stream-prefix
                value: jenkins-agent
            memoryReservation: 2048
            securityGroups: ${AGENT_SECURITY_GROUP_ID}
            subnets: ${SUBNET_IDS}
            templateName: jenkins-agent
The file is split into two segments: Jenkins configuration and cloud configuration.

Jenkins configuration

Here we’re setting up user access to Jenkins. The JENKINS_USERNAME and JENKINS_PASSWORD environment variables are injected at runtime, and can later be used to access the Jenkins UI.

Cloud configuration

This is the real meat of the configuration, and relates almost exactly to the table of configuration from the previous article. If you haven’t read it, this can be summarised as:

  • configuration for connecting with an AWS ECS cluster – the AWS credentials themselves are derived from the AWS IAM role attached to the Jenkins master, but we have to include things like the cluster name and tunnel details so that the Jenkins slaves can communicate with their master inside our private network.
  • configuration for each ECS agent template – the ECS agent template equates to an ECS task definition, which describes how a slave is going to be run as an ECS task. This includes the Docker image reference, log configuration, security groups, and subnets. You can have many different ECS agent templates for different types of jobs you might need to run. We’ll just have one, called ecs.

Notice that the template uses a lot of environment variables. This is because these values change based on our infrastructure setup. For example, subnet IDs are generated randomly by AWS, so they’ll be different when you run this example than when I run it. You’ll see in the CloudFormation below how these environment variables are passed through to the ECS task when it starts.
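
As an aside, JCasC’s variable substitution also supports Bash-style default values, handy if you want a template to load even when a variable isn’t set. A hypothetical example:

```yaml
jenkins:
  # falls back to the literal text after :- when SYSTEM_MESSAGE is unset
  systemMessage: ${SYSTEM_MESSAGE:-Jenkins with AWS ECS demo}
```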

How did I figure out this configuration? I was helped greatly by the Fargate example given on the amazon-ecs plugin GitHub page. I also used trial-and-error to determine which properties were needed and which weren’t, e.g. jenkinsUrl: ${JENKINS_URL} is needed above, even though it doesn’t need to be specified when configuring via the UI.

Automating job creation

Jenkins Configuration as Code explicitly doesn’t concern itself with job creation. We’ve covered this topic in other articles such as Building a Spring Boot application in Jenkins where we use the Jenkins Job DSL plugin to create jobs. This time though, we’re going to keep things super-simple and just define a job using Jenkins’ native XML format.

Into the Dockerfile we copy slaveTestJob.xml, which defines a very simple pipeline job.

<?xml version="1.0" encoding="UTF-8"?><flow-definition>
    <definition class="org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition">
        <script>
pipeline {
    agent {
        label 'ecs'
    }
    stages {
        stage('Awesomeness') {
            steps {
                echo 'Hello from Jenkins slave!'
            }
        }
    }
}
        </script>
        <sandbox>true</sandbox>
    </definition>
</flow-definition>
Don’t worry about the different XML tags as this is just the format Jenkins understands. Between the <script> tags though, we’re defining a simple pipeline which importantly is configured to run on an agent with label ecs. This label corresponds with the name of the ECS agent template defined in jenkins.yaml.

Accessing the Docker image

Since I’ve only given you the highlights of the Docker image here, I thought it would only be fair to provide you access to the source which is over in the jenkins-ecs-agents GitHub repository.

The Docker image itself is hosted in the jenkins-ecs-agents Docker Hub repository. We’ll be referencing the image in the CloudFormation in the next section.

CloudFormation changes

Let’s take this configuration-as-code thing all the way and do some infrastructure-as-code too! One of the best ways to do this, I believe, is to use AWS CloudFormation to define your desired infrastructure state in a YAML template.

In Running Jenkins jobs in AWS ECS with slave agents we had already built up a CloudFormation template, so we’ll be building on top of it by:

  1. passing in some new parameters for the default Jenkins username and Jenkins URL
  2. adding an AWS Secrets Manager secret to store an autogenerated password for Jenkins login
  3. referencing the new Docker image in the ECS container definition
  4. adding a load of environment variables to pass in various infrastructure related values referenced in jenkins.yaml

New CloudFormation parameters

Let’s add the new parameters which we’ll reference later on.

  JenkinsUsername:
    Type: String
    Default: developer
  JenkinsURL:
    Type: String
    Description: Public URL of your Jenkins instance e.g. https://jenkins.example.com
  • the JenkinsUsername parameter has a default value of developer, but can be modified if you like
  • the JenkinsURL parameter must be the public URL of your Jenkins master, otherwise the Jenkins slave job won’t start correctly.

Jenkins password secret

By creating a secret using AWS Secrets Manager we can have it automatically generate an initial password for Jenkins.

  JenkinsPasswordSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: JenkinsPasswordSecret
      GenerateSecretString:
        PasswordLength: 30
        ExcludeCharacters: '"@/\'
  • let’s be security-conscious developers, and give it a nice long password. 🔒

Update the Jenkins master ECS task

The ECS task’s ContainerDefinition now needs to reference the new Docker image:

        - Name: jenkins
          Image: tkgregory/jenkins-ecs-agents:latest

We also need to pass in a load of environment variables to be used in the jenkins.yaml configuration file.

            - Name: AGENT_EXECUTION_ROLE_ARN
              Value: !GetAtt JenkinsExecutionRole.Arn
            - Name: AGENT_SECURITY_GROUP_ID
              Value: !Ref JenkinsAgentSecurityGroup
            - Name: AWS_REGION
              Value: !Ref AWS::Region
            - Name: ECS_AGENT_CLUSTER
              Value: !Ref ClusterName
            - Name: JENKINS_URL
              Value: !Ref JenkinsURL
            - Name: LOG_GROUP_NAME
              Value: !Ref CloudwatchLogsGroup
            - Name: TUNNEL
              Value: !Join
                - ''
                - - !GetAtt DiscoveryService.Name
                  - '.'
                  - !Ref AWS::StackName
                  - ':50000'
            - Name: SUBNET_IDS
              Value: !Join
                - ''
                - - !GetAtt VPCStack.Outputs.PrivateSubnet1
                  - ','
                  - !GetAtt VPCStack.Outputs.PrivateSubnet2
            - Name: JENKINS_USERNAME
              Value: !Ref JenkinsUsername

Each of these values is either referenced directly from a CloudFormation resource, or is concatenated with other values using the !Join function.
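
As a design note, the same concatenation could be written more compactly with CloudFormation’s !Sub function, which supports the Resource.Attribute syntax inline. This sketch is functionally equivalent to the !Join version of SUBNET_IDS above:

```yaml
            - Name: SUBNET_IDS
              Value: !Sub '${VPCStack.Outputs.PrivateSubnet1},${VPCStack.Outputs.PrivateSubnet2}'
```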

Lastly, we have to pass the Jenkins password stored in Secrets Manager through to the container, like this:

            - Name: JENKINS_PASSWORD
              ValueFrom: !Ref JenkinsPasswordSecret

In practice this will also get injected into the container as an environment variable to be used in jenkins.yaml.

To see all the above CloudFormation changes inline, take a look at jenkins-for-ecs-with-agents-autoconfigured.yml.

Launching the stack

Enough talking, let’s get down to business and get this Jenkins environment deployed into AWS. Just click the Launch Stack button below to deploy Jenkins to your own AWS account.

Launch CloudFormation stack

Note that this CloudFormation template works independently of those in the previous articles in this series.

On the Quick create stack page you can leave all parameters as the default values, except:

  • CertificateArn which must be set to the ARN of the certificate you’re using to provide access to Jenkins over HTTPS (see this article for more info on certificate setup)
  • JenkinsURL which must be set to the URL you’ll use to access Jenkins (see this article for more info on DNS setup)

At the bottom of the page accept the additional required capabilities, then click Create stack. Wait 10 minutes for your stack to finish creating and enter the CREATE_COMPLETE status:

Your Jenkins instance is now ready to serve. Don’t forget to configure your DNS provider to point to the load balancer’s domain name as described here.

Trying it out

Now this last section should be very quick if we’ve done everything correctly.

  1. Grab the Jenkins password by going to Services > Secrets Manager > JenkinsPasswordSecret and click Retrieve secret value
  2. Log into Jenkins using username developer and the password from step 1
  3. Click on the slave-test job
  4. Run it by clicking Build Now
  5. Click on Console Output

Once the job has completed you should see a fun and imaginative message like this:

Awesome! So our job ran on an ECS Jenkins slave and we didn’t have to click around the UI?

Yep, only Jenkins configuration as code for me from now on. 👍

But Tom, it doesn’t work…

Don’t worry! Best thing to do in this case is either:

  • look for a hint as to what went wrong in the job’s Console Output
  • look in the Jenkins logs by going to Manage Jenkins > System Log > All Jenkins Logs. Scroll to the bottom and normally there is a helpful error message describing what went wrong.
  • if those options don’t work you can email me at and I’ll scratch my head and see if I can help

Tear down

Don’t forget to delete your AWS resources to avoid unnecessary charges by going to Services > CloudFormation, then select the jenkins-for-ecs-with-agents stack and hit Delete.

Discover more Jenkins CloudFormation templates

Liked the example from this article?
Check out my premium one-click Jenkins CloudFormation templates covering many different use cases.

✅ Run Jenkins securely in your own AWS account
✅ Try out different scenarios for running Jenkins in AWS
✅ Take the bits you like and incorporate them into your own templates

Want to learn more about Jenkins?
Check out the full selection of Jenkins tutorials.




40 thoughts on “Using Jenkins Configuration as Code to setup AWS slave agents automatically”

  1. Hi Tom, this example works perfect but if you update the amazon-ecs plugin In jerkins to the version 1.46 it wont work anymore. it starts triggering this errors :

    Asked to provision 1 agent(s) for: ecs
    Nov 23, 2022 6:08:09 PM INFO com.cloudbees.jenkins.plugins.amazonecs.ECSCloud provision
    Will provision ecs-cloud-ecs-9v090, for label: ecs
    Nov 23, 2022 6:08:09 PM SEVERE hudson.triggers.SafeTimerTask run
    Timer task hudson.slaves.NodeProvisioner$NodeProvisionerInvoker@7880ad82 failed
    at hudson.slaves.NodeProvisioner$PlannedNode.(
    at com.cloudbees.jenkins.plugins.amazonecs.ECSCloud.provision(
    at com.cloudbees.jenkins.plugins.amazonecs.ECSProvisioningStrategy.apply(
    at hudson.slaves.NodeProvisioner.update(
    at hudson.slaves.NodeProvisioner.access$1000(
    at hudson.slaves.NodeProvisioner$NodeProvisionerInvoker.doRun(
    at java.base/java.util.concurrent.Executors$
    at java.base/java.util.concurrent.FutureTask.runAndReset(
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.base/java.util.concurrent.ThreadPoolExecutor$
    at java.base/

    thank you

      1. Hello Tom, I have the same issue with the ecs-cloud plugin 1.46. If I am using 1.41 version is working properly. Do you have any idea why is happening this?

  2. ok – found where to look – but not particularly helpful:
    2022-11-11 20:51:08.920+0000 [id=34] SEVERE hudson.triggers.SafeTimerTask#run: Timer task hudson.slaves.NodeProvisioner$NodeProvisionerInvoker@f71d346 failed
    at hudson.slaves.NodeProvisioner$PlannedNode.(
    at com.cloudbees.jenkins.plugins.amazonecs.ECSCloud.provision(
    at com.cloudbees.jenkins.plugins.amazonecs.ECSProvisioningStrategy.apply(
    at hudson.slaves.NodeProvisioner.update(
    at hudson.slaves.NodeProvisioner$NodeProvisionerInvoker.doRun(
    at java.base/java.util.concurrent.Executors$
    at java.base/java.util.concurrent.FutureTask.runAndReset(
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.base/java.util.concurrent.ThreadPoolExecutor$
    at java.base/
    2022-11-11 20:51:18.920+0000 [id=32] INFO c.c.j.plugins.amazonecs.ECSCloud#provision: Asked to provision 1 agent(s) for: ecs
    2022-11-11 20:51:18.920+0000 [id=32] INFO c.c.j.plugins.amazonecs.ECSCloud#provision: Will provision ecs-cloud-ecs-khzcn, for label: ecs

    1. and I had originally modified the templates, just to increase versions, but reverted that and still stuck in same place….
      source of 1.41 amazon-ecs ECSCloud:283 = (added in 283 marker)

      new NodeProvisioner.PlannedNode(
      (283) new ProvisioningCallback(merged, agentName)

      1. if I’ve spelunked the jenkins code right – the key line there is:
        if(displayName==null || future==null || numExecutors<1) throw new IllegalArgumentException();

        displayname isn't null (as it's logged)
        numExecutors isn't < 1 as is hard-coded to pass in as 1
        so what's left is the Future from Computer.threadPoolForRemoting.submit

      2. ok – dropped the stack, reverted all the way back down to using your tkgregory/jenkins-ecs-agents:latest for the task definition instead of mine (which copied your files completely) – recreated, and it works… Now at least I have a ‘known good’ to start from. Thanks for putting this whole thing out there, and for the fast responses!

  3. Ran the setup, but starting the ‘slave-test’ gets stuck on ‘Waiting for next available executor’ – any guidance for where to look for problem?

    1. Hi Dan. Did you try out any suggestions in the But Tom, it doesn’t work section yet? Also, are you using the provided CloudFormation template, or creating your own?

  4. Hi Tom.

    Is it possible something hibrid ?

    Using Jenkins Master as Fargate and Jenkins Slaves as a simple EC2 ?

  5. Hi Tom.
    Could you help me ?

    When the slave agents is launched, The cloudwatch log events shows this exception:
    “No Working Directory. Using the legacy JAR Cache location: /home/jenkins/.jenkins/cache/jars”

    So it is stopped and master continues trying create agent slave

    What could be the problem ?


    1. One thing:

      I reproduced all this structure using Terraform, with public subnets instead private subnets and NATs

    2. Hi Rogerio. I haven’t seen your error before. Are you using the supplied CloudFormation template without modification? I’ve just tried it and I can successfully run the job on the slave agent. Please give more details so I can help you.

      1. Just saw your follow up. Since you have translated this to Terraform, I would first deploy the supplied CloudFormation template and ensure that works. Then you can figure out where the difference lies within the Terraform.

      2. Hi Tom

        I found the problem.
        I had inverted setting in security group.
        So, when the agent task was fired the process was denied.

        Now it works successfully


  6. Hey Tom,

    Excellent blog entry and recipe for setting up Jenkins!

    I had one problem, though, which I thought was worth mentioning.

    My AWS account had already reached the 5 Elastic IP limit and the CloudFormation build failed due to the stack requiring 2 more. I increased the quota using the appropriate AWS form and the stack was then able to build correctly.

    Hope this helps someone!

    1. Hi Greg. Glad it helped! Yes, thankfully the Elastic IP limit is a soft limit which you can increase. Thanks for sharing.

  7. I have tried to set up the agent based on your previous post and got an error with connecting the agent. I thought maybe it is something i still missed or changed without noticing – so i deleted everything and imported this task with the agent already configured but unfortunately the problem persists! No matter what i am doing, i still cannot connect my agent. My only concern would be that i set up the load balancer CNAME record on my external DNS but as i think you did the same?

    What could be the reason? I still try since 2 days to continuously get it fixed but still have no idea and did not find any hint on the net.
    This is what i get as i try to connect to my agent using the CLI (java -jar agent.jar …..)

    INFO: Agent discovery successful
    Agent address: 8fbe7fc288d7426aab8d2c15eca9e876.jenkins.ecsjenkinsagent
    Agent port: 50000
    Identity: 6d:50:16:d5:32:e5:98:30:05:79:4f:7e:77:cb:11:84
    Feb. 18, 2022 5:42:16 PM hudson.remoting.jnlp.Main$CuiListener status
    INFO: Handshaking
    Feb. 18, 2022 5:42:16 PM hudson.remoting.jnlp.Main$CuiListener status
    INFO: Connecting to 8fbe7fc288d7426aab8d2c15eca9e876.jenkins.ecsjenkinsagent:50000
    Feb. 18, 2022 5:42:16 PM hudson.remoting.jnlp.Main$CuiListener error
    SEVERE: null
    at java.base/
    at java.base/
    at java.base/
    at java.base/
    at java.base/
    at hudson.remoting.Engine.connectTcp(
    at hudson.remoting.Engine.innerRun(

    PLLEASE ,if you had similar issues or can give me a hint of what i could try help me out on this. And thanks a lot again for all the good work you do.

    1. Hi Daniel. So you tried the CloudFormation from this article and it’s still not working? I have used this template yesterday without issue.

      I have a suggestion though. Take a look at the agent ECS task when it starts and inspect Containers > Command which is the command that is passed to the container. In particular we’re interested in the Jenkins URL, which you can check for validity.

      Mine shows like this:

      Command [“-url”,””,”-tunnel”,”jenkins.jenkins-for-ecs-with-agents-autoconfigured:50000″,”86cf08dac6120386de1ffbb8b12e62dca3113ed8cd38354ec99fca84d89728a3″,”ecs-cloud-ecs-gf8a1″]

      Can you verify that the agent will be able to connect to the URL specified?

  8. Hi Tom,
    Thanks for this awesome article.

    I was hopping if you can help with one doubt regarding configuration as a code.

    I did exactly the way you have described, After first creation of the service I tried adding few things to jenkins.yaml like allowedOverrides in cloud::ecs.
    Now I have new image on dockerhub and also updated the image tag in cloudformation template.
    After these changes i was expecting the same to be reflected on the jenkins, but this in not the case.

    Can you please guide me to the right direction?

    1. Hi Ujjwal. Did you deploy the new CloudFormation template? Take a look at the ECS task definition in the AWS Console to see what image it’s using.

      1. Yes I did deploy the changes, also confirmed the same in task definition.

        I believe the problem is with mountPoints in containerDefinition. Will the existing jenkins.yaml file in efs be replaced with the new one from the container?

        My understanding of docker is that If the host volume/mount exists (in this case its EFS) and contains files it will “override” whatever is in the container. If not, the container files will be mirrored onto the host volume/mount and the container folder and the host will be in sync.

  9. Hi Tom,

    We have deployed master and slave. Slave test job is working fine.

    Could you please let us know how do you install awscli and docker on the containers?

    Do we need to install on master? The jobs will run on the slave right?

    At the moment , i am not able to use any AWS commands such as aws s3 ls.

    1. You can extend the jenkins/inbound-agent:alpine Docker image to include the AWS CLI command.

      Once you’ve built and published it, in Jenkins you can configure the Docker Image value within ECS agent templates.

  10. Hi Tom,

    Thanks for this awesome article.

    One bit I was hoping you could expand in is you mention that:

    “when the Jenkins Docker image starts, anything located in /usr/share/jenkins will get copied to /var/jenkins_home,”.
    I am really struggling to see what bit of config causes this to happen? Could you explain/point me in the right direction please?


    1. Hi George. This functionality comes with the Jenkins Docker image.

      In such a derived image, you can customize your jenkins instance with hook scripts or additional plugins. For this purpose, use /usr/share/jenkins/ref as a place to define the default JENKINS_HOME content you wish the target installation to look like.

  11. This was a real help. Was struggling to move from one build system cobbled together over years to a full ‘build from scratch’ repeatability.

    Many thanks

  12. Hi Tom,
    The Jenkins on ECS series is really interesting. Thank you for the simple explanations.
    Do you think we can mount a common efs for all the Jenkins agents on FARGATE ? This would be helpful for workloads built using maven(to cache maven artefacts), without the efs the containers are pulling the maven artefacts for every build. Another use case would be to run parallel task with multiple containers on the same code source/binary.

    1. Hi Subhendu. This is a very interesting suggestion. I agree that having some way to use cached Maven/Gradle artifacts would save time, but I haven’t used EFS in this way before. Let me know how you get on.

      Regarding your parallel pipeline stages, you could look into using the stash feature to share previously built files between parallel stages running on different agents.

  13. Thanks Tom, a really great series of blogs that really helped me get my own jenkins cluster running on fargate. I did have one question though, is there a way to configure the master docker image to pull the configuration from a git repository on startup instead of having to copy the yaml into the Docker image directly ?

    1. Hi Rich. Glad the articles helped you.

      Interesting suggestion, and it does make things a bit more dynamic. The downside might be the risk of committing a broken configuration, and then having your Jenkins instance fail to start.

      If you wanted to implement that, you could just have a script that runs as part of the Docker CMD instruction which goes off and gets your configuration from Git on container startup.

      Also worth checking out the docs on Triggering Configuration Reload which could be a slightly different approach.

  14. Thanks for this Tom! Such a great and very helpful blogs! This tutorial helped me a lot to have a production setup of my CI/CD stack. There’s some minor change that I made, instead of using jenkins/jenkins:2.249.1-jdk11, I used the latest version (jenkins/jenkins:lts) and for the agent as well, jenkins/inbound-agent:latest.

    1. Hi Troy. You’re welcome, and thanks for the suggestion! I usually prefer to use a fixed version and regularly update it, to avoid any surprises.

  15. Hi Tom,

    Thanks for your reply, I’m actually still running your example job so the memory should have been kept at a minimum really. I tried increasing the memory and still got the same errors.

    I just tried switching the agent docker image to use jenkins/inbound-agent:latest rather then alpine and now everything seems to be working correctly. Not sure if they have released something since you did this tutorial that breaks they alpine version on ECS.

    Also I used CDK to deploy the stack so there is a chance that I have a slightly different configuration up to yours somewhere, although I can’t see where.


  16. Firstly, thanks for this awesome tutorial, it saved me so much time and taught me a lot around setting up Jenkins correctly.

    One issue I seem to be getting a lot is this error:
    SEVERE: Failed to connect to https://jenkins.MYDOMAIN.COM/tcpSlaveAgentListener/: connect timed out

    The error causes that ECS task to fail. However the master then retires it after a few minutes and on this second attempt it works as expect. Have you ever come across anything like that and have you got any recommendations for debugging the issue?

    1. Hi George. Glad the tutorial helped you.

      I’m not sure why this error is happening. Did you try increasing the Jenkins task definition CPU and memory in the CloudFormation template? It’s currently set to 512 CPU and 1024 memory, which may not be enough for your workloads.

      Are you running heavy workloads? Do you see any errors in the Jenkins logs?

      Let me know how you get on.
