Configuring the Jenkins S3 Publisher Plugin to archive artifacts to Swarm

Using Jenkins for builds or continuous integration generates a lot of artifacts, such as rpm and deb packages and logs. You can preserve and share these artifacts while saving local disk space by publishing them to your Swarm via the S3 protocol using the S3 Publisher Plugin.

Even better, this plugin can be configured to add extensive metadata to the published artifacts, allowing you to create Collections of, for example, all the .rpm artifacts from a successful integration test, or all the log files related to a particular source-code branch.

Step-by-step guide

  1. Configure Content Gateway in front of your Swarm and enable the S3 protocol. Ideally, access it via an HTTPS proxy in front of Gateway (see our haproxy KB).
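For step 1, a minimal HAProxy TLS-termination sketch might look like the following. The hostnames, the Gateway S3 port (8090), and the certificate path are all assumptions for illustration; your haproxy KB configuration takes precedence:

```
# Sketch: HTTPS termination in front of Content Gateway's S3 port.
# gateway1/gateway2, port 8090, and the cert path are placeholders.
frontend s3_https
    bind *:443 ssl crt /etc/haproxy/certs/swarm.pem
    default_backend gateway_s3

backend gateway_s3
    balance roundrobin
    server gw1 gateway1.example.com:8090 check
    server gw2 gateway2.example.com:8090 check
```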
  2. Using the Content UI, create a domain and a bucket (e.g. jenkins-artifacts) to store your Jenkins artifacts.
  3. Also create an S3 token for a user (e.g. a shared "build" user) that has full read and write permission in that bucket.
  4. Be sure that the bucket's hostname resolves on the Jenkins machine and reaches your Content Gateway's S3 port. This "bucket-in-host" style access requires updating /etc/hosts manually or having your DNS administrator configure a wildcard domain. Configure s3cmd or rclone on your Jenkins server to verify your S3 token access first, as these command-line tools are easier to diagnose.
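For the s3cmd check in step 4, a minimal ~/.s3cfg sketch is shown below. Here s3.example.com stands in for your own Gateway S3 domain, and the keys are the ones generated with your S3 token (all values are placeholders):

```
# Minimal ~/.s3cfg for testing access to Content Gateway's S3 endpoint.
# s3.example.com is a placeholder for your own domain.
access_key = YOUR_TOKEN_ACCESS_KEY
secret_key = YOUR_TOKEN_SECRET_KEY
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com
use_https = True
```

With this in place, "s3cmd ls s3://jenkins-artifacts" and "s3cmd put somefile s3://jenkins-artifacts/" should succeed; failures here are much easier to diagnose than inside a Jenkins job.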
  5. Go to Manage Jenkins => Manage Plugins and, under the Available tab, install the S3 Publisher Plugin if it is not already installed. It must be version 0.10.12 or later (released in 2017). You might have to restart Jenkins.
  6. Go to Manage Jenkins => Configure System => Amazon S3 Profiles and configure a profile (you can name it "caringo") with the access key and secret key from the S3 token you created. You can ignore the warning "Can't connect to S3 service: The AWS Access Key Id you provided does not exist in our records.". Unfortunately, there does not seem to be a way to configure the endpoint here.
  7. Create a directory under your ${JENKINS_HOME} directory which will contain an updated endpoints.json file:
    $ sudo mkdir -p /var/jenkins_home/caringo-s3-override/com/amazonaws/partitions/override
  8. Starting with the stock endpoints.json, first add "caringo" to the regionRegex, then add a "caringo" region, and finally add your endpoint (domain) under services => s3 => endpoint. Below is an example diff of the changes, or simply download endpoints.json and change the "" endpoint to your own domain.
    root@2822615d6c08:/# diff -u10 /original/endpoints.json /var/jenkins_home/caringo-s3-override/com/amazonaws/partitions/override/endpoints.json
    --- /original/endpoints.json 2018-01-16 21:05:44.870469461 +0000
    +++ /var/jenkins_home/caringo-s3-override/com/amazonaws/partitions/override/endpoints.json 2018-01-15 19:34:09.600137790 +0000
    @@ -6,21 +6,21 @@
             "protocols": [
             "signatureVersions": [
           "dnsSuffix": "",
           "partition": "aws",
           "partitionName": "AWS Standard",
    -      "regionRegex": "^(us|eu|ap|sa|ca)\\-\\w+\\-\\d+$",
    +      "regionRegex": "^(caringo|us|eu|ap|sa|ca)\\-\\w+\\-\\d+$",
           "regions": {
             "ap-northeast-1": {
               "description": "Asia Pacific (Tokyo)"
             "ap-northeast-2": {
               "description": "Asia Pacific (Seoul)"
             "ap-south-1": {
               "description": "Asia Pacific (Mumbai)"
    @@ -52,20 +52,23 @@
               "description": "US East (N. Virginia)"
             "us-east-2": {
               "description": "US East (Ohio)"
             "us-west-1": {
               "description": "US West (N. California)"
             "us-west-2": {
               "description": "US West (Oregon)"
    +        },
    +        "caringo": {
    +          "description": ""
           "services": {
             "acm": {
               "endpoints": {
                 "ap-northeast-1": {},
                 "ap-northeast-2": {},
                 "ap-south-1": {},
                 "ap-southeast-1": {},
                 "ap-southeast-2": {},
    @@ -1391,20 +1394,27 @@
                 "us-west-2": {
                   "hostname": "",
                   "signatureVersions": [
    +            },
    +            "caringo": {
    +              "hostname": "",
    +              "signatureVersions": [
    +                "s3",
    +                "s3v4"
    +              ]
               "isRegionalized": true,
               "partitionEndpoint": "us-east-1"
             "sdb": {
               "defaults": {
                 "protocols": [
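The three edits shown in the diff above can also be applied with a short script instead of hand-editing. The sketch below assumes the stock endpoints.json structure (a top-level "partitions" list whose first entry is the "aws" partition); the function name, the region description, and the domain argument are illustrative, not part of the plugin:

```python
def add_caringo_endpoint(endpoints, domain):
    """Patch a parsed endpoints.json structure to add a 'caringo'
    pseudo-region whose S3 endpoint is the given Swarm domain.
    Mirrors the three manual edits shown in the diff above."""
    partition = endpoints["partitions"][0]  # the "aws" partition
    # 1. Allow "caringo" through the region-name filter.
    partition["regionRegex"] = partition["regionRegex"].replace(
        "^(us|", "^(caringo|us|")
    # 2. Register the pseudo-region.
    partition["regions"]["caringo"] = {
        "description": "Swarm via Content Gateway"}
    # 3. Point the S3 service at your Gateway's S3 endpoint.
    partition["services"]["s3"]["endpoints"]["caringo"] = {
        "hostname": domain,
        "signatureVersions": ["s3", "s3v4"],
    }
    return endpoints
```

You would load the original file with json.load(), pass the result through this function with your own domain, and json.dump() it to the override path created in step 7.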
  9. Now you must restart Jenkins with a JVM "-Xbootclasspath/a" option so that it uses your new endpoints.json. You can do this by setting the JAVA_OPTS environment variable globally or for the jenkins user, or by adding it to your Jenkins startup script:
    JAVA_OPTS="-Xms2g -Xmx2g -Xbootclasspath/a:/var/jenkins_home/caringo-s3-override"
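If Jenkins runs under systemd, one way to set this (the drop-in path is an assumption; adjust to your distribution's jenkins unit) is:

```
# /etc/systemd/system/jenkins.service.d/override.conf  (path assumed)
# Run "systemctl daemon-reload" and restart Jenkins afterwards.
[Service]
Environment="JAVA_OPTS=-Xms2g -Xmx2g -Xbootclasspath/a:/var/jenkins_home/caringo-s3-override"
```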
  10. Now go to your Jenkins Job and Configure. Choose "Add post-build action" at the bottom and select "Publish artifacts to S3 Bucket".
    1. Choose the "caringo" profile you created in a previous step.
    2. Configure the "Files to upload" ("**" for all), specify the destination bucket (e.g. "jenkins-artifacts"), and choose the "caringo" Bucket Region.
    3. Check Manage Artifacts so that links are generated on the Jenkins build results page.
    4. Warning: these links will use bucket-in-host style URLs, which your browser might warn about if the hostname does not match your server certificate. Wildcard certificates only match a single level, so try using a certificate signed for "*" in addition to "".
    5. Add Metadata tags to identify these artifacts later. You can use Jenkins macros / environment variables as the value. This metadata can be used to generate Content UI Collections of artifacts related to a particular branch or build status.
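For example, metadata entries like the following would let you later build a Collection of all artifacts from one branch or build. The key names are arbitrary; the ${...} macros are standard Jenkins (and git-plugin) environment variables:

```
Key       Value
job       ${JOB_NAME}
build     ${BUILD_NUMBER}
branch    ${GIT_BRANCH}
```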

An example S3 Publish action will be added to this KB soon; please let us know if you have any problems using this or suggestions for improving this KB.

© DataCore Software Corporation. All rights reserved.