[Terragrunt GitOps - Part 3] Meta triggers

Introduction

We'll finally delve into some Terraform code to understand how to create Cloud Build triggers, with a focus on meta triggers.

Terragrunt-runner module

Please find the code for the repo here.

The core of this module is the Cloud Build trigger resource (file in the repo here). As you can see in the for_each, the module creates three triggers: one for the apply action, one for plan, and one for destroy.

  for_each = {
    apply = {
      invert_regex   = false
      local_var_name = "apply"
    }
    plan = {
      invert_regex   = true
      local_var_name = "plan"
    }
    destroy = {
      local_var_name = "destroy"
    }
  }

The destroy trigger is a bit different from the others, as it's manual: the assumption is that you plan and apply in reaction to commits, but destroy only manually.

  dynamic "source_to_build" {
    for_each = contains(["destroy"], each.key) ? ["dummy"] : []

    content {
      repository = var.cb_repository_id
      ref        = "refs/heads/main"
      repo_type  = "UNKNOWN"
    }
  }
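
The plan and apply triggers, by contrast, react to pushes, and the invert_regex flag from the for_each decides whether the branch filter matches main (apply) or everything except main (plan). A minimal sketch of how such a filter could look for a repository connected through a Cloud Build connection; the exact block in the repo may differ:

  dynamic "repository_event_config" {
    for_each = contains(["destroy"], each.key) ? [] : ["dummy"]

    content {
      repository = var.cb_repository_id

      push {
        # plan uses invert_regex = true, so it fires on pushes to any branch except main
        branch       = "^main$"
        invert_regex = try(each.value.invert_regex, false)
      }
    }
  }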

The lengthiest part is the one specifying the build config. It's based on nested dynamic blocks.

An obvious question arises: why build the config this way instead of using a cloudbuild.yaml file? My motivation is as follows:

  • when you use YAML configs, those files have to be present in the repository that triggered the job. If you want to use the runners for a variety of use cases (for many terragrunt repos dedicated to different solutions), it's more convenient to keep the build config inside the module

  • you avoid repeating yourself across YAML configs. They don't differ that much from one another, so in my case you have to make fewer code changes when you want to modify the build config.

  • It's easier to "inject" variables into Terraform locals than into Cloud Build substitutions (see the sketch after this list).
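
As a quick illustration of that last point, a Terraform local can interpolate module variables straight into a step's arguments, whereas a cloudbuild.yaml would need a substitution declared on every trigger. A minimal sketch; the local name and the command are simplified assumptions:

  # Hypothetical local: the variable is interpolated directly, no _SUBSTITUTION is declared on the trigger.
  terragrunt_plan_args = [
    "-c",
    "cd ${var.terragrunt_run_level_directory} && terragrunt run-all plan"
  ]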

The tradeoff: the module is harder to read and understand, and it took some time to refine.

How it works: the build config is first dynamically created in the cloudbuild_config module. It's then exposed as an output, consumed at the "root" of the module (which invokes it here), and finally used in the trigger resource.

output "all_config_orders" {
  description = "List of steps (ordered!) that should be sequentially processed by the dynamic block in each trigger definition file."
  value = {
    apply   = local.order_of_apply_steps
    plan    = local.order_of_plan_steps
    destroy = local.order_of_destroy_steps
  }
}
module "config_locals" {
  source = "./inner_modules/cloudbuild_config"

  access_token_secret_id         = var.access_token_secret_id
  common_project                 = var.common_project
  builder_full_name              = var.builder_full_name
  terragrunt_run_level_directory = var.terragrunt_run_level_directory
}

  build {
    logs_bucket   = module.config_locals.all_config_files[each.value.local_var_name]["logs_bucket"]
    timeout       = lookup(module.config_locals.all_config_files[each.value.local_var_name], "timeout", null)
    substitutions = lookup(module.config_locals.all_config_files[each.value.local_var_name], "substitutions", null)

    ...
  }
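
The "..." above hides the steps themselves, which is where the nested dynamic blocks come in. A rough, simplified sketch of that part; local.step_definitions is a hypothetical map used purely for illustration, while the real module looks everything up in the inner module's outputs:

    # One step block per entry in the ordered list exposed by the inner module.
    dynamic "step" {
      for_each = module.config_locals.all_config_orders[each.value.local_var_name]

      content {
        id         = step.value
        name       = local.step_definitions[step.value]["name"]
        args       = local.step_definitions[step.value]["args"]
        secret_env = try(local.step_definitions[step.value]["secret_env"], [])
      }
    }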

How to access private modules?

In the last article of this series, I explained that we needed to create a GitHub fine-grained personal access token and store it in GCP's Secret Manager. We'll use that token in the runner module to clone private modules.

To use this token programmatically, we need a few elements:

  • The GitHub CLI: we'll use it to run the gh auth login --with-token command

  • Access to the token from Secret Manager

  • A copy of the modified git config files back to the root directory in later build steps

We make the token available in the build:

  access_token_available_secret = var.access_token_secret_id == "" ? {} : {
    access_token = {
      version_name = "projects/${var.common_project}/secrets/${var.access_token_secret_id}/versions/latest"
      env          = "ACCESS_TOKEN"
    }
  }

    available_secrets = {
      secret_manager = merge(
        local.access_token_available_secret
      )
    }
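
Inside the trigger's build block, that merged map is then turned into the available_secrets configuration with nested dynamic blocks. A minimal sketch, assuming the merged map is exposed as a hypothetical local.cb_available_secrets; the exact wiring in the repo may differ:

    dynamic "available_secrets" {
      for_each = length(local.cb_available_secrets.secret_manager) > 0 ? ["dummy"] : []

      content {
        dynamic "secret_manager" {
          for_each = local.cb_available_secrets.secret_manager

          content {
            env          = secret_manager.value.env
            version_name = secret_manager.value.version_name
          }
        }
      }
    }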

And then add a Cloud Build step if needed:

  pre_terragrunt_run_steps = var.access_token_secret_id == "" ? {} : {
    token_setup = {
      id         = "gh token setup"
      name       = var.builder_full_name
      secret_env = ["ACCESS_TOKEN"]
      args = [
        "-c",
        <<-EOT
          echo "$$ACCESS_TOKEN" > /root/access_token
          gh auth login --with-token < /root/access_token
          gh auth setup-git

          cp -a /root /root-copy
        EOT
      ]
    }
  }

If we use the token, later steps have to copy the saved contents back to the root directory:

            if [[ "${var.access_token_secret_id}" != "" ]]
            then
              cp -a /root-copy /root
            fi
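
Putting it together, the terragrunt step can run that snippet before executing the actual commands. A simplified sketch of such a step definition; the id and the terragrunt command are assumptions, not the exact code from the repo:

  terragrunt_apply_step = {
    id   = "terragrunt apply"
    name = var.builder_full_name
    args = [
      "-c",
      <<-EOT
        if [[ "${var.access_token_secret_id}" != "" ]]
        then
          cp -a /root-copy /root
        fi
        cd ${var.terragrunt_run_level_directory}
        terragrunt run-all apply --terragrunt-non-interactive
      EOT
    ]
  }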

I've gone through this part quite quickly, but you can refer to the official documentation of Cloud Build and the GitHub CLI if you have trouble understanding it.

Note: remember that in the previous article, we made sure that the Docker image running these jobs had the GitHub CLI installed.

Non-trigger resources of the module

The module also creates a few other things. The service account and IAM bindings are particularly important: we have to attach a dedicated SA to the builds.

It also creates GCS buckets for artifacts and logs.
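
A rough sketch of those supporting resources, with simplified, hypothetical names; the actual module parametrizes them further with prefixes, suffixes, and role lists:

# Dedicated SA attached to the builds via the trigger's service_account argument.
resource "google_service_account" "trigger_sa" {
  project      = var.project_id
  account_id   = "${var.resource_name_prefix}cb-runner${var.resource_name_suffix}"
  display_name = "Cloud Build runner SA"
}

# Folder-level roles for the SA, driven by var.meta_sa_folder_roles_list.
resource "google_folder_iam_member" "trigger_sa_roles" {
  for_each = toset(var.meta_sa_folder_roles_list)

  folder = var.solution_folder
  role   = each.value
  member = "serviceAccount:${google_service_account.trigger_sa.email}"
}

# Bucket for build logs (a similar one holds artifacts).
resource "google_storage_bucket" "logs" {
  project                     = var.project_id
  name                        = "${var.project_id}-cb-logs"
  location                    = var.cloudbuild_gcs_location
  uniform_bucket_level_access = true
}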

Pre-commit

I added pre-commit configuration files to all of my repos. In order to use the tool, you have to:

  • install pre-commit on your machine

  • execute pre-commit install in the root of the repository

Now, with every commit, pre-commit will be invoked to check/lint the code and generate a README section.

Create meta triggers

I briefly explained the runner module. Now let's use it to create meta triggers.

As mentioned in the first article of the series, meta triggers are excluded from GitOps and have to be "bootstrapped" manually. I use a very simple approach - in the examples directory of the runner module, I invoke the module and run terraform apply locally to create the meta triggers.

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 5.0.0"
    }
  }
}

module "meta_trigger" {
  source = "./.."

  resource_name_prefix           = "meta-"
  resource_name_suffix           = "-bootstrap"
  project_id                     = "<your-project-id>"
  included_files_list            = ["envs/*", "envs/onboard/**"]
  builder_full_name              = "<your-builder-name>"
  terragrunt_run_level_directory = "envs/onboard"
  solution_folder                = "<folder-with-the-solution>"
  meta_sa_folder_roles_list = [
    "roles/editor",
    "roles/secretmanager.secretAccessor",
    "roles/resourcemanager.projectIamAdmin"
  ]
  cloudbuild_gcs_location = "europe-west1"
  trigger_location        = "europe-west1"
  trigger_purpose         = "meta"
  common_project          = "<common-project-id>"
  cb_repository_id        = "<repository-id>"
  access_token_secret_id  = "gh-access-token"
}

A few comments:

  • project_id and common_project will be the same value in this case (because the meta triggers are deployed in the "common" project).

  • In builder_full_name, provide the path to your builder image, for example "europe-west1-docker.pkg.dev/prj-terragrunt-common/terragrunt-docker-images/terragrunt-image:v1.1"

  • trigger_purpose must be "meta".

  • In cb_repository_id, provide the full resource name of the repository linked through your Cloud Build connection, for example "projects/prj-terragrunt-common/locations/europe-west1/connections/terragrunt-connection/repositories/piotriwn-terragrunt-example-envs"

  • access_token_secret_id is the ID of the secret containing the GitHub fine-grained PAT.

  • solution_folder is the number of the folder that hosts the SP projects. You may want to adjust this to your needs. An example is "folders/90536407307".

  • terragrunt_run_level_directory is very important, as it specifies the level on which terragrunt commands are executed. In our design, it's envs/onboard. You have to make it consistent with the contents of the terragrunt-example-envs repo.

When you're ready, navigate to this directory, run terraform apply, and off we go!

Apply complete! Resources: 10 added, 0 changed, 0 destroyed.

Afterwards, you can examine the resulting triggers in the Cloud Build console.

Conclusion

In this article, I explained some of the decisions made when writing the runner module. I went (rather quickly) through the parts of the code I found particularly interesting or difficult. Then we deployed the meta triggers in the common project.

Now we're ready to start onboarding customers to our solution. We'll do that in the next article.