Deploy a Stack with HCP Terraform
HCP Terraform Stacks help you manage complex infrastructure deployments across multiple environments by providing a different model of infrastructure management than Terraform workspaces. Instead of loosely coupled workspaces, a Stack consists of a set of components organized into one or more deployments, each of which HCP Terraform provisions as a unit. Every deployment in your Stack provisions the same Terraform configuration, defined by your Stack’s components. A component is a Terraform module that you can customize with input variables. Stacks let you manage the lifecycle of each of your deployments independently, since changes to one deployment in your Stack do not affect the others. Each deployment in a Stack represents a set of infrastructure that works together, such as a dev, test, or production environment. HCP Terraform rolls out changes one deployment at a time, and lets you track changes across your environments.
When you manage infrastructure projects with Terraform, the configuration in a single workspace tends to become more complex as it grows with the needs of your project. HCP Terraform allows you to split your infrastructure projects into multiple workspaces, each responsible for a portion of your infrastructure. HCP Terraform also includes features to allow you to automate deployments across multiple workspaces, such as run triggers. While this limits the impact of changes and allows you to flexibly define deployment rules for your infrastructure projects, it can be difficult to track changes across multiple workspaces and get a clear picture of the state of your project across multiple environments.
In this tutorial, you will provision an HCP Terraform Stack consisting of two deployments of an AWS Lambda function and related resources, organized into three logical components. You will use HCP Terraform to deploy this infrastructure to both environments, and then add and deploy a third environment. You will review the status of each deployment in HCP Terraform, learn how Stacks manages changes between deployments, and then remove all three deployments.
Prerequisites
This tutorial assumes that you are familiar with the Terraform and HCP Terraform workflows. If you are new to Terraform, complete the Get Started collection first. If you are new to HCP Terraform, complete the HCP Terraform Get Started tutorials first.
To complete this tutorial, you will need the following:
- An HCP Terraform account and organization with Stacks enabled
- An AWS account, with AWS credentials configured in an HCP Terraform variable set
- A GitHub account
- Git installed locally
Create example repository
Navigate to the template repository for this tutorial. Click the Use this template button and select Create a new repository. Choose a GitHub account to create the repository in and name the new repository `learn-terraform-stacks-deploy`. Leave the rest of the settings at their default values.
In your terminal, clone your example repository, replacing `USER` with your own GitHub username.
$ git clone https://github.com/USER/learn-terraform-stacks-deploy
Change to the repository directory.
$ cd learn-terraform-stacks-deploy
Review components and deployments
Explore the example configuration to review how this Terraform Stack's configuration is organized.
```
.
├── LICENSE
├── README.md
├── api-gateway
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── lambda
│   ├── hello-world
│   │   └── hello.rb
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── s3
│   ├── main.tf
│   └── outputs.tf
├── deployments.tfdeploy.hcl
├── components.tfstack.hcl
├── providers.tfstack.hcl
└── variables.tfstack.hcl
```
In addition to the licensing-related files and README, the example repository has three directories containing Terraform modules: `api-gateway`, `lambda`, and `s3`. The Terraform configuration in these directories defines the components that will make up your Stack. The repository also includes two file types exclusive to configuring Stacks: a deployments file named `deployments.tfdeploy.hcl`, and three Stack configuration files with the extension `.tfstack.hcl`. These configuration files replace a Terraform workspace's root module and serve as the blueprint for the infrastructure your Stack will manage.
A Terraform Stack is made up of one or more components, each sourced from a Terraform module and configured with inputs and the module’s specified providers. HCP Terraform will provision all of these components for each of the deployments in your Stack. Each `deployment` block defined in `.tfdeploy.hcl` files represents an independent deployment of this Stack’s infrastructure.
As with Terraform configuration files, HCP Terraform processes all of the blocks in all of the `.tfstack.hcl` and `.tfdeploy.hcl` files in your Stack's root directory in dependency order, so you can organize your Stack configuration into multiple files just like Terraform configuration.
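Before digging into the example files, it helps to see the shape of these two block types together. The following is a minimal sketch, not taken from this repository; the component name, module path, variables, and provider label are all illustrative:

```hcl
# Illustrative sketch only: a component defined in a .tfstack.hcl file.
# Variable and provider declarations are omitted for brevity.
component "network" {
  source = "./network"

  inputs = {
    cidr = var.cidr
  }

  providers = {
    aws = provider.aws.this
  }
}

# ...and a deployment defined in a .tfdeploy.hcl file. Each deployment
# provisions every component in the Stack with its own input values.
deployment "staging" {
  inputs = {
    cidr = "10.0.0.0/16"
  }
}
```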
Review Stack components
Open the `providers.tfstack.hcl` file. This file contains the provider configuration for your Stack.
providers.tfstack.hcl
```hcl
required_providers {
  aws = {
    source  = "hashicorp/aws"
    version = "~> 5.7.0"
  }
  ## ...
}

provider "aws" "configurations" {
  for_each = var.regions

  config {
    region = each.value

    assume_role_with_web_identity {
      role_arn           = var.role_arn
      web_identity_token = var.identity_token
    }

    default_tags {
      tags = var.default_tags
    }
  }
}

provider "random" "this" {}

provider "archive" "this" {}

provider "local" "this" {}
```
The `required_providers` block defines the providers used in this configuration, and uses a syntax similar to the `required_providers` block nested inside the `terraform` block in Terraform configuration.
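For comparison, a standard Terraform root module declares the same provider requirement nested inside a `terraform` block:

```hcl
# Standard Terraform configuration (not a Stack): the equivalent provider
# requirement is nested inside a terraform block.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.7.0"
    }
  }
}
```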
This configuration includes `provider` blocks that configure each provider. Unlike Terraform configuration, Stacks provider blocks include a label, allowing you to configure multiple providers of each type if needed. The configuration also includes a `for_each` meta-argument so that Stacks will use a separate AWS provider configuration for each region defined in the `regions` variable. In your configuration, you would refer to the AWS provider for the `us-east-1` region as `provider.aws.configurations["us-east-1"]`.
The `assume_role_with_web_identity` block configures provider authentication to use a JWT identity token managed by a trust relationship that you will provision later in this tutorial. Each of your deployments will pass the ARN of your trust relationship’s IAM role and a corresponding JWT identity token generated by HCP Terraform each time you plan or apply your Stack. This allows each of your deployments to use its own role, which you can configure with the specific permissions required by your project.
Next, review the `components.tfstack.hcl` file. This file includes configuration for three components: `s3`, `lambda`, and `api_gateway`.
Review the configuration for the `lambda` component.
components.tfstack.hcl
## ...component "lambda" { for_each = var.regions source = "./lambda" inputs = { region = var.regions bucket_id = component.s3[each.value].bucket_id } providers = { aws = provider.aws.configurations[each.value] archive = provider.archive.this local = provider.local.this random = provider.random.this }}## ...
A Stack component includes a Terraform module as a source, input arguments for the module, and the providers that Terraform will use to provision your infrastructure. Each component can also include output values, and components can refer to the output values from other components.
The components in the example configuration use the `for_each` meta-argument to deploy their Terraform module in each of the regions you configure for the current deployment. Notice that in the `providers` argument, the AWS provider is configured to use the correct provider for the given region with `provider.aws.configurations[each.value]`.
The `lambda` component is configured to use the `bucket_id` output value from the corresponding `s3` component for each region it is deployed in. The configuration for each component is found in the subdirectory referenced by the `source` argument. You can also use modules from external sources such as the Terraform Registry with the same syntax as for Terraform `module` blocks. Since your Stack will configure the appropriate providers for each of your deployments, you cannot include `provider` blocks in modules you intend to use with Stacks.
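For example, a component could source a public module from the Terraform Registry instead of a local directory. The following sketch is not part of the example repository; the module, version constraint, and inputs are illustrative:

```hcl
# Illustrative sketch only: a component sourced from the public Terraform
# Registry. The module, version constraint, and inputs are assumptions and
# are not used by this tutorial's example configuration.
component "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  inputs = {
    name = "stacks-example-vpc"
    cidr = "10.0.0.0/16"
  }

  providers = {
    aws = provider.aws.configurations["us-east-1"]
  }
}
```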
Review deployments
Each deployment represents an instance of all of the components in your Stack, which you customize with the `inputs` argument. You can use multiple deployments to manage self-contained instances of your Stack’s infrastructure, such as development, test, and production environments, or multiple instances of the same infrastructure distributed across multiple cloud regions.
Open the `deployments.tfdeploy.hcl` file and review the two deployments for this example.
deployments.tfdeploy.hcl
deployment "development" { inputs = { regions = ["us-east-1"] role_arn = "<YOUR_ROLE_ARN>" identity_token = identity_token.aws.jwt default_tags = { stacks-preview-example = "lambda-component-expansion-stack" } }}deployment "production" { inputs = { regions = ["us-east-1", "us-west-1"] role_arn = "<YOUR_ROLE_ARN>" identity_token = identity_token.aws.jwt default_tags = { stacks-preview-example = "lambda-component-expansion-stack" } }}
This file includes two deployments, one for development, and a second for
production. Each deployment block represents an instance of the configuration
defined in the Stack, configured with the given inputs. Your development
deployment is configured for a single region, while your production deployment
is configured in two regions.
Fork identity token repository
Your Stack will use a JWT identity token to authenticate the AWS provider in each region. To do so, you must first establish an OIDC trust relationship between your AWS account and HCP Terraform, and create an AWS role with the appropriate permissions to create and manage your Stack.
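The `identity_token.aws.jwt` value referenced in the deployments file comes from an `identity_token` block in `deployments.tfdeploy.hcl`, which tells HCP Terraform to generate a workload identity token for your deployments. A minimal sketch follows; the label and audience value shown are assumptions based on common AWS OIDC trust relationships, so check the example repository for the exact block it uses:

```hcl
# Sketch of an identity_token block in deployments.tfdeploy.hcl. The "aws"
# label and audience value are assumptions based on common OIDC setups;
# the example repository defines the actual block your Stack uses.
identity_token "aws" {
  audience = ["aws.workload.identity"]
}
```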
Navigate to the template repository for creating an identity token. Click the Use this template button and select Create a new repository. Choose a GitHub account to create the repository in and name the new repository `learn-terraform-stacks-identity-tokens`. Leave the rest of the settings at their default values.
Create a project
Create an HCP Terraform project for your identity token workspace and Stack.
To do so, first log in to HCP Terraform, and select the organization you wish to use for this tutorial.
Next, ensure that Stacks is enabled for your organization by navigating to Settings > General. Ensure that the box next to Stacks is checked, and click the Update organization button.
Then navigate to Projects, click the + New Project button, name your project Learn Terraform Stacks deployments, and click the Create button to create it.
Next, ensure that your AWS credentials variable set is configured for your project. Navigate to your organization’s Settings > Variable sets page and select your AWS credentials variable set. Under Variable set scope, select Apply to specific projects and workspaces, and add your Learn Terraform Stacks deployments project to the list under Apply to projects. Scroll to the bottom of the page and click the Save variable set button to apply it to your new project.
Provision and set identity token
Next, provision the AWS identity token and role that HCP Terraform uses to authenticate with AWS when it deploys your Stack.
Navigate to Projects and select your Learn Terraform Stacks deployments project. Create a workspace to provision your project's identity tokens:
- On the project overview page, select New > Workspace.
- On the next screen, select Version Control Workflow and select your GitHub account.
- On the Choose a repository page, select the `learn-terraform-stacks-identity-tokens` repository you created in the previous step.
- On the next page, select Advanced options and enter `aws/` in the Terraform Working Directory field.
- Scroll to the bottom of the page and click the Create button to create your identity token workspace.
Once you create the workspace, HCP Terraform loads the configuration for your identity token. This process may take a few seconds to complete. Once it does, HCP Terraform prompts you to enter values for your organization and project. Enter your organization name and the project name for this tutorial, Learn Terraform Stacks deployments, and click the Save variables button.
Next, click the Start new plan button, and then Start, to plan your changes. HCP Terraform plans your changes, and prompts you to apply them. Once the plan is complete, click the Confirm and apply button to create your OpenID provider, policy, and role.
Make a copy of the `role_arn` output value. You will use it in the next step.
Set role ARN
Open the `deployments.tfdeploy.hcl` file in the `learn-terraform-stacks-deploy` repository on your local machine and replace the `<YOUR_ROLE_ARN>` placeholder for both deployments with the `role_arn` output value from your identity token workspace.
deployments.tfdeploy.hcl
deployment "development" { inputs = { regions = ["us-east-1"] role_arn = "<YOUR_ROLE_ARN>" identity_token = identity_token.aws.jwt default_tags = { stacks-preview-example = "lambda-component-expansion-stack" } }}deployment "production" { inputs = { regions = ["us-east-1", "us-west-1"] role_arn = "<YOUR_ROLE_ARN>" identity_token = identity_token.aws.jwt default_tags = { stacks-preview-example = "lambda-component-expansion-stack" } }}
Commit the role ARN to your git repository.
$ git add deployments.tfdeploy.hcl && git commit -m "Configure role ARN"
Push the change to GitHub.
$ git push
Note
For this tutorial, you configured the same role ARN for each deployment. In a real-world setting, you should use a different role for each deployment, with appropriately scoped permissions for your project.
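For example, each deployment could reference a separate role provisioned with only the permissions that environment needs. The following sketch uses placeholder ARNs rather than values from this tutorial, and omits other inputs such as `default_tags` for brevity:

```hcl
# Illustrative sketch only: separate, appropriately scoped roles for each
# deployment. Substitute the role_arn outputs from your own trust
# relationship configuration; other inputs are omitted for brevity.
deployment "development" {
  inputs = {
    regions        = ["us-east-1"]
    role_arn       = "arn:aws:iam::111111111111:role/stacks-dev"
    identity_token = identity_token.aws.jwt
  }
}

deployment "production" {
  inputs = {
    regions        = ["us-east-1", "us-west-1"]
    role_arn       = "arn:aws:iam::222222222222:role/stacks-prod"
    identity_token = identity_token.aws.jwt
  }
}
```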
Create and deploy Stack in HCP Terraform
Now that you have configured your Stack with the ARN of your AWS role, create and deploy your Stack.
In HCP Terraform, navigate to your Learn Terraform Stacks deployments project, and select New > Stack.
On the Connect to a version control provider page, select your GitHub account. Then, choose the repository containing the example Stack for this tutorial, `learn-terraform-stacks-deploy`. On the next page, leave your Stack name the same as your repository name, and click Create Stack to create it.
HCP Terraform will load your Stack configuration from your VCS repository. This process may take a few moments to complete.
Provision infrastructure
Once HCP Terraform loads your configuration, it plans your changes. HCP Terraform plans each deployment separately, and you can choose when to apply each plan. Under Deployments rollout, HCP Terraform lists your development and production deployments, as well as their current status.
Select your `development` deployment to review its status. HCP Terraform created a plan for your deployment; select the plan to review the changes to be applied. The plan may take a few minutes to complete. Once it is complete, review the resources that HCP Terraform will create, and click the Approve plan button to apply the plan.
After a few minutes, HCP Terraform finishes applying your development deployment. Review the results of the deployment, which created a Lambda function and related resources in the `us-east-1` region. You can verify the function by pasting the `hello_url` output value into your browser’s navigation bar. The Lambda function will respond with "Hello, World!".
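If you prefer the command line, you can also check the endpoint with curl. The URL below is a placeholder; substitute your own `hello_url` output value.
$ curl https://<YOUR_HELLO_URL>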
Navigate to your `production` deployment and apply it as well. Once HCP Terraform has finished deploying your infrastructure, your Stack's status will change to Rolled out. You can manage each deployment in a Stack separately, update your Stack's configuration, and roll those changes out one environment at a time.
Add a new deployment
Now, you will provision a new test deployment of your Stack.
Open the `deployments.tfdeploy.hcl` file in your editor, and add a new `deployment` block to represent a test environment. This deployment uses the same configuration as your other deployments. Replace `<YOUR_ROLE_ARN>` with the same output value you used for your other deployments.
deployments.tfdeploy.hcl
deployment "test" { inputs = { regions = ["us-east-1", "us-west-1"] role_arn = "<YOUR_ROLE_ARN>" identity_token = identity_token.aws.jwt default_tags = { stacks-preview-example = "lambda-component-expansion-stack" } }}
Commit and push this change to your GitHub repository.
First, add the change.
$ git add deployments.tfdeploy.hcl
Next, commit it.
$ git commit -m "Add test deployment"
Finally, push the change to GitHub.
$ git push
Return to your Stack in HCP Terraform and navigate to your Stack's Deployments page. Once HCP Terraform loads your configuration change, it will add your new deployment. Review the plan and apply it as you did the others.
Destroy infrastructure
Now you have used a Terraform Stack to deploy three independent environments.
Before finishing this tutorial, destroy your infrastructure.
Navigate to your Stack's Deployments page, and select the development deployment. Navigate to the Destruction and Deletion page and click the Create destroy plan button to create a destroy plan. Once the destroy plan is complete, approve it to remove your resources.
Next, repeat this process with the test and production deployments.
Then, remove your Stack by navigating back to your Stack, selecting your Stack's Destruction and Deletion page, and clicking the Force delete Stack learn-terraform-stacks-deploy button. Confirm the action, and click the Delete button.
Finally, navigate to your `learn-terraform-stacks-identity-tokens` workspace’s Settings > Destruction and Deletion page, and follow the prompts to destroy your identity token infrastructure and delete the workspace. You can also remove your project by navigating to its settings page and following the steps to delete it.
Next steps
In this tutorial, you learned how to deploy a Stack with HCP Terraform across multiple environments. You also learned how Stacks support deploying the same configuration across multiple regions, and deployed a third instance of your Stack as a test environment. In addition to allowing you to define any number of environments in a single configuration, Terraform Stacks include powerful orchestration and workflow features.
- Read the Terraform Stacks documentation for more details on Stacks features and workflow.
- Read the Terraform Stacks, explained blog post.
- Learn how to use Stacks deferred actions to manage Kubernetes workloads by following the Manage Kubernetes workloads with Stacks tutorial.