Node groups submodule #650

Merged
38 commits
- b84b346 WIP Move node_groups to a submodule (dpiddock, Dec 27, 2019)
- ffcf54b Split the old node_groups file up (dpiddock, Dec 27, 2019)
- 384dbed Start moving locals (dpiddock, Dec 27, 2019)
- 07ad28e Simplify IAM creation logic (dpiddock, Dec 27, 2019)
- c880766 depends_on from the TF docs (dpiddock, Dec 27, 2019)
- eea2ab4 Wire in the variables (dpiddock, Dec 27, 2019)
- 058d26c Call module from parent (dpiddock, Dec 27, 2019)
- 283d50d Allow to customize the role name. As per workers (dpiddock, Dec 27, 2019)
- 11d9008 aws_auth ConfigMap for node_groups (dpiddock, Dec 27, 2019)
- 7a8b60a Get the managed_node_groups example to plan (dpiddock, Dec 27, 2019)
- 55195bb Get the basic example to plan too (dpiddock, Dec 27, 2019)
- e8c60da create_eks = false works (dpiddock, Dec 27, 2019)
- f81984b Update Changelog (dpiddock, Dec 27, 2019)
- 55ab21f Update README (dpiddock, Dec 27, 2019)
- c3a27b3 Wire in node_groups_defaults (dpiddock, Dec 29, 2019)
- ada2634 Remove node_groups from workers_defaults_defaults (dpiddock, Dec 29, 2019)
- d9f795a Synchronize random and node_group defaults (dpiddock, Dec 29, 2019)
- 2e45ba7 Error: "name_prefix" cannot be longer than 32 (dpiddock, Dec 29, 2019)
- 2bd52e9 Update READMEs again (dpiddock, Dec 29, 2019)
- fa598fd Fix double destroy (dpiddock, Dec 29, 2019)
- d628605 Remove duplicate iam_role in node_group (dpiddock, Dec 29, 2019)
- 812daf3 Fix index fail if node group manually deleted (dpiddock, Dec 29, 2019)
- 6560f21 Keep aws_auth template in top module (dpiddock, Jan 2, 2020)
- 7c8aee7 Hack to have node_groups depend on aws_auth etc (dpiddock, Jan 2, 2020)
- e36342e Pull variables via the random_pet to cut logic (dpiddock, Jan 2, 2020)
- 12c92e2 Pass all ForceNew variables through the pet (dpiddock, Jan 2, 2020)
- 47c8635 Do a deep merge of NG labels and tags (dpiddock, Jan 2, 2020)
- 4f64d9c Update README.. again (dpiddock, Jan 2, 2020)
- 78c9272 Additional managed node outputs #644 (dpiddock, Jan 2, 2020)
- 5272127 Remove unused local (dpiddock, Jan 3, 2020)
- 34bd697 Use more for_each (dpiddock, Jan 4, 2020)
- 7f0a2c6 Remove the change when create_eks = false (dpiddock, Jan 4, 2020)
- 0d322da Make documentation less confusing (dpiddock, Jan 4, 2020)
- 0bab29e node_group version user configurable (dpiddock, Jan 6, 2020)
- 3c77ccf Pass through raw output from aws_eks_node_groups (dpiddock, Jan 6, 2020)
- 48f0c03 Merge workers defaults in the locals (dpiddock, Jan 7, 2020)
- 59813e6 Merge branch 'master' into node-groups (max-rocket-internet, Jan 7, 2020)
- c996076 Fix typo (dpiddock, Jan 7, 2020)
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -26,6 +26,8 @@ project adheres to [Semantic Versioning](http://semver.org/).
- Adding node group iam role arns to outputs. (by @mukgupta)
- Added the OIDC Provider ARN to outputs. (by @eytanhanig)
- **Breaking:** Change logic of security group whitelisting. Will always whitelist worker security group on control plane security group either provide one or create new one. See Important notes below for upgrade notes (by @ryanooi)
- Move `eks_node_group` resources to a submodule (by @dpiddockcmp)
- Add complex output `node_groups` (by @TBeijen)

#### Important notes

5 changes: 3 additions & 2 deletions README.md
@@ -181,7 +181,8 @@ MIT Licensed. See [LICENSE](/~https://github.com/terraform-aws-modules/terraform-a
| map\_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | list(string) | `[]` | no |
| map\_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no |
| map\_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no |
| node\_groups | A list of maps defining node group configurations to be defined using AWS EKS Managed Node Groups. See workers_group_defaults for valid keys. | any | `[]` | no |
| node\_groups | Map of maps of node groups to create. See `node_groups` module's documentation for more details | any | `{}` | no |
| node\_groups\_defaults | Map of values to be applied to all node groups. See `node_groups` module's documentation for more details | any | `{}` | no |
| permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | string | `"null"` | no |
| subnets | A list of subnets to place the EKS cluster and workers within. | list(string) | n/a | yes |
| tags | A map of tags to add to all resources. | map(string) | `{}` | no |
@@ -218,7 +219,7 @@ MIT Licensed. See [LICENSE](/~https://github.com/terraform-aws-modules/terraform-a
| config\_map\_aws\_auth | A kubernetes configuration to authenticate to this EKS cluster. |
| kubeconfig | kubectl config file contents for this EKS cluster. |
| kubeconfig\_filename | The filename of the generated kubectl config. |
| node\_groups\_iam\_role\_arns | IAM role ARNs for EKS node groups |
| node\_groups | Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys |
| oidc\_provider\_arn | The ARN of the OIDC Provider if `enable_irsa = true`. |
| worker\_autoscaling\_policy\_arn | ARN of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` |
| worker\_autoscaling\_policy\_name | Name of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` |
7 changes: 2 additions & 5 deletions aws_auth.tf
@@ -43,13 +43,10 @@ data "template_file" "worker_role_arns" {
}

data "template_file" "node_group_arns" {
count = var.create_eks ? local.worker_group_managed_node_group_count : 0
count = var.create_eks ? length(module.node_groups.aws_auth_roles) : 0
template = file("${path.module}/templates/worker-role.tpl")

vars = {
worker_role_arn = lookup(var.node_groups[count.index], "iam_role_arn", aws_iam_role.node_groups[0].arn)
platform = "linux" # Hardcoded because the EKS API currently only supports linux for managed node groups
}
vars = module.node_groups.aws_auth_roles[count.index]
}

resource "kubernetes_config_map" "aws_auth" {
26 changes: 14 additions & 12 deletions examples/managed_node_groups/main.tf
@@ -92,27 +92,29 @@ module "eks" {

vpc_id = module.vpc.vpc_id

node_groups = [
{
name = "example"
node_groups_defaults = {
ami_type = "AL2_x86_64"
disk_size = 50
}

node_group_desired_capacity = 1
node_group_max_capacity = 10
node_group_min_capacity = 1
node_groups = {
example = {
desired_capacity = 1
max_capacity = 10
min_capacity = 1

instance_type = "m5.large"
node_group_k8s_labels = {
k8s_labels = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
node_group_additional_tags = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
additional_tags = {
ExtraTag = "example"
}
}
]
defaults = {}
}

map_roles = var.map_roles
map_users = var.map_users
4 changes: 4 additions & 0 deletions examples/managed_node_groups/outputs.tf
@@ -23,3 +23,7 @@ output "region" {
value = var.region
}

output "node_groups" {
description = "Outputs from node groups"
value = module.eks.node_groups
}
17 changes: 2 additions & 15 deletions local.tf
@@ -16,9 +16,8 @@ locals {
default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
kubeconfig_name = var.kubeconfig_name == "" ? "eks_${var.cluster_name}" : var.kubeconfig_name

worker_group_count = length(var.worker_groups)
worker_group_launch_template_count = length(var.worker_groups_launch_template)
worker_group_managed_node_group_count = length(var.node_groups)
worker_group_count = length(var.worker_groups)
worker_group_launch_template_count = length(var.worker_groups_launch_template)

default_ami_id_linux = data.aws_ami.eks_worker.id
default_ami_id_windows = data.aws_ami.eks_worker_windows.id
@@ -80,15 +79,6 @@ locals {
spot_allocation_strategy = "lowest-price" # Valid options are 'lowest-price' and 'capacity-optimized'. If 'lowest-price', the Auto Scaling group launches instances using the Spot pools with the lowest price, and evenly allocates your instances across the number of Spot pools. If 'capacity-optimized', the Auto Scaling group launches instances using Spot pools that are optimally chosen based on the available Spot capacity.
spot_instance_pools = 10 # "Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify."
spot_max_price = "" # Maximum price per unit hour that the user is willing to pay for the Spot instances. Default is the on-demand price
ami_type = "AL2_x86_64" # AMI Type to use for the Managed Node Groups. Can be either: AL2_x86_64 or AL2_x86_64_GPU
ami_release_version = "" # AMI Release Version of the Managed Node Groups
source_security_group_id = [] # Source Security Group IDs to allow SSH Access to the Nodes. NOTE: IF LEFT BLANK, AND A KEY IS SPECIFIED, THE SSH PORT WILL BE OPENED TO THE WORLD
node_group_k8s_labels = {} # Kubernetes Labels to apply to the nodes within the Managed Node Group
node_group_desired_capacity = 1 # Desired capacity of the Node Group
node_group_min_capacity = 1 # Min capacity of the Node Group (Minimum value allowed is 1)
node_group_max_capacity = 3 # Max capacity of the Node Group
node_group_iam_role_arn = "" # IAM role to use for Managed Node Groups instead of default one created by the automation
node_group_additional_tags = {} # Additional tags to be applied to the Node Groups
}

workers_group_defaults = merge(
@@ -133,7 +123,4 @@ locals {
"t2.small",
"t2.xlarge"
]

node_groups = { for node_group in var.node_groups : node_group["name"] => node_group }

}
55 changes: 55 additions & 0 deletions modules/node_groups/README.md
@@ -0,0 +1,55 @@
# eks `node_groups` submodule

Helper submodule to create and manage resources related to `eks_node_groups`.

## Assumptions
* Designed for use by the parent module and not directly by end users

## Node Groups' IAM Role
The role ARN specified in `var.default_iam_role_arn` will be used by default. In a simple configuration this will be the worker role created by the parent module.

`iam_role_arn` must be specified in either `var.node_groups_defaults` or `var.node_groups` if the parent module is not creating its default IAM role, for example when `manage_worker_iam_resources` is set to `false` in the parent.
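
For illustration only, a minimal sketch of a parent configuration that supplies its own role when the module-managed one is disabled (the role ARN, VPC and subnet IDs are placeholder values, not from this PR):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name = "example"
  vpc_id       = "vpc-00000000"                         # placeholder
  subnets      = ["subnet-11111111", "subnet-22222222"] # placeholders

  # The parent will not create worker IAM resources...
  manage_worker_iam_resources = false

  # ...so every node group must be pointed at an existing role.
  node_groups_defaults = {
    iam_role_arn = "arn:aws:iam::111122223333:role/eks-node-group" # placeholder
  }

  node_groups = {
    example = {
      desired_capacity = 1
      max_capacity     = 3
      min_capacity     = 1
    }
  }
}
```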

## `node_groups` and `node_groups_defaults` keys
`node_groups_defaults` is a map that can take the keys below. Its values are used for any node group that does not set them itself.

`node_groups` is a map of maps. The first-level key is used as the unique value for `for_each` resources and in the `aws_eks_node_group` name. The inner map can take the keys below.

| Name | Description | Type | If unset |
|------|-------------|:----:|:-----:|
| additional\_tags | Additional tags to apply to node group | map(string) | Only `var.tags` applied |
| ami\_release\_version | AMI version of workers | string | Provider default behavior |
| ami\_type | AMI Type. See Terraform or AWS docs | string | Provider default behavior |
| desired\_capacity | Desired number of workers | number | `var.workers_group_defaults[asg_desired_capacity]` |
| disk\_size | Workers' disk size | number | Provider default behavior |
| iam\_role\_arn | IAM role ARN for workers | string | `var.default_iam_role_arn` |
| instance\_type | Workers' instance type | string | `var.workers_group_defaults[instance_type]` |
| k8s\_labels | Kubernetes labels | map(string) | No labels applied |
| key\_name | Key name for workers. Set to empty string to disable remote access | string | `var.workers_group_defaults[key_name]` |
| max\_capacity | Max number of workers | number | `var.workers_group_defaults[asg_max_size]` |
| min\_capacity | Min number of workers | number | `var.workers_group_defaults[asg_min_size]` |
| source\_security\_group\_ids | Source security groups for remote access to workers | list(string) | If key\_name is specified: THE REMOTE ACCESS WILL BE OPENED TO THE WORLD |
| subnets | Subnets to contain workers | list(string) | `var.workers_group_defaults[subnets]` |
| version | Kubernetes version | string | Provider default behavior |
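
A sketch tying the table above together (group names and values are invented for illustration): per-group keys win over `node_groups_defaults`, and `source_security_group_ids` should accompany `key_name` to avoid world-open SSH:

```hcl
node_groups_defaults = {
  disk_size     = 50
  instance_type = "m5.large"
  k8s_labels = {
    Team = "platform"
  }
}

node_groups = {
  general = {
    desired_capacity = 2
    max_capacity     = 6
    min_capacity     = 2

    # Without source_security_group_ids, setting key_name alone
    # opens the SSH port to the world.
    key_name                  = "my-ssh-key"             # placeholder
    source_security_group_ids = ["sg-0123456789abcdef0"] # placeholder
  }

  gpu = {
    # Per-group values override node_groups_defaults.
    ami_type         = "AL2_x86_64_GPU"
    instance_type    = "p3.2xlarge"
    desired_capacity = 1
    max_capacity     = 2
    min_capacity     = 1

    additional_tags = {
      Workload = "ml"
    }
  }
}
```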

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|:----:|:-----:|:-----:|
| cluster\_name | Name of parent cluster | string | n/a | yes |
| create\_eks | Controls if EKS resources should be created (it affects almost all resources) | bool | `"true"` | no |
| default\_iam\_role\_arn | ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults` | string | n/a | yes |
| node\_groups | Map of maps of `eks_node_groups` to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | `{}` | no |
| node\_groups\_defaults | Map of values to be applied to all node groups. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | n/a | yes |
| tags | A map of tags to add to all resources | map(string) | n/a | yes |
| workers\_group\_defaults | Workers group defaults from parent | any | n/a | yes |

## Outputs

| Name | Description |
|------|-------------|
| aws\_auth\_roles | Roles for use in aws-auth ConfigMap |
| node\_groups | Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values |

<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
16 changes: 16 additions & 0 deletions modules/node_groups/locals.tf
@@ -0,0 +1,16 @@
locals {
# Merge defaults and per-group values to make code cleaner
node_groups_expanded = { for k, v in var.node_groups : k => merge(
{
desired_capacity = var.workers_group_defaults["asg_desired_capacity"]
iam_role_arn = var.default_iam_role_arn
instance_type = var.workers_group_defaults["instance_type"]
key_name = var.workers_group_defaults["key_name"]
max_capacity = var.workers_group_defaults["asg_max_size"]
min_capacity = var.workers_group_defaults["asg_min_size"]
subnets = var.workers_group_defaults["subnets"]
},
var.node_groups_defaults,
v,
) if var.create_eks }
}
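
Since Terraform's `merge()` is last-wins, the sketch below (values invented for illustration) shows the precedence this local implies:

```hcl
# Lowest to highest precedence:
#   workers_group_defaults  <  node_groups_defaults  <  node_groups[k]
#
# With, for example:
#   workers_group_defaults = { asg_desired_capacity = 1, instance_type = "m4.large" }
#   node_groups_defaults   = { instance_type = "m5.large" }
#   node_groups            = { a = {}, b = { instance_type = "c5.large" } }
#
# node_groups_expanded resolves (other keys omitted) to:
#   a = { desired_capacity = 1, instance_type = "m5.large" }
#   b = { desired_capacity = 1, instance_type = "c5.large" }
```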
49 changes: 49 additions & 0 deletions modules/node_groups/node_groups.tf
@@ -0,0 +1,49 @@
resource "aws_eks_node_group" "workers" {
for_each = local.node_groups_expanded

node_group_name = join("-", [var.cluster_name, each.key, random_pet.node_groups[each.key].id])

cluster_name = var.cluster_name
node_role_arn = each.value["iam_role_arn"]
subnet_ids = each.value["subnets"]

scaling_config {
desired_size = each.value["desired_capacity"]
max_size = each.value["max_capacity"]
min_size = each.value["min_capacity"]
}

ami_type = lookup(each.value, "ami_type", null)
disk_size = lookup(each.value, "disk_size", null)
instance_types = [each.value["instance_type"]]
release_version = lookup(each.value, "ami_release_version", null)

dynamic "remote_access" {
for_each = each.value["key_name"] != "" ? [{
ec2_ssh_key = each.value["key_name"]
source_security_group_ids = lookup(each.value, "source_security_group_ids", [])
}] : []

content {
ec2_ssh_key = remote_access.value["ec2_ssh_key"]
source_security_group_ids = remote_access.value["source_security_group_ids"]
}
}

version = lookup(each.value, "version", null)

labels = merge(
lookup(var.node_groups_defaults, "k8s_labels", {}),
lookup(var.node_groups[each.key], "k8s_labels", {})
)

tags = merge(
var.tags,
lookup(var.node_groups_defaults, "additional_tags", {}),
lookup(var.node_groups[each.key], "additional_tags", {}),
)

lifecycle {
create_before_destroy = true
}
}
14 changes: 14 additions & 0 deletions modules/node_groups/outputs.tf
@@ -0,0 +1,14 @@
output "node_groups" {
description = "Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values"
value = aws_eks_node_group.workers
}

output "aws_auth_roles" {
description = "Roles for use in aws-auth ConfigMap"
value = [
for k, v in local.node_groups_expanded : {
worker_role_arn = lookup(v, "iam_role_arn", var.default_iam_role_arn)
platform = "linux"
}
]
}
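
Because `node_groups` passes through the raw `aws_eks_node_group` resources, a consumer can index into any attribute of that resource. A hypothetical example (the group key `example` and the output name are assumptions, not part of this PR):

```hcl
# Names of the autoscaling groups EKS created for the "example" node group
output "example_node_group_asg_names" {
  value = [
    for asg in module.eks.node_groups["example"].resources[0].autoscaling_groups :
    asg.name
  ]
}
```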
21 changes: 21 additions & 0 deletions modules/node_groups/random.tf
@@ -0,0 +1,21 @@
resource "random_pet" "node_groups" {
for_each = local.node_groups_expanded

separator = "-"
length = 2

keepers = {
ami_type = lookup(each.value, "ami_type", null)
disk_size = lookup(each.value, "disk_size", null)
instance_type = each.value["instance_type"]
iam_role_arn = each.value["iam_role_arn"]

key_name = each.value["key_name"]

source_security_group_ids = join("|", compact(
lookup(each.value, "source_security_group_ids", [])
))
subnet_ids = join("|", each.value["subnets"])
node_group_name = join("-", [var.cluster_name, each.key])
}
}
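
The keepers mirror the ForceNew arguments of `aws_eks_node_group`, so a sketch of the intended replacement flow (cluster, group and pet names invented):

```hcl
# Changing any keeper (e.g. disk_size 50 -> 100) replaces the random_pet,
# which changes the suffix fed into node_group_name:
#
#   "mycluster-example-curious-otter"   # before
#   "mycluster-example-driven-mayfly"   # after the pet is regenerated
#
# node_group_name is itself ForceNew, and the node group resource sets
# create_before_destroy = true, so the new group is created and joins the
# cluster before the old one is destroyed.
```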
36 changes: 36 additions & 0 deletions modules/node_groups/variables.tf
@@ -0,0 +1,36 @@
variable "create_eks" {
description = "Controls if EKS resources should be created (it affects almost all resources)"
type = bool
default = true
}

variable "cluster_name" {
description = "Name of parent cluster"
type = string
}

variable "default_iam_role_arn" {
description = "ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults`"
type = string
}

variable "workers_group_defaults" {
description = "Workers group defaults from parent"
type = any
}

variable "tags" {
description = "A map of tags to add to all resources"
type = map(string)
}

variable "node_groups_defaults" {
description = "Map of values to be applied to all node groups. See \"`node_groups` and `node_groups_defaults` keys\" section in README.md for more details"
type = any
}

variable "node_groups" {
description = "Map of maps of `eks_node_groups` to create. See \"`node_groups` and `node_groups_defaults` keys\" section in README.md for more details"
type = any
default = {}
}