
Node groups submodule #650

Merged

Conversation


@dpiddockcmp dpiddockcmp commented Dec 27, 2019

PR o'clock

Description

Move node_group resources to a submodule for code isolation. As generally discussed and bounced about in #635.

This also changes the node_groups variable to be a map of maps, and the associated resources now use for_each instead of count. This was more complicated than expected, thanks Terraform. The idea originally came from a comment on a different PR: #644 (comment)
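For readers following along, the shape change can be sketched roughly like this (an illustrative sketch only; the variable and resource names here are assumed, not copied from the PR):

```hcl
# Illustrative only: node groups are now a map of maps, keyed by name.
variable "node_groups" {
  description = "Map of maps of EKS managed node group definitions, keyed by group name"
  type        = any
  default     = {}
}

resource "aws_eks_node_group" "workers" {
  # for_each keys resources by group name, so adding or removing one
  # group no longer shifts the indices of the others (as count did).
  for_each        = var.node_groups
  cluster_name    = var.cluster_name
  node_group_name = each.key
  node_role_arn   = var.default_iam_role_arn
  subnet_ids      = each.value["subnets"]

  scaling_config {
    desired_size = each.value["desired_capacity"]
    max_size     = each.value["max_capacity"]
    min_size     = each.value["min_capacity"]
  }
}
```

Keying resources by map key is what makes the migration non-trivial, since count-based resources are addressed by position in state.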

I think this is feature-compatible with the current setup. There are still some improvements to do.

Checklist

WIP

Related

Fixes: #633

@dpiddockcmp dpiddockcmp changed the title Node groups submodule POC WIP: Node groups submodule POC Dec 27, 2019
@max-rocket-internet
Contributor

Nice! I think we should create a new release when this is done 🙂

@dpiddockcmp dpiddockcmp changed the title WIP: Node groups submodule POC Node groups submodule POC Jan 2, 2020
@dpiddockcmp
Contributor Author

I think it's ready for others to test and review.

Solved the race condition with kubernetes_config_map with a little use of a null_data_source.
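The trick can be sketched like this (a hedged sketch; the resource and variable names here are assumed, not taken from the PR):

```hcl
# The data source re-exports cluster_name unchanged, but its inputs also
# reference the aws-auth ConfigMap, so anything reading the output waits
# for the ConfigMap to be created first.
data "null_data_source" "node_groups" {
  inputs = {
    cluster_name = var.cluster_name
    # This value is never consumed; referencing the ConfigMap's id only
    # adds a dependency edge.
    aws_auth = kubernetes_config_map.aws_auth.id
  }
}

# Node groups then read cluster_name through the data source:
#   cluster_name = data.null_data_source.node_groups.outputs["cluster_name"]
```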

For people using node_groups from the master branch, it would be a good idea to target-apply the node_groups submodule first, before a full apply that would otherwise delete the original node_group resources:

terraform apply -target=module.eks.module.node_groups
terraform apply

Contributor

@max-rocket-internet max-rocket-internet left a comment


I tested it in my setup without MNGs, LGTM!

@max-rocket-internet
Contributor

Question: A user can still use the submodule completely separately, right? For example, to allow mixing versions of the main and MNG modules, like using the basic example but adding this:

module "mng1" {
  source                 = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git//modules/node_groups?ref=v999"
  create_eks             = true
  cluster_name           = module.eks.cluster_id
  cluster_version        = module.eks.cluster_version
  default_iam_role_arn   = module.eks.worker_iam_role_arn
  node_groups = {
    example = {
      desired_capacity = 1
      max_capacity     = 10
      min_capacity     = 1
      instance_type    = "m5.large"
    }
  }
}

Second question: What do you think about forcing this approach? i.e. no direct connection between the core and MNG modules, and no variable.node_groups* variables in the core module.

Member

@barryib barryib left a comment


Thanks @dpiddockcmp for your awesome work. I just have some questions; otherwise, it looks good to me.

modules/node_groups/locals.tf (outdated)
modules/node_groups/locals.tf (outdated)
modules/node_groups/node_groups.tf (outdated)
modules/node_groups/variables.tf (outdated)
node_groups.tf
variables.tf (outdated)
@dpiddockcmp
Contributor Author

Question: A user can still use the submodule completely separately, right? For example, to allow mixing versions of the main and MNG modules, like using the basic example but adding this:

They could, but that might cause fun issues if the interface between the two changes. My primary aim was to have the module as self-contained code for readability, rather than for public use.

All variables need to be set to something, including keys that "fall back" to workers_group_defaults. So you'd end up nearer to:

module "mng1" {
  source                 = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git//modules/node_groups?ref=v999"
  create_eks             = true
  cluster_name           = module.eks.cluster_id
  cluster_version        = module.eks.cluster_version
  default_iam_role_arn   = module.eks.worker_iam_role_arn
  tags                   = {}
  node_groups_defaults   = {}
  workers_group_defaults = {}

  node_groups = {
    example = {
      subnets          = module.vpc.private_subnets
      desired_capacity = 1
      max_capacity     = 10
      min_capacity     = 1
      instance_type    = "m5.large"
      key_name         = ""
    }
  }
}

All of the shared node_groups keys could instead be put in node_groups_defaults when creating more than one group.
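As a hypothetical illustration of that (the values here are invented, not from the PR), the shared keys would move into node_groups_defaults and each group would keep only what differs:

```hcl
# Keys common to every group live in node_groups_defaults.
node_groups_defaults = {
  subnets       = module.vpc.private_subnets
  instance_type = "m5.large"
  key_name      = ""
}

# Each group then only declares its own scaling settings.
node_groups = {
  group-a = {
    desired_capacity = 1
    max_capacity     = 10
    min_capacity     = 1
  }
  group-b = {
    desired_capacity = 2
    max_capacity     = 5
    min_capacity     = 2
  }
}
```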

Second question: What do you think about forcing this approach? i.e. no direct connection between the core and MNG modules, and no variable.node_groups* variables in the core module.

It's an option, but we'd have to modify the root module for its interaction with the aws-auth ConfigMap: get any additional worker roles into it, and document how to deal with the race condition when first creating the cluster. Possibly expose an output from the kubernetes_config_map resource for dependency ordering? Or have a null_data_source that depends on aws-auth be the source for the cluster_name output? There's potential for a dependency cycle if users aren't careful.
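The null_data_source idea mentioned above could look roughly like this in the root module (a sketch under assumed resource names, not a committed design):

```hcl
data "null_data_source" "cluster" {
  inputs = {
    cluster_name = aws_eks_cluster.this.name
    # Unused value; referencing the ConfigMap only forces ordering.
    aws_auth = kubernetes_config_map.aws_auth.id
  }
}

output "cluster_name" {
  # An external MNG module consuming this output would implicitly wait
  # for aws-auth; the dependency-cycle risk appears if that module also
  # feeds its worker roles back into the ConfigMap.
  value = data.null_data_source.cluster.outputs["cluster_name"]
}
```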

@dpiddockcmp dpiddockcmp force-pushed the node-groups branch 2 times, most recently from c3e91a4 to f9cbb1b Compare January 4, 2020 19:17
modules/node_groups/node_groups.tf (outdated)
modules/node_groups/README.md
aws_auth.tf
Member

@barryib barryib left a comment


modules/node_groups/node_groups.tf
modules/node_groups/node_groups.tf (outdated)
@barryib
Member

barryib commented Jan 7, 2020

@dpiddockcmp please rebase.

@barryib barryib changed the title Node groups submodule POC Node groups submodule Jan 7, 2020
@max-rocket-internet
Contributor

OK, are we good to merge?

node_groups.tf (outdated)
node_groups.tf
This was referenced Jan 8, 2020
@max-rocket-internet
Contributor

I'll merge this now, run some tests, and then create a new release.

@max-rocket-internet max-rocket-internet merged commit 11147e9 into terraform-aws-modules:master Jan 9, 2020
barryib added a commit that referenced this pull request Jan 15, 2020
* Don't fail on destroy, when provider resource was removed

* Update Changelog

* Node groups submodule (#650)

* WIP Move node_groups to a submodule

* Split the old node_groups file up

* Start moving locals

* Simplify IAM creation logic

* depends_on from the TF docs

* Wire in the variables

* Call module from parent

* Allow to customize the role name. As per workers

* aws_auth ConfigMap for node_groups

* Get the managed_node_groups example to plan

* Get the basic example to plan too

* create_eks = false works

"The true and false result expressions must have consistent types. The
given expressions are object and object, respectively."
Well, that's useful. But apparently set(string) and set() are ok. So
everything else is more complicated. Thanks.

* Update Changelog

* Update README

* Wire in node_groups_defaults

* Remove node_groups from workers_defaults_defaults

* Synchronize random and node_group defaults

* Error: "name_prefix" cannot be longer than 32

* Update READMEs again

* Fix double destroy

Was producing index errors when running destroy on an empty state.

* Remove duplicate iam_role in node_group

I think this logic works. Needs some testing with an externally created
role.

* Fix index fail if node group manually deleted

* Keep aws_auth template in top module

Downside: count causes issues as usual: we can't use distinct() in the
child module, so there's a template render for every node_group even if
only one role is really in use. Hopefully this is just output noise
rather than a technical issue.

* Hack to have node_groups depend on aws_auth etc

The AWS Node Groups create or edit the aws-auth ConfigMap so that nodes
can join the cluster. This breaks the kubernetes resource which cannot
do a force create. Remove the race condition with explicit depend.

Can't pull the IAM role out of the node_group any more.

* Pull variables via the random_pet to cut logic

No point having the same logic in two different places

* Pass all ForceNew variables through the pet

* Do a deep merge of NG labels and tags

* Update README.. again

* Additional managed node outputs #644

Add change from @TBeijin from PR #644

* Remove unused local

* Use more for_each

* Remove the change when create_eks = false

* Make documentation less confusing

* node_group version user configurable

* Pass through raw output from aws_eks_node_groups

* Merge workers defaults in the locals

This simplifies the random_pet and aws_eks_node_group logic, which was
causing much consternation on the PR.

* Fix typo

Co-authored-by: Max Williams <max.williams@deliveryhero.com>

* Update Changelog

* Add public access endpoint CIDRs option (terraform-aws-eks#647) (#673)

* Add public access endpoint CIDRs option (terraform-aws-eks#647)

* Update required provider version to 2.44.0

* Fix formatting in docs

* Re-generate docs with terraform-docs 0.7.0 and bump pre-commit-terraform version (#668)

* re-generate docs with terraform-docs 0.7.0

* bump pre-commit-terraform version

* Release 8.0.0 (#662)

* Release 8.0.0

* Update changelog

* remove 'defauls' node group

* Make curl silent

* Update Changelog

Co-authored-by: Daniel Piddock <33028589+dpiddockcmp@users.noreply.github.com>
Co-authored-by: Max Williams <max.williams@deliveryhero.com>
Co-authored-by: Siddarth Prakash <1428486+sidprak@users.noreply.github.com>
Co-authored-by: Thierno IB. BARRY <ibrahima.br@gmail.com>
@dpiddockcmp dpiddockcmp deleted the node-groups branch March 10, 2020 17:08
@barryib barryib mentioned this pull request May 6, 2020
@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 19, 2022
Development

Successfully merging this pull request may close these issues.

Multiple node groups break if they don't all specify same keys
6 participants