---
title: Externally Managed cluster infrastructure
authors:
- "@enxebre"
- "@joelspeed"
- "@alexander-demichev"
reviewers:
- "@vincepri"
- "@randomvariable"
- "@CecileRobertMichon"
- "@yastij"
creation-date: 2021-02-03
last-updated: 2021-02-12
status: implementable
see-also:
replaces:
superseded-by:
---

# Externally Managed cluster infrastructure

## Table of Contents

- [Glossary](#glossary)
  - [Managed cluster infrastructure](#managed-cluster-infrastructure)
  - [Externally managed cluster infrastructure](#externally-managed-cluster-infrastructure)
- [Summary](#summary)
- [Motivation](#motivation)
  - [Goals](#goals)
  - [Non-Goals/Future Work](#non-goalsfuture-work)
- [Proposal](#proposal)
  - [User Stories](#user-stories)
  - [Implementation Details/Notes/Constraints](#implementation-detailsnotesconstraints)
  - [Security Model](#security-model)
  - [Risks and Mitigations](#risks-and-mitigations)
  - [Future Work](#future-work)
- [Alternatives](#alternatives)
- [Upgrade Strategy](#upgrade-strategy)
- [Additional Details](#additional-details)
- [Implementation History](#implementation-history)

## Glossary

Refer to the [Cluster API Book Glossary](https://cluster-api.sigs.k8s.io/reference/glossary.html).

### Managed cluster infrastructure

Cluster infrastructure whose lifecycle is managed by a provider InfraCluster CR,
e.g. in AWS:
- Network
  - VPC
  - Subnets
  - Internet gateways
  - NAT gateways
  - Route tables
  - Security groups
  - Load balancers

### Externally managed cluster infrastructure

Cluster infrastructure whose lifecycle is not managed by CAPI but rather by an external entity.

## Summary

This proposal introduces first-class support for "externally managed" infrastructure for CAPI providers.
It clarifies the boundary between CAPI-managed (the existing pattern) and externally managed ("Bring Your Own") cluster infrastructure.

## Motivation

Currently, Cluster API infrastructure providers support an opinionated happy path for creating and managing the cluster infrastructure lifecycle.
The fundamental use case we want to support is out-of-tree controllers or tools that can manage this cluster infrastructure instead.

For example, this could allow users to create clusters using tooling such as Terraform or Kops and then add CAPI Machine infrastructure as a day-2 operation.

This will also ease adoption of CAPI in heterogeneous real-world environments with restricted privileges, where the provider infrastructure for the cluster needs to be managed out of band.

### Goals

- Introduce support for "externally managed" infrastructure consistently across CAPI providers.
- The machine controller must be able to operate and manage machines when the infrastructure is "externally managed".
- Reuse existing InfraCluster CRDs in "externally managed" clusters to minimise differences between the two topologies.

### Non-Goals/Future Work

- Modify existing managed behaviour.
- Automatically mark InfraCluster resources as ready (this will be up to the external management component initially).

## Proposal

A new contract will be defined around an annotation `cluster.x-k8s.io/managed-by: "<name-of-system>"`.
This annotation identifies the system that is providing external management of the infrastructure.

It is expected that all providers will adhere to the contract defined by this proposal.

When this annotation is present on an InfraCluster resource, the InfraCluster controller is expected to ignore the resource and not perform any reconciliation.
Importantly, it must not modify the resource or its status in any way.

Additionally, the external management system must provide all required fields within the spec of the InfraCluster, must adhere to the CAPI provider contract, and must set the InfraCluster status to ready when it is appropriate to do so.

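For illustration, an externally managed InfraCluster might look like the following sketch; the `AWSCluster` kind, API version, and field values are hypothetical examples, not part of the contract:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: AWSCluster          # any provider InfraCluster kind; AWS used as an example
metadata:
  name: my-cluster
  annotations:
    # The presence of this annotation tells the provider controller
    # to skip reconciliation of this resource entirely.
    cluster.x-k8s.io/managed-by: "terraform"
spec:
  # Populated by the external management system, not by the provider controller.
  region: us-east-1
```
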
While the provider will not reconcile or manage the lifecycle of the cluster infrastructure for an "externally managed" InfraCluster, CAPI will still be able to create compute nodes within it.

The machine controller must be able to operate without hard dependencies, regardless of whether the cluster infrastructure is managed or externally managed.

![](https://i.imgur.com/nA61XJt.png)

### User Stories

#### Story 1 - Alternate control plane provisioning with user-managed infrastructure

As a cluster provider I want to use CAPI in my service offering to orchestrate Kubernetes bootstrapping while letting workload cluster operators own their infrastructure lifecycle.

#### Story 2 - Restricted access to cloud provider APIs

As a cluster operator I want to use CAPI to orchestrate Kubernetes bootstrapping while restricting the privileges I need to grant to my cloud provider because of organisational cloud security constraints.

#### Story 3 - Consuming existing cloud infrastructure

As a cluster operator I want to use CAPI to orchestrate Kubernetes bootstrapping while reusing infrastructure that has already been created in the organisation, either by me or by another team.

### Implementation Details/Notes/Constraints

**Managed**

- This is the default and preserves the existing behaviour.

**Externally Managed**

The provider InfraCluster controller will:

- Skip any reconciliation of the resource.
- Not update the resource or its status in any way.

The external management system will:

- Populate all required fields within the InfraCluster spec to allow other CAPI components to continue as normal.
- Set the appropriate status when the infrastructure is ready, as is done by the provider controller today.

#### Provider implementation changes

To enable providers to implement the changes required by this contract, we will provide a new `InfraClusterExternallyManaged` predicate.

This predicate will filter out any InfraCluster resource that has been marked as "externally managed", preventing the controller from reconciling the resource.

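The core of such a predicate is an annotation check. A minimal, self-contained sketch follows; the constant and function names are illustrative, and the real predicate would wrap controller-runtime's `predicate.Funcs` rather than a bare function:

```go
package main

import "fmt"

// Annotation key proposed by this contract; any non-empty value names the
// system providing external management.
const managedByAnnotation = "cluster.x-k8s.io/managed-by"

// isExternallyManaged reports whether an InfraCluster's annotations mark it
// as externally managed; the predicate would drop events for such resources.
func isExternallyManaged(annotations map[string]string) bool {
	_, ok := annotations[managedByAnnotation]
	return ok
}

func main() {
	// No annotation: managed, reconcile as today.
	fmt.Println(isExternallyManaged(nil)) // false
	// Annotation present: externally managed, skip reconciliation.
	fmt.Println(isExternallyManaged(map[string]string{managedByAnnotation: "kops"})) // true
}
```
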
### Security Model

When the infrastructure is externally managed, CAPI needs no additional cloud provider privileges beyond those required to manage machines.

### Risks and Mitigations

#### What happens when a user converts an externally managed InfraCluster to a managed InfraCluster?

Whether the flag for being externally managed is an annotation or a field on the InfraCluster, there is currently no immutability support for CRD fields or annotations within the Kubernetes API.

This means that, once a user has created their externally managed InfraCluster, they could at some point update the annotation/field to make the InfraCluster appear to be managed.

There is no way to predict what would happen in this scenario.
The InfraCluster controller would start attempting to reconcile infrastructure that it did not create, and assumptions it makes may mean it cannot manage this infrastructure.

To prevent this, we will have to implement, in the InfraCluster webhook, a means to prevent users from converting externally managed InfraClusters into managed InfraClusters.

Note, however, that converting from managed to externally managed should cause no issues and should be allowed.
It will be documented as part of the externally managed contract that this is a one-way operation.

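The update validation the webhook would need is small. A sketch, assuming the annotation form of the contract is used (the function name is hypothetical, and a real webhook would operate on the old and new objects of an admission request):

```go
package main

import (
	"errors"
	"fmt"
)

const managedByAnnotation = "cluster.x-k8s.io/managed-by"

// validateManagedByUpdate sketches the proposed one-way gate: adding the
// annotation (managed -> externally managed) is allowed, but removing it
// (externally managed -> managed) is rejected.
func validateManagedByUpdate(oldAnnotations, newAnnotations map[string]string) error {
	_, wasExternal := oldAnnotations[managedByAnnotation]
	_, isExternal := newAnnotations[managedByAnnotation]
	if wasExternal && !isExternal {
		return errors.New("cannot remove cluster.x-k8s.io/managed-by: converting an externally managed InfraCluster to managed is not supported")
	}
	return nil
}

func main() {
	// Managed -> externally managed: allowed.
	fmt.Println(validateManagedByUpdate(nil, map[string]string{managedByAnnotation: "terraform"}) == nil) // true
	// Externally managed -> managed: rejected.
	fmt.Println(validateManagedByUpdate(map[string]string{managedByAnnotation: "terraform"}, nil) != nil) // true
}
```
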
### Future Work

#### Marking InfraCluster ready manually

This proposal assumes that the external infrastructure is managed by some controller which has the ability to set the spec and status of the InfraCluster resource.

In reality, this may not be the case; for example, the infrastructure may have been created by an admin using Terraform.

When using a system such as this, a user can copy the details from the infrastructure into an InfraCluster resource and create it manually.
However, they will not be able to set the InfraCluster to ready, as this requires updating the resource status, which is difficult without a controller.

To allow users to adopt this external management pattern without needing to write their own controllers or tooling, we will provide a longer-term solution that allows a user to indicate that the infrastructure is ready and have the status set appropriately.

The exact mechanism for how this will work is undecided, though the following ideas have been suggested:

- Write a kubectl plugin that allows a user to mark their InfraCluster resources as ready.
- Add a secondary annotation to this contract that causes the provider InfraCluster controller to mark resources as ready.

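For reference, the status update such a plugin would wrap can be sketched as below. The resource kind and name are placeholders, and the example assumes kubectl v1.24+, whose `--subresource` flag allows patching status directly:

```shell
# Hypothetical example: mark an AWSCluster ready by patching its status
# subresource (requires kubectl v1.24+; older clients must PATCH the
# resource's /status endpoint via the API server directly).
kubectl patch awscluster my-cluster \
  --type=merge \
  --subresource=status \
  --patch '{"status":{"ready":true}}'
```
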
## Alternatives

We could introduce an ad-hoc CRD, as discussed in /~https://github.com/kubernetes-sigs/cluster-api/issues/4095.

This would introduce complexity for the CAPI ecosystem with yet another CRD, and it wouldn't scale well across providers, as it would need to contain provider-specific information.

## Upgrade Strategy

Support is introduced by adding a new annotation on the provider InfraCluster.

This makes any transition backward compatible and leaves the current managed behaviour untouched.

The annotation is optional; when it is absent, the InfraCluster defaults to the existing "managed" behaviour.

## Additional Details

## Implementation History

- [ ] MM/DD/YYYY: Proposed idea in an issue or [community meeting]
- [ ] MM/DD/YYYY: Compile a Google Doc following the CAEP template (link here)
- [ ] MM/DD/YYYY: First round of feedback from community
- [ ] MM/DD/YYYY: Present proposal at a [community meeting]
- [ ] MM/DD/YYYY: Open proposal PR

<!-- Links -->
[community meeting]: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY