Add feature azurerm_kubernetes_cluster_trusted_access_role_binding #871
Conversation
Force-pushed d112624 to 5a3af44
Force-pushed 15c97b5 to 676473f
As far as I understand, the binding cannot sync because its source resource reference is still empty while the Workspace fails to create:
- apiVersion: authorization.azure.upbound.io/v1beta1
kind: TrustedAccessRoleBinding
status:
conditions:
- lastTransitionTime: "2024-11-10T22:36:23Z"
message: 'cannot resolve references: mg.Spec.ForProvider.SourceResourceID: referenced
field was empty (referenced resource may not yet be ready)'
reason: ReconcileError
status: "False"
type: Synced
- apiVersion: machinelearningservices.azure.upbound.io/v1beta2
kind: Workspace
status:
conditions:
- lastTransitionTime: "2024-11-10T22:35:42Z"
message: |-
create failed: async create failed: failed to create the resource: ["***"0 creating Workspace (Subscription: "2895a7df-ae9f-41b8-9e78-3ce4926df838"
Resource Group Name: "example-tarb-rg"
Workspace Name: "example-tarb-wspace"): polling after CreateOrUpdate: polling failed: the Azure API returned the following error:
Status: "BadRequest"
Code: ""
Message: "Soft-deleted workspace exists. Please purge or recover it. https://aka.ms/wsoftdelete"
Activity Id: ""
---
API Response:
----[start]----
"***"
"status": "Failed",
"error": "***"
"code": "BadRequest",
"message": "Soft-deleted workspace exists. Please purge or recover it. https://aka.ms/wsoftdelete"
"***"
"***"
-----[end]-----
[]"***"]
reason: ReconcileError
status: "False"
type: Synced
- lastTransitionTime: "2024-11-10T22:35:27Z"
reason: Creating
status: "False"
type: Ready
- lastTransitionTime: "2024-11-10T22:35:42Z"
message: |-
async create failed: failed to create the resource: ["***"0 creating Workspace (Subscription: "2895a7df-ae9f-41b8-9e78-3ce4926df838"
Resource Group Name: "example-tarb-rg"
Workspace Name: "example-tarb-wspace"): polling after CreateOrUpdate: polling failed: the Azure API returned the following error:
Status: "BadRequest"
Code: ""
Message: "Soft-deleted workspace exists. Please purge or recover it. https://aka.ms/wsoftdelete"
Activity Id: ""
---
API Response:
----[start]----
"***"
"status": "Failed",
"error": "***"
"code": "BadRequest",
"message": "Soft-deleted workspace exists. Please purge or recover it. https://aka.ms/wsoftdelete"
"***"
"***"
-----[end]-----
[]"***"]
reason: AsyncCreateFailure
status: "False"
type: LastAsyncOperation
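The `Synced: "False"` condition on the binding is the expected symptom of an unresolved reference: the TrustedAccessRoleBinding was applied before the Workspace became Ready, so `SourceResourceID` was still empty. A minimal sketch of what such a cross-resource reference might look like (the resource names, role string, and selector label are illustrative assumptions, not necessarily the PR's actual example):

```yaml
apiVersion: authorization.azure.upbound.io/v1beta1
kind: TrustedAccessRoleBinding
metadata:
  name: example-tarb            # hypothetical name
spec:
  forProvider:
    roles:
      - Microsoft.MachineLearningServices/workspaces/mlworkload   # assumed role
    # Resolved from the referenced Workspace's Azure ID once it exists; while
    # the Workspace is still creating, this stays empty and the binding
    # reports ReconcileError, exactly as in the status above.
    sourceResourceIdRef:
      name: example-tarb-wspace
    kubernetesClusterIdSelector:
      matchLabels:
        testing.upbound.io/example-name: example-tarb   # assumed label
```

Once the Workspace reaches Ready, the reference resolves on a later reconcile without any manual intervention.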
@mergenci Thanks for the efforts to help. That issue is clear: it happens because the vault has a soft-delete retention setting, which apparently can't be disabled and must be set between 7 and 90 days as per this doc.
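For reference, the retention window is configured on the Vault resource itself. A hedged sketch of the relevant fields (names follow the usual upjet camelCase mapping of the `azurerm_key_vault` arguments and may differ slightly in the generated CRD):

```yaml
apiVersion: keyvault.azure.upbound.io/v1beta1
kind: Vault
metadata:
  name: example-ai-v
spec:
  forProvider:
    # Soft delete cannot be disabled; the retention period must be 7-90 days,
    # so 7 is the shortest window a deleted vault blocks its name for.
    softDeleteRetentionDays: 7
    # With purge protection off, a soft-deleted vault can still be purged,
    # allowing the name to be reused in later test runs.
    purgeProtectionEnabled: false
```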
That one seems to be a missing case:
`case.go:363: command "$***KUBECTL*** wait vault.keyvault.azure.upbound.io/example-ai-v --for=condition=Test --timeout 10s" exceeded 7 sec timeout, context deadline exceeded`
Thank you for your effort @drew0ps, I left two small comments.
Now, to the reason uptest fails: while creating the resources in the example YAML file, two extra resources are created, and during the deletion phase the ResourceGroup cannot be deleted because of them, so uptest fails:
message: |-
async delete failed: failed to delete the resource: [{0 deleting Resource Group "example-tarb-rg": the Resource Group still contains Resources.
Terraform is configured to check for Resources within the Resource Group when deleting the Resource Group - and
raise an error if nested Resources still exist to avoid unintentionally deleting these Resources.
Terraform has detected that the following Resources still exist within the Resource Group:
* `/subscriptions/2895a7df-ae9f-41b8-9e78-3ce4926df838/resourceGroups/example-tarb-rg/providers/microsoft.alertsmanagement/smartDetectorAlertRules/Failure Anomalies - example-ai-tarb`
* `/subscriptions/2895a7df-ae9f-41b8-9e78-3ce4926df838/resourceGroups/example-tarb-rg/providers/microsoft.insights/actiongroups/Application Insights Smart Detection`
Have you seen this in your manual tests? Frankly, I don't have much information about why these resources were created. Maybe we can test this manually (apply the YAML file, create and delete all resources without intervening from the console) and investigate why these two resources are created in your account.
Force-pushed 9d2bc31 to 888aae2
Hi @turkenf, thanks a lot for the response and your suggestions. I've added both of them to the example file. The "resource exists with the same name" issue is present for all resources in the example file until the deletion topic is solved. Would it be possible to force delete the resource group in Azure? In my opinion that would be a good approach, since we could end up in this situation in other cases as well. I'm also not sure whether the resources created by my pipeline keep running indefinitely in Azure. Unfortunately I can't test these specific examples manually: at my workplace I could only test the AKS Trusted Access relation creation for the BackupInstanceKubernetesCluster, since my user is only able to create the specific resources I have to manage.
Hi @drew0ps, in fact, all resources except the resource group in the example YAML file are deleted. Just randomizing the name of the workspace resource as you mentioned here is enough.
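Randomizing the workspace name can be done with uptest's templating. A sketch, assuming the `${Rand.RFC1123Subdomain}` placeholder supported by uptest is available in this repo's example pipeline:

```yaml
apiVersion: machinelearningservices.azure.upbound.io/v1beta2
kind: Workspace
metadata:
  # uptest substitutes the placeholder with a random RFC 1123 subdomain
  # per run, so a rerun cannot collide with a previously soft-deleted
  # workspace of the same name.
  name: example-tarb-wspace-${Rand.RFC1123Subdomain}
```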
I don't know if this is possible, but even if it was, I wouldn't choose it.
I spent some time today cleaning these out of our test account 🙂
I understand and I really appreciate your interest, but I prefer not to proceed without understanding why the extra resources are being created and seeing if we can resolve this situation.
Hi @turkenf, Thanks a lot for the additional insights - randomizing the workspace name makes sense with your explanation.
Sorry about this one, I won't run the pipeline until I figure this out manually. I thought after 6 hours, when the pipeline ends (times out?), the resources are somehow force deleted.
Understood - I'll spend a bit more time testing manually on why this happens and how to prevent it. The screenshot you added already helps a lot in this.
👍
Force-pushed 888aae2 to 3a79099
/test-examples="examples/alertsmanagement/v1beta1/monitorsmartdetectoralertrule.yaml"
Force-pushed 3a79099 to a3d59e1
Signed-off-by: drew0ps <ad.marton@proton.me>
Force-pushed a3d59e1 to c21c95d
/test-examples="examples/authorization/v1beta1/trustedaccessrolebinding.yaml"
Hi @mergenci and @turkenf, thanks a lot for the hints. The problem was the Application Insights companion resources, which Azure creates by default when they are not defined in Crossplane; since Crossplane didn't define them, it couldn't request their deletion either. It's fixed now and the uptest pipeline is green.
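The fix, then, is to declare the companion resources in the example file so Crossplane owns their lifecycle and deletes them with everything else. A hedged sketch based on the resource names in the deletion error above (field names follow the upjet mapping of `azurerm_monitor_action_group` and `azurerm_monitor_smart_detector_alert_rule`; exact kinds, versions, and fields may differ in the generated CRDs):

```yaml
apiVersion: insights.azure.upbound.io/v1beta1
kind: MonitorActionGroup
metadata:
  name: application-insights-smart-detection   # matches the auto-created group's name
spec:
  forProvider:
    shortName: SmartDetect                     # assumed short name
    resourceGroupNameSelector:
      matchLabels:
        testing.upbound.io/example-name: example-tarb-rg   # assumed label
---
apiVersion: alertsmanagement.azure.upbound.io/v1beta1
kind: MonitorSmartDetectorAlertRule
metadata:
  name: failure-anomalies-example-ai-tarb      # matches the auto-created rule's name
spec:
  forProvider:
    detectorType: FailureAnomaliesDetector
    severity: Sev3
    frequency: PT1M
    resourceGroupNameSelector:
      matchLabels:
        testing.upbound.io/example-name: example-tarb-rg   # assumed label
```

Declared this way, the resources are adopted (or created) by Crossplane instead of existing only on the Azure side, so the ResourceGroup deletion no longer finds orphans.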
Thank you for your contribution and for persevering on this one, @drew0ps.
Description of your changes
Adds azurerm_kubernetes_cluster_trusted_access_role_binding to the authorization provider.
Fixes #
I have:
- Run `make reviewable` to ensure this PR is ready for review.
- Added `backport release-x.y` labels to auto-backport this PR if necessary.

Notes
2 notable things about this PR:
How has this code been tested