Description
I had built an Ubuntu 18 VM with an attached Data Disk in order to capture a Managed Image for TeamCity to use. The NIC and both disks were configured to be deleted when the VM is deleted (not really important here, as those settings are not captured in the image). Within my Cloud Profile I configured an Agent Image that deploys resources to a Specific resource group using the Managed Image, and enabled the option to re-use terminated VMs. Everything provisioned nicely.
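For anyone who wants to verify the delete options on a provisioned agent VM, here is a minimal sketch using the Python Azure SDK (azure-identity / azure-mgmt-compute), assuming an SDK/API version new enough to expose deleteOption; the subscription ID, resource group, and VM name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders -- substitute your own subscription, resource group, and VM name.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
vm = client.virtual_machines.get("<resource-group>", "<vm-name>")

# deleteOption controls whether each resource is removed along with the VM.
print("OS disk:", vm.storage_profile.os_disk.delete_option)
for data_disk in vm.storage_profile.data_disks:
    print("Data disk", data_disk.name, ":", data_disk.delete_option)
for nic_ref in vm.network_profile.network_interfaces:
    print("NIC", nic_ref.id, ":", nic_ref.delete_option)
```

Note that on plugin-provisioned agents these values depend on how the plugin builds the VM rather than on the source VM's settings, since deleteOption is not captured in a Managed Image.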
While testing, I updated the Managed Image used in the Agent Image definition and saw that TeamCity recognized that the existing agents no longer matched the new Managed Image, so it set about replacing them. The old VMs were deleted and new VMs were spun up in their place. That was fantastic! But the attached Data Disks were left behind.
Here are the steps to reproduce, with images showing what occurred.
1. Define an Agent Image and spin up a couple of VMs to support load.
2. Update the Managed Image used by the Agent Image, and replace the VMs (either by stopping/starting them or by letting them age out and be replaced on demand). In the image below, I had stopped lin-sm-2. The virtual machine, network interface, and OS disk were deleted, but the Data Disk was left behind.
After both VMs were replaced and new ones had started in their place, this is what remained in the Resource Group.
The next screenshot is a listing of the Disks, showing which ones have been orphaned.
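The same orphans can be found programmatically; here is a minimal sketch with the Python SDK that lists unattached disks in the resource group (subscription ID and resource group are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Disks left behind after VM deletion report diskState == "Unattached".
for disk in client.disks.list_by_resource_group("<resource-group>"):
    if disk.disk_state == "Unattached":
        print(disk.name, disk.disk_size_gb, "GB, created", disk.time_created)
```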
As you can see, I'm starting to accumulate a collection of orphaned disks, and once this grows to 100+ VMs it will become a real problem.
I know that specifying New resource group as the Agent Image deployment target is a workaround, but then our Resource Group list quickly becomes cluttered with 100+ RGs (and avoiding that is exactly why we chose a Specific resource group).
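Until the plugin cleans these up itself, the only stopgap I can see is to periodically delete the unattached disks. Extending the listing sketch above (same placeholder names, and assuming the resource group holds nothing but plugin-managed agent resources):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# WARNING: this deletes every unattached disk in the resource group,
# so it assumes the RG contains only plugin-managed agent resources.
for disk in client.disks.list_by_resource_group("<resource-group>"):
    if disk.disk_state == "Unattached":
        print("Deleting orphaned disk:", disk.name)
        client.disks.begin_delete("<resource-group>", disk.name).result()
```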
Environment
Diagnostic logs
None provided with this issue.