
azurerm_databricks_workspace - Remove the PATCH (for tags) during Update()#31509

Merged
sreallymatt merged 6 commits intohashicorp:mainfrom
magodo:databricks_update_order
Jan 21, 2026

Conversation


@magodo magodo commented Jan 15, 2026

Community Note

  • Please vote on this PR by adding a 👍 reaction to the original PR to help the community and maintainers prioritize for review
  • Please do not leave comments along the lines of "+1", "me too" or "any updates", they generate extra noise for PR followers and do not help prioritize for review

Description

The PATCH request for the workspace can only update the tags. The difference from updating them via PUT is that the PATCH will also update the tags of the resources managed by this workspace, including the disk encryption set. Most of these resources don't need additional permissions for the tag update, including the disk encryption set, even with CMK enabled.

The exception is when managed_disk_cmk_rotation_to_latest_version_enabled is enabled: the update of the disk encryption set will access the key vault key during the tag update (probably because it needs to find the latest version of the key, for validation/retrieval). This then fails a user operation that tries to update the tags and enable managed_disk_cmk_rotation_to_latest_version_enabled in the same run:

   Error: updating Workspace (Subscription: "****"
   Resource Group Name: "****"
   Workspace Name: "****") Tags: performing Update: unexpected status 400 (400 Bad Request) with error: ApplicationUpdateFail: Failed to update application: '****', because patch resource group failure.

The proper sequence for the above would be:

  • PUT the workspace to update properties, including managed_disk_cmk_rotation_to_latest_version_enabled. This will then expose the managed_disk_identity.
  • Assign a proper data plane role (e.g. "Key Vault Crypto Service Encryption User") or access policy to the returned managed_disk_identity.0.principal_id, which is the identity that will be used to access the CMK when patching the tags.
  • PATCH the workspace's tags

The current implementation of Update() performs PUT then PATCH (if changed), which doesn't give the user a chance to do the role assignment in between. The user has to split the update into two steps.

This PR changes the order to PATCH (if changed) then PUT. This way, assuming managed_disk_cmk_rotation_to_latest_version_enabled is disabled, the PATCH doesn't require additional data plane permissions to update the tags. After Update(), the user is expected to assign the role to the newly populated managed_disk_identity, so that they can continue managing the resources without a problem.

A test case is added (extending the original test case) to simulate this update process.

PR Checklist

  • I have followed the guidelines in our Contributing Documentation.
  • I have checked to ensure there aren't other open Pull Requests for the same update/change.
  • I have checked if my changes close any open issues. If so please include appropriate closing keywords below.
  • I have updated/added Documentation as required written in a helpful and kind way to assist users that may be unfamiliar with the resource / data source.
  • I have used a meaningful PR title to help maintainers and other users understand this change and help prevent duplicate work.
    For example: “resource_name_here - description of change e.g. adding property new_property_name_here”

Changes to existing Resource / Data Source

  • I have added an explanation of what my changes do and why I'd like you to include them (This may be covered by linking to an issue above, but may benefit from additional explanation).
  • I have written new tests for my resource or datasource changes & updated any relevant documentation.
  • I have successfully run tests with my changes locally. If not, please provide details on testing challenges that prevented you running the tests.
  • (For changes that include a state migration only). I have manually tested the migration path between relevant versions of the provider.

Testing

  • My submission includes Test coverage as described in the Contribution Guide and the tests pass. (if this is not possible for any reason, please include details of why you did or could not add test coverage)
```
magodo in 🌐 magodo-desktop in terraform-provider-azurerm on  databricks_update_order via 🐹 v1.25.5 took 34m1s
💢 TF_ACC=1 go test -timeout=60m -v -run='TestAccDatabricksWorkspace_managedDiskCMK' ./internal/services/databricks
=== RUN   TestAccDatabricksWorkspace_managedDiskCMK
=== PAUSE TestAccDatabricksWorkspace_managedDiskCMK
=== RUN   TestAccDatabricksWorkspace_managedDiskCMKRotation
=== PAUSE TestAccDatabricksWorkspace_managedDiskCMKRotation
=== CONT  TestAccDatabricksWorkspace_managedDiskCMK
=== CONT  TestAccDatabricksWorkspace_managedDiskCMKRotation
--- PASS: TestAccDatabricksWorkspace_managedDiskCMK (1274.22s)
--- PASS: TestAccDatabricksWorkspace_managedDiskCMKRotation (2663.31s)
PASS
ok      github.com/hashicorp/terraform-provider-azurerm/internal/services/databricks    2663.336s
```

Change Log

Below please provide what should go into the changelog (if anything) conforming to the Changelog Format documented here.

  • azurerm_resource - support for the thing1 property [GH-00000]

This is a (please select all that apply):

  • Bug Fix
  • New Feature (ie adding a service, resource, or data source)
  • Enhancement
  • Breaking Change

Related Issue(s)

Related to: #25766, #22394

AI Assistance Disclosure

  • AI Assisted - This contribution was made by, or with the assistance of, AI/LLMs

Rollback Plan

If a change needs to be reverted, we will publish an updated version of the provider.

Changes to Security Controls

Are there any changes to security controls (access controls, encryption, logging) in this pull request? If so, explain.

Note

If this PR changes meaningfully during the course of review please update the title and description as required.

```go
		},
		data.ImportStep("custom_parameters.0.public_subnet_network_security_group_association_id", "custom_parameters.0.private_subnet_network_security_group_association_id"),
		{
			Config: r.managedDiskCMKRotationEnabled(data, databricksPrincipalID),
```

is it possible to disable it again in the test? or the role assignment can not be assigned via azureRM provider?

@magodo (Author)

When disabling it again, we can't simply revert everything back while also updating the tags. In fact, reverting back to "disabled" without changing the tags will work. But if there is also a change to the tags, it will fail. The reason is that in this case the additional resources are deleted prior to updating the workspace, including the role assignment that is assigned to the datadiskencryption SP. Then later, when updating the tags for the workspace, the lack of that role causes the same error as in the description of this PR.

@magodo (Author)

I tried swapping the order during update when disabling the CMK in: 7d8487e. However, it turns out Databricks doesn't support disabling CMK once it's enabled: #22394 (comment). Hence I reverted the above commit.


ziyeqf commented Jan 15, 2026

Thanks! @magodo, just one minor question and mostly LGTM

WodansSon previously approved these changes Jan 15, 2026
@WodansSon WodansSon left a comment


Thanks @magodo, I have given this a look and it LGTM! 🚀

@magodo magodo changed the title azurerm_databricks_workspace - Swap the order of PATCH (for tags) and PUT during Update() WIP: azurerm_databricks_workspace - Swap the order of PATCH (for tags) and PUT during Update() Jan 16, 2026

ziyeqf commented Jan 16, 2026

LGTM

@magodo magodo changed the title WIP: azurerm_databricks_workspace - Swap the order of PATCH (for tags) and PUT during Update() azurerm_databricks_workspace - Swap the order of PATCH (for tags) and PUT during Update() Jan 16, 2026

@sreallymatt sreallymatt left a comment


Hi @magodo, looking back in the history as to why this additional PATCH call was added, it looks like it was a workaround to a problem in the RP.

Was there a reason you didn't opt to remove the additional PATCH request instead of flipping the order?

When testing locally I see this behaviour:

  • With v4.57.0 of the provider (i.e. PUT then PATCH), I can update the tags, and those updates are propagated to all managed resources with the exception of the NAT gateway and the NAT Gateway Public IP.
  • Removing the PATCH request, building the provider, and doing the same thing (i.e. create with a tag, then update the tag in another apply), the behaviour is the same: the updates are propagated to all managed resources with the exception of the NAT GW/GW PIP

(edit to clarify: With the above, I was focused on testing the tag propagation behaviour, which was the reason for this additional request in the first place. I did not fully test all combinations with managed_disk_cmk_rotation_to_latest_version_enabled and related properties)

magodo commented Jan 20, 2026

@sreallymatt Great spot! I didn't check whether the API issue reported in Azure/azure-sdk-for-go#14571 has been fixed or not. I've verified the same process as you did, with the managed_disk_cmk_rotation_to_latest_version_enabled related properties included in the update apply. The resulting disk encryption set is created successfully with the updated tags. I can also verify that the managed resources' tags are updated, except the NAT GW and the PIP.

I'll remove the PATCH request per your suggestion and add the tags to the PUT request body in Update().

```
💤 TF_ACC=1 go test -timeout=100m -v -run='TestAccDatabricksWorkspace_managedDiskCMK' ./internal/services/databricks
=== RUN   TestAccDatabricksWorkspace_managedDiskCMK
=== PAUSE TestAccDatabricksWorkspace_managedDiskCMK
=== RUN   TestAccDatabricksWorkspace_managedDiskCMKRotation
=== PAUSE TestAccDatabricksWorkspace_managedDiskCMKRotation
=== CONT  TestAccDatabricksWorkspace_managedDiskCMK
=== CONT  TestAccDatabricksWorkspace_managedDiskCMKRotation
--- PASS: TestAccDatabricksWorkspace_managedDiskCMK (1317.08s)
--- PASS: TestAccDatabricksWorkspace_managedDiskCMKRotation (1440.81s)
PASS
ok      github.com/hashicorp/terraform-provider-azurerm/internal/services/databricks    1440.828s
```

@magodo magodo changed the title azurerm_databricks_workspace - Swap the order of PATCH (for tags) and PUT during Update() azurerm_databricks_workspace - Remove the PATCH (for tags) during Update() Jan 20, 2026
@sreallymatt sreallymatt left a comment


Thanks for double-checking @magodo! Just one final comment to resolve

Comment on lines +1253 to +1255

```diff
 if d.HasChange("tags") {
-	workspaceUpdate := workspaces.WorkspaceUpdate{
-		Tags: tags.Expand(d.Get("tags").(map[string]interface{})),
-	}
+	model.Tags = tags.Expand(d.Get("tags").(map[string]interface{}))
 }
```
@sreallymatt

This can be removed as well given there's an identical block on ln971-973

@magodo (Author)

@sreallymatt Oops, overlooked that.. Removed. Thx!

@sreallymatt sreallymatt left a comment


Thanks @magodo - LGTM 🚀

@sreallymatt sreallymatt merged commit 3468ceb into hashicorp:main Jan 21, 2026
34 checks passed
@github-actions github-actions bot added this to the v4.58.0 milestone Jan 21, 2026
@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 21, 2026
