refactor: designate VM instance-specific acceptance tests#1185

Merged
comtalyst merged 10 commits into main from comtalyst/designate-vm-acceptance-tests
Dec 23, 2025

Conversation

@comtalyst
Collaborator

Fixes #

Description

Upcoming Machine API integration will introduce a kind of instance with different assumptions than VM instances, which are currently the only supported type.
Most of the existing acceptance tests are not immediately compatible with this new assumption.
This PR designates those tests as specific to VM instances, so that the Machine API integration changes can add new ones with a clear distinction.

An alternative considered was refactoring the tests to be more generic. However, given that each test has a different degree of incompatibility, this was judged not to be worth the effort.

How was this change tested?

Does this change impact docs?

  • Yes, PR includes docs updates
  • Yes, issue opened: #
  • No

Release Note


@comtalyst force-pushed the comtalyst/vm-instance-renames branch 2 times, most recently from 03cec38 to 6080df2 on September 30, 2025 23:07
@comtalyst force-pushed the comtalyst/designate-vm-acceptance-tests branch from 82a0292 to b6184df on October 1, 2025 00:19
Base automatically changed from comtalyst/vm-instance-renames to main on October 18, 2025 00:05
@comtalyst force-pushed the comtalyst/designate-vm-acceptance-tests branch from b6184df to edd7054 on October 24, 2025 21:12
Collaborator

@tallaxes tallaxes left a comment


Would like to understand a little better what is it that makes certain kinds of tests not applicable to AKSMachineAPI mode, and whether the differences are significant enough to warrant full split. Some of the invariants captured in these tests are fundamental enough that, if they break in a different mode, it is a reason for concern. (Though maybe this is just about different expectations re internal API calls?) Worried about duplicating the test logic to the extent that makes it hard to maintain multiple paths; wondering if a bit of generalization might help avoid that.

(Also wondering about exactly which mechanism is proposed for running different sets of tests in different modes, though that should not be too complicated.)

@comtalyst
Collaborator Author

(discussed some offline)

> Would like to understand a little better what is it that makes certain kinds of tests not applicable to AKSMachineAPI mode, and whether the differences are significant enough to warrant full split. Some of the invariants captured in these tests are fundamental enough that, if they break in a different mode, it is a reason for concern. (Though maybe this is just about different expectations re internal API calls?)

Some of the big concerns that prevent easy unification, apart from API interface/fake expectations, are:

  • The expectation/responsibility of managing NIC-related configuration in general
  • The expectation/responsibility of managing CustomData-level configuration (e.g., networking labels)
    • Without this in machine mode, most tests would lose their ties to the instancetype module
  • Running the same set of test code with different options: this needs some organization

A "generalized" alternative would be to have if statements in each test: some just to have different validation implementation, and some to skip inapplicable tests (e.g., networking labels). This, together with potential recategorization (especially instancetype module) will be reconsidered in the next iteration. It would still be beneficial, just don't think it is the most prioritized given the effort to resolve the above.

This will also be up to the project's direction of whether we want to maintain multiple provision modes.

Expected behavioral differences are also noted in the design docs. Some are also more difficult to fit into a generalized/unified model, e.g., the API call phase at which a quota error is returned.

> Also wondering about exactly which mechanism is proposed for running different sets of tests in different modes, though that should not be too complicated.

It would be to change the options in BeforeEach for a different mode.
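A minimal sketch of that mechanism, assuming a Ginkgo-style suite where shared test options are reset in a per-suite setup hook (the `Options` struct and field names here are hypothetical stand-ins, and plain functions stand in for Ginkgo's `BeforeEach`):

```go
package main

import "fmt"

// Options is a hypothetical stand-in for the suite's shared test options;
// the real suite would populate something similar in a Ginkgo BeforeEach.
type Options struct {
	ProvisionMode string
}

var opts Options

// beforeEach plays the role of Ginkgo's BeforeEach: each mode's suite resets
// the shared options, so the same test bodies run against a different mode.
func beforeEach(mode string) {
	opts = Options{ProvisionMode: mode}
}

func main() {
	for _, mode := range []string{"vm", "aks-machine-api"} {
		beforeEach(mode)
		fmt.Println("running suite with provision mode:", opts.ProvisionMode)
	}
}
```

The idea is that mode selection lives in one setup hook per suite rather than being threaded through every individual test, which is why the mechanism itself "should not be too complicated."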

Collaborator

@tallaxes tallaxes left a comment


Approving the current testing "split" approach for now. Still concerned that it creates something that is harder to maintain and reason about; would love to see what a unified testing approach - explicitly reflecting the differences in expectations between modes - would look like. (Agree there is some extra work in categorizing / organizing the tests, but this should be manageable.)

> This will also be up to the project's direction of whether we want to maintain multiple provision modes.

With the potential need for supporting a more generic Azure (vs AKS) provisioning, multiple provision modes are likely not going away.

@comtalyst comtalyst merged commit 3c5eeb4 into main Dec 23, 2025
16 checks passed
@comtalyst comtalyst deleted the comtalyst/designate-vm-acceptance-tests branch December 23, 2025 19:57