refactor: designate VM instance-specific acceptance tests #1185
Conversation
tallaxes left a comment
Would like to understand a little better what it is that makes certain kinds of tests not applicable to AKSMachineAPI mode, and whether the differences are significant enough to warrant a full split. Some of the invariants captured in these tests are fundamental enough that, if they break in a different mode, it is a reason for concern. (Though maybe this is just about different expectations re internal API calls?) Worried about duplicating the test logic to the extent that it becomes hard to maintain multiple paths; wondering if a bit of generalization might help avoid that.
(Also wondering exactly which mechanism is proposed for running different sets of tests in different modes, though that should not be too complicated.)
(discussed some offline)
Some of the big concerns that prevent easy unification, apart from API interface/fake expectations, are:
A "generalized" alternative would be to have if statements in each test: some just to select a different validation implementation, and some to skip inapplicable tests (e.g., networking labels). This, together with a potential recategorization (especially of the instancetype module), will be reconsidered in the next iteration. It would still be beneficial; I just don't think it is the highest priority given the effort needed to resolve the above. It will also depend on the project's direction on whether we want to maintain multiple provision modes. Expected behavioral differences are also noted in the design docs, and some are more difficult to fit into a generalized/unified model, e.g., which API call phase a quota error should be returned from.
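For illustration, the mode-gating part of such a generalized approach could be factored into a small helper instead of ad-hoc if statements. This is a hypothetical sketch, not the suite's actual API: the mode constants and the `applicable` helper are assumptions for the example.

```go
package main

import "fmt"

// Hypothetical provision mode names; the real project may label these differently.
const (
	ModeVMInstance    = "VMInstance"
	ModeAKSMachineAPI = "AKSMachineAPI"
)

// applicable reports whether a test tagged with the given modes should run
// under the currently active provision mode. A test harness could call this
// at the top of each spec and skip when it returns false.
func applicable(active string, modes ...string) bool {
	for _, m := range modes {
		if m == active {
			return true
		}
	}
	return false
}

func main() {
	active := ModeAKSMachineAPI
	// A VM-instance-only test (e.g., networking-label validation) would be
	// skipped under AKSMachineAPI mode:
	fmt.Println(applicable(active, ModeVMInstance)) // false
	// A fundamental invariant tagged for both modes still runs everywhere:
	fmt.Println(applicable(active, ModeVMInstance, ModeAKSMachineAPI)) // true
}
```

Tagging tests with the modes they apply to keeps the shared invariants in one place while making the per-mode expectations (such as differing fake API call assertions) explicit rather than duplicated.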
It would be to change the options in
tallaxes left a comment
Approving the current testing "split" approach for now. Still concerned that it creates something that is harder to maintain and reason about; would love to see what a unified testing approach - explicitly reflecting the differences in expectations between modes - would look like. (Agree there is some extra work in categorizing / organizing the tests, but this should be manageable.)
This will also be up to the project's direction of whether we want to maintain multiple provision modes.
With the potential need for supporting a more generic Azure (vs AKS) provisioning, multiple provision modes are likely not going away.
Fixes #
Description
The upcoming Machine API integration will introduce a kind of instance with different assumptions than VM instances, which are the only supported type as of today.
Most of the existing acceptance tests are not immediately compatible with this new assumption.
This PR designates those tests as specific to VM instances, so that the Machine API integration changes can add new ones with a clear distinction.
A considered alternative was to refactor the tests to be more generic. However, given that each test has a different degree of incompatibility, this was estimated not to be worth the effort.
How was this change tested?
Does this change impact docs?
Release Note