propagate nodeSelector to pod spec #69
Conversation
Force-pushed from e5d9448 to f4dbff6
```go
containers := generateContainersDef(cluster)
nodeSelector := cluster.Spec.NodeSelector
if cluster.Spec.Affinity != nil {
	nodeSelector = nil
```
I guess you could have both Affinity and a NodeSelector configured at the same time (see the note at the end of this section).
I might have missed something: is there a reason for blocking a user from fine-tuning placement by resetting nodeSelector here?
This is just to align with line 67 in https://github.com/valkey-io/valkey-operator/blob/main/api/v1alpha1/valkeycluster_types.go. I'm not sure why we wanted the override in the first place TBH, but that’s probably a discussion for another time.
I think we should remove this logic. A user may specify a nodeSelector, and a podAffinity / podAntiAffinity and not a nodeAffinity. If a user has declared a config we should allow it to propagate and not remove on their behalf.
Force-pushed from f4dbff6 to 93af199
@jdheyburn Could you kindly review this? Thanks!
jdheyburn left a comment
Thanks for the contribution! I would rather we did not override the nodeSelector when affinity is set.
If we're worried about conflicts, we can address them in a future release.
Does that sound ok?
```go
d := createClusterDeployment(cluster)

assert.Nil(t, d.Spec.Template.Spec.NodeSelector, "node selector should be nil when affinity set")
}
```
This test can be removed once the previous comment is addressed.
Signed-off-by: yang.qiu <yang.qiu@reddit.com>
Force-pushed from dd42d22 to 19a9c1c
Signed-off-by: yang.qiu <yang.qiu@reddit.com>
Force-pushed from 19a9c1c to f08577f
Issue
Testing in an EKS cluster with two values for the label `topology.kubernetes.io/zone`:

- Without a `nodeSelector`, applying `config/samples/v1alpha1_valkeycluster.yaml` placed valkey pods on nodes with labels `topology.kubernetes.io/zone=us-east-1b` and `topology.kubernetes.io/zone=us-east-1c`.
- After adding `nodeSelector: topology.kubernetes.io/zone: us-east-1b` to the manifest, the nodeSelector was successfully applied and valkey pods were only placed on nodes with label `topology.kubernetes.io/zone=us-east-1b`.
- Changing the node selector to `topology.kubernetes.io/zone: us-east-1a` left the pods pending indefinitely, because no node carries this label.
- Multiple selectors: adding `nodeSelector: topology.kubernetes.io/zone: us-east-1b, kubernetes.io/hostname: ip-10-9-58-63.ec2.internal` to the manifest put all valkey pods on the same node in `us-east-1b`.
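For reference, the `nodeSelector` addition described above might look like the following fragment of the sample manifest. This is a sketch: the `apiVersion` group and metadata name are assumptions, not copied from `config/samples/v1alpha1_valkeycluster.yaml` itself; only the `spec.nodeSelector` keys come from the testing notes.

```yaml
# apiVersion/kind/metadata are assumed placeholders; check the sample file.
apiVersion: example.io/v1alpha1
kind: ValkeyCluster
metadata:
  name: valkeycluster-sample
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1b
    # Adding a second selector ANDs the constraints, pinning pods further,
    # e.g. to a single node:
    # kubernetes.io/hostname: ip-10-9-58-63.ec2.internal
```

Since all entries in `nodeSelector` must match a node's labels simultaneously, a selector naming a nonexistent zone (e.g. `us-east-1a` above) leaves pods Pending with a `FailedScheduling` event rather than failing the apply.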