[feat] Roll ValkeyNodes on changes to ValkeyCluster.Spec.Config #164
Conversation
Signed-off-by: Joseph Heyburn <jdheyburn@gmail.com>
    }
    configHash := func() string {
        cm := &corev1.ConfigMap{}
        if err := r.Get(ctx, client.ObjectKey{Name: GetServerConfigMapName(cluster.Name), Namespace: cluster.Namespace}, cm); err != nil {
upsertConfigMap above could also return newServerConfigHash using (string, error) to avoid this Get, and then we can't hit transient errors here.
But if this is a temporary change and we don't want to change config.go, I'm OK with the current impl.
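A minimal sketch of the suggested refactor: `upsertConfigMap` returns the new server config hash alongside the error, so the reconciler never needs a follow-up `Get` (which can fail transiently and yield an empty hash). The `store` map stands in for the API server here; in the operator this would be the controller-runtime client, and the exact signature is a hypothetical illustration, not the operator's actual code.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"fmt"
)

// store is a stand-in for the API server used by this sketch.
var store = map[string]map[string]string{}

// upsertConfigMap writes the ConfigMap data and returns the hash of what it
// wrote, so the caller doesn't have to re-read the object to compute it.
func upsertConfigMap(name string, data map[string]string) (string, error) {
	if name == "" {
		return "", errors.New("empty ConfigMap name")
	}
	store[name] = data
	// fmt.Sprint on a map is key-sorted since Go 1.12, so this is deterministic.
	h := sha256.Sum256([]byte(fmt.Sprint(data)))
	return hex.EncodeToString(h[:]), nil
}

func main() {
	hash, err := upsertConfigMap("valkey-server-config", map[string]string{"valkey.conf": "maxmemory 1gb"})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(hash)) // hex-encoded SHA-256 is 64 characters
}
```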
This also might be the problem in the failing test run.
There seems to be an extra pod roll in "should deploy ValkeyCluster with metrics exporter sidecar by default",
and that could have been due to a timing issue between writing the config and Get-ing the config hash, so we might get an "" here... but it's hard to tell from the logs, really.
@bjosv I tried to replicate the failing e2e locally, but it failed for another, unrelated test. I re-ran the e2e tests on this PR and they passed. I think there might be some flaky e2e tests, which could warrant an investigation. Let me know your thoughts - I am hoping to prioritise this PR so that we have rolling pods for 0.1, and iterate on it after the fact.
Ok! We can keep an eye on it; it might be something only seen on GitHub at busy hours.
        hashAnnotationKey: aclSecret.Annotations[hashAnnotationKey],
    }
    if node.Spec.ServerConfigHash != "" {
        annotations[configHashKey] = node.Spec.ServerConfigHash
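For context, the logic in the diff above can be sketched as a small helper: the ACL hash is always copied, while the config hash is only set when `ServerConfigHash` is non-empty. The `nodeAnnotations` function is a hypothetical illustration of the behaviour, not the operator's actual code.

```go
package main

import "fmt"

const (
	hashAnnotationKey = "valkey.io/internal-acl-hash"
	configHashKey     = "valkey.io/config-hash"
)

// nodeAnnotations builds the annotation map for a node: the ACL hash is
// always present, and the config hash is only added when it has been set,
// so nodes created before the field existed are left unchanged.
func nodeAnnotations(aclHash, serverConfigHash string) map[string]string {
	annotations := map[string]string{
		hashAnnotationKey: aclHash,
	}
	if serverConfigHash != "" {
		annotations[configHashKey] = serverConfigHash
	}
	return annotations
}

func main() {
	fmt.Println(len(nodeAnnotations("abc", "")))    // 1: only the ACL hash
	fmt.Println(len(nodeAnnotations("abc", "def"))) // 2: ACL hash and config hash
}
```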
Just wanted to highlight that we have a bit of mixed naming for these keys:

    hashAnnotationKey = "valkey.io/internal-acl-hash"
    configHashKey     = "valkey.io/config-hash"
    scriptsHashKey    = "valkey.io/script-hash"

Something to handle in another PR, maybe.
Agree. I anticipate config-hash and script-hash will go away once we refactor the configs. We can then decide on the naming of internal-acl-hash going forward.
@bjosv Just checking: despite your comments, does this PR still get your stamp of approval?
bjosv left a comment:
Looks good; I was only hesitating due to the failing test.
We'll investigate the flaky tests.
This PR closes #50
Summary
Makes ValkeyNodes roll sequentially when ValkeyCluster.Spec.Config is updated.
This avoids the need for the user to manually roll pods to pick up config changes.
Features / Behaviour Changes
Nodes now pick up config changes made on ValkeyCluster.Spec.Config.
Implementation
Added a `ServerConfigHash` field to `ValkeyNode.Spec`. When the valkey cluster controller updates this field, it causes a generation bump and triggers the underlying pod to be rolled.

Note that I fully intend this to be a temporary solution. When we refactor configs as part of #141, this can be removed. Given we are in alpha, we can continue to make these kinds of changes.
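The mechanism relies on the hash changing whenever `ValkeyCluster.Spec.Config` changes. A minimal sketch of such a hash over ConfigMap-style data, hedged as an illustration rather than the operator's exact implementation (`configHash` and the separator scheme are assumptions):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// configHash returns a deterministic hash of ConfigMap-style data. Storing
// this value in ValkeyNode.Spec.ServerConfigHash means any config change
// produces a new hash, bumping the node's generation and rolling the pod.
func configHash(data map[string]string) string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sort for determinism
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0}) // separator so "ab"+"c" != "a"+"bc"
		h.Write([]byte(data[k]))
		h.Write([]byte{0})
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	a := configHash(map[string]string{"valkey.conf": "maxmemory 1gb"})
	b := configHash(map[string]string{"valkey.conf": "maxmemory 2gb"})
	fmt.Println(a != b) // a config change yields a different hash
	fmt.Println(a == configHash(map[string]string{"valkey.conf": "maxmemory 1gb"})) // same input, same hash
}
```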
Limitations
N/A
Testing
I ran a local test and saw the change trickle down.
As part of #141 we should introduce some e2e tests for configs, but we'll revisit that after it's been implemented.
Checklist
Before submitting the PR make sure the following are checked:
`pre-commit run --all-files` (or hooks on commit)