
Fix kubelet log level #2470

Merged

Conversation

arnaldo2792
Contributor

Signed-off-by: Arnaldo Garcia Rincon agarrcia@amazon.com

Issue number:
N/A

Description of changes:

In b696d6f, the new `kubernetes.log-level` setting was implemented, but in the actual model the field was named `kubelet_log_level`. This renames it to `log_level`, since that's the name used in the templated configuration files and in the documentation.
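
The key the API accepted and the key the templates and docs referred to were therefore out of sync. The side-by-side below is only an illustration of that mismatch (it restates the two apiclient commands that appear elsewhere in this description), not an excerpt from the change itself:

# Old name: accepted by the API, but nothing in the templates consumes it,
# so no -v flag is ever rendered into the kubelet drop-in.
bash-5.1# apiclient set kubernetes.kubelet-log-level=8
# Documented name: after this change it maps to the model field the
# templates read, so the drop-in gets -v 8.
bash-5.1# apiclient set kubernetes.log-level=8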

With the old setting name, if I attempt to set the log level, the drop-in file for the kubelet isn't updated:

bash-5.1# apiclient set kubernetes.kubelet-log-level=8
bash-5.1# cat /etc/systemd/system/kubelet.service.d/exec-start.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet \
    --cloud-provider aws \
    --kubeconfig /etc/kubernetes/kubelet/kubeconfig \
    --config /etc/kubernetes/kubelet/config \
    --container-runtime=remote \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
    --containerd=/run/containerd/containerd.sock \
    --root-dir /var/lib/kubelet \
    --cert-dir /var/lib/kubelet/pki \
    --node-ip ${NODE_IP} \
    --node-labels "${NODE_LABELS}" \
    --register-with-taints "${NODE_TAINTS}" \
    --pod-infra-container-image ${POD_INFRA_CONTAINER_IMAGE}

The `-v <level>` flag is missing; it should appear right after `--register-with-taints`, as shown in the output under "Testing done" below.
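
A quick way to reproduce the gap (my own check, not part of the original report) is to grep the rendered drop-in for the verbosity flag after setting the old key; before this change the grep finds nothing and exits non-zero:

bash-5.1# apiclient set kubernetes.kubelet-log-level=8
bash-5.1# grep -- '-v ' /etc/systemd/system/kubelet.service.d/exec-start.conf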

Testing done:
I started an aws-k8s-1.22 instance and ran the following commands while tailing the journal logs:

# Turns off the logs
bash-5.1# apiclient set kubernetes.log-level=0
# Enables some logs
bash-5.1# apiclient set kubernetes.log-level=3
# Enables all the logs
bash-5.1# apiclient set kubernetes.log-level=9

I didn't see any errors and I confirmed the setting was used; with the previous configuration the flag wasn't rendered at all:

bash-5.1# cat /etc/systemd/system/kubelet.service.d/exec-start.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet \
    --cloud-provider aws \
    --kubeconfig /etc/kubernetes/kubelet/kubeconfig \
    --config /etc/kubernetes/kubelet/config \
    --container-runtime=remote \
    --container-runtime-endpoint=unix:///run/dockershim.sock \
    --containerd=/run/dockershim.sock \
    --network-plugin cni \
    --root-dir /var/lib/kubelet \
    --cert-dir /var/lib/kubelet/pki \
    --node-ip ${NODE_IP} \
    --node-labels "${NODE_LABELS}" \
    --register-with-taints "${NODE_TAINTS}" \
    # Previously this flag wasn't rendered
    -v 9 \
    --pod-infra-container-image ${POD_INFRA_CONTAINER_IMAGE}
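
As an additional sanity check (my suggestion, not part of the testing above), systemd can confirm that the flag reached kubelet's effective command line once the drop-in was reloaded, and the journal can be tailed while flipping the setting:

bash-5.1# systemctl show kubelet -p ExecStart | grep -o -- '-v 9'
bash-5.1# journalctl -u kubelet -f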

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@arnaldo2792 arnaldo2792 requested review from jpculp and yeazelm October 1, 2022 00:49
@arnaldo2792 arnaldo2792 merged commit 3a1f17c into bottlerocket-os:develop Oct 1, 2022
@webern
Contributor

webern commented Oct 10, 2022

Isn't this a breaking change? Was this fixed before the incorrect wording was released?

@stmcginnis
Contributor

Was this fixed before the incorrect wording was released?

Yes, these changes will be in the upcoming 1.10.0 release, so nothing has been released that we need to account for.

@arnaldo2792 arnaldo2792 deleted the fix-kubelet-loglevel branch October 26, 2022 18:43