Prometheus is not Monitoring kube-proxy #1603

Closed
malhomaid opened this issue Jan 28, 2022 · 5 comments · Fixed by #1630
@malhomaid

Hello Everyone,

I'm using kube-prometheus, and I'm having an issue monitoring kube-proxy (same as the shared URL). I made kube-proxy listen on 0.0.0.0, then created and applied the manifest below (generated after adding kubernetesControlPlane+: {kubeProxy: true} to the jsonnet template):

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    app.kubernetes.io/name: kube-prometheus
    app.kubernetes.io/part-of: kube-prometheus
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: monitoring
spec:
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - kube-system
  podMetricsEndpoints:
  - honorLabels: true
    relabelings:
    - action: replace
      regex: (.*)
      replacement: $1
      sourceLabels:
      - __meta_kubernetes_pod_node_name
      targetLabel: instance
    targetPort: 10249
  selector:
    matchLabels:
      k8s-app: kube-proxy
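For context, "listen on 0.0.0.0" above means setting metricsBindAddress in the kube-proxy configuration. On kubeadm-style clusters this typically lives in the kube-proxy ConfigMap; the sketch below shows the relevant part (the exact layout varies by distribution, and OKE may differ):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    # The default is 127.0.0.1:10249, which is unreachable from Prometheus pods.
    metricsBindAddress: 0.0.0.0:10249
```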

Environment

  • Prometheus Operator version: v0.53.1
  • Kubernetes version information: v1.21.5
  • Kubernetes cluster kind: OKE
  • Prometheus discovered labels (from the UI Service Discovery page); the target's labels are dropped:
__address__="10.0.43.27"
__meta_kubernetes_namespace="kube-system"
__meta_kubernetes_pod_annotation_checksum_config="ecd1e3788c2727c6"
__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_restartedAt="2022-01-28T01:34:10+03:00"
__meta_kubernetes_pod_annotationpresent_checksum_config="true"
__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_restartedAt="true"
__meta_kubernetes_pod_container_init="false"
__meta_kubernetes_pod_container_name="kube-proxy"
__meta_kubernetes_pod_controller_kind="DaemonSet"
__meta_kubernetes_pod_controller_name="kube-proxy"
__meta_kubernetes_pod_host_ip="10.0.43.27"
__meta_kubernetes_pod_ip="10.0.43.27"
__meta_kubernetes_pod_label_controller_revision_hash="7c6c5bf7dc"
__meta_kubernetes_pod_label_k8s_app="kube-proxy"
__meta_kubernetes_pod_label_pod_template_generation="10"
__meta_kubernetes_pod_labelpresent_controller_revision_hash="true"
__meta_kubernetes_pod_labelpresent_k8s_app="true"
__meta_kubernetes_pod_labelpresent_pod_template_generation="true"
__meta_kubernetes_pod_name="kube-proxy-xncnb"
__meta_kubernetes_pod_node_name="10.0.43.27"
__meta_kubernetes_pod_phase="Running"
__meta_kubernetes_pod_ready="true"
__meta_kubernetes_pod_uid="83e73d69-f50a-42c4-a7ef-287cbf0f339a"
__metrics_path__="/metrics"
__scheme__="http"
__scrape_interval__="30s"
__scrape_timeout__="10s"
job="podMonitor/monitoring/kube-proxy/0"

I still can't find any active kube-proxy target in Prometheus (screenshot attached).

I tried to hit the kube-proxy metrics endpoint from the Prometheus pod with wget -qO - http://10.0.45.126:10249/metrics, and it works fine!

@mashail

mashail commented Jan 28, 2022

@mhomaid1 make sure the PodMonitor shows up in the Prometheus targets. Maybe the PodMonitor is not seen by the Prometheus server.

@malhomaid
Author

@mashail The PodMonitor is in the same namespace (monitoring). I can't see it under Targets, but I can see it under Service Discovery with 0 active targets: podMonitor/monitoring/kube-proxy/0 (0 / 27 active targets).

It's weird that the target labels for the PodMonitor are dropped; it doesn't seem like an issue with the relabelings.

@philipgough self-assigned this Feb 4, 2022
@HubbeKing

I ran into this same issue on my bare-metal kubeadm cluster. No clue what's causing it, but adding a containerPort definition to the kube-proxy DaemonSet pod spec fixed it for me. Check your kube-proxy pods and see if they have port definitions.
Snippet from kubectl -n kube-system get daemonset kube-proxy -o yaml:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
spec:
  template:
    spec:
      containers:
      - image: k8s.gcr.io/kube-proxy:v1.23.3
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        ports:
        - containerPort: 10249
          hostPort: 10249
          name: metrics
          protocol: TCP
...
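A quick way to check whether your own kube-proxy DaemonSet declares any ports is a jsonpath query like the one below (a sketch; empty output means no ports are defined, i.e. the situation that makes the PodMonitor's targetPort match nothing):

```shell
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].ports}'
```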

@DevFontes

That solved the problem, but the question is: how do we automate this?
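One way to automate HubbeKing's fix without hand-editing the manifest is a one-shot JSON patch (a sketch, not tested against a live cluster; adjust the container index if your DaemonSet already declares ports). Note that cluster upgrades that re-render the kube-proxy DaemonSet may revert it:

```shell
kubectl -n kube-system patch daemonset kube-proxy --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/ports",
   "value": [{"containerPort": 10249, "hostPort": 10249, "name": "metrics", "protocol": "TCP"}]}
]'
```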

@devopsuser30k

Thank you @HubbeKing, that helped me.

Here are my two cents, without editing the DaemonSet config:

- job_name: kube-proxy
  honor_labels: true
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - action: keep
    source_labels:
    - __meta_kubernetes_namespace
    - __meta_kubernetes_pod_name
    separator: '/'
    regex: 'kube-system/kube-proxy.+'
  - source_labels:
    - __address__
    action: replace
    target_label: __address__
    regex: (.+?)(\:\d+)?
    replacement: $1:10249
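To unpack what those two relabel rules do, here is a plain-shell approximation with illustrative label values (not Prometheus itself; the pod name and address are examples):

```shell
# Rule 1: keep only pods whose "namespace/pod_name", joined with '/',
# matches kube-system/kube-proxy.+
joined="kube-system/kube-proxy-xncnb"
if echo "$joined" | grep -Eq '^kube-system/kube-proxy.+$'; then keep=true; else keep=false; fi

# Rule 2: rewrite __address__ so the scrape always targets port 10249,
# whether or not the discovered address already carried a port.
addr="10.0.43.27"
rewritten="$(echo "$addr" | sed -E 's/^([^:]+)(:[0-9]+)?$/\1:10249/')"

echo "$keep $rewritten"   # → true 10.0.43.27:10249
```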
