
Overview

Version 2 of the Kubernetes integration has settings and requirements that differ from version 3. This document covers the settings that are specific to version 2 and that you'll need if you're still running it. Where nothing different is specified, use the settings for version 3.

Caution

New Relic has deprecated version 2 and recommends against using it. We maintain this documentation for users who are still using version 2 even though we no longer support it.

See Introduction to the Kubernetes integration to get started with the current version of Kubernetes.

To understand version 3 changes, see the Changes introduced in the Kubernetes integration version 3 document.

Monitoring control plane with integration version 2

This section covers how to configure control plane monitoring on versions 2 and earlier of the integration.

Please note that these versions had less flexible autodiscovery options and did not support external endpoints. We strongly recommend you update to version 3 at your earliest convenience.

Autodiscovery and default configuration: hostNetwork and privileged

In versions lower than v3, when the integration is deployed using privileged: false, the hostNetwork setting for the control plane component will also be set to false.
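For reference, here's a minimal sketch of that setting in an nri-bundle values file; placing the flag under the newrelic-infrastructure subchart is an assumption, so verify against your chart's values:

newrelic-infrastructure:
  # With privileged set to false, version 2 also runs the control plane
  # component without hostNetwork.
  privileged: false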

Discovery of control plane nodes and control plane components

The Kubernetes integration relies on the kubeadm labeling conventions to discover the control plane nodes and the control plane components. This means that control plane nodes should be labeled with node-role.kubernetes.io/control-plane="".
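As an illustration, a control plane node that the integration can discover carries that label in its metadata (the node name below is hypothetical):

apiVersion: v1
kind: Node
metadata:
  name: control-plane-1   # hypothetical node name
  labels:
    # kubeadm-style label the integration looks for
    node-role.kubernetes.io/control-plane: ""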

The control plane components should have either the k8s-app or the tier and component labels. See this table for accepted label combinations and values:

| Component | Kubeadm / Kops / ClusterAPI labels | OpenShift labels | Endpoint |
|---|---|---|---|
| API server | k8s-app=kube-apiserver or tier=control-plane component=kube-apiserver | app=openshift-kube-apiserver apiserver=true | localhost:443/metrics by default (can be configured); if the request fails, falls back to localhost:8080/metrics |
| etcd | k8s-app=etcd-manager-main or tier=control-plane component=etcd | k8s-app=etcd | localhost:4001/metrics |
| Scheduler | k8s-app=kube-scheduler or tier=control-plane component=kube-scheduler | app=openshift-kube-scheduler scheduler=true | localhost:10251/metrics |
| Controller manager | k8s-app=kube-controller-manager or tier=control-plane component=kube-controller-manager | app=kube-controller-manager kube-controller-manager=true | localhost:10252/metrics |

When the integration detects that it's running inside a control plane node, it tries to find which components are running on the node by looking for pods that match the labels listed in the table above. For every running component, the integration makes a request to its metrics endpoint.
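For instance, a kubeadm-managed API server pod would typically be matched through labels like these (the pod name below is just illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver-control-plane-1   # illustrative pod name
  namespace: kube-system
  labels:
    # label combination from the table above for Kubeadm / Kops / ClusterAPI
    tier: control-plane
    component: kube-apiserver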

Configuration

Control plane monitoring is automatic for agents running inside control plane nodes. The only component that requires an extra step to run is etcd, because it uses mutual TLS authentication (mTLS) for client requests. The API Server can also be configured to be queried using the Secure Port.

Important

Control plane monitoring for OpenShift 4.x requires additional configuration. For more information, see the OpenShift 4.x Configuration section.

etcd

In order to set mTLS for querying etcd, you need to set these two configuration options:

ETCD_TLS_SECRET_NAME

Name of a Kubernetes secret that contains the mTLS configuration. The secret should contain the following keys:

  • cert: the certificate that identifies the client making the request. It should be signed by an etcd trusted CA.

  • key: the private key used to generate the client certificate.

  • cacert: the root CA used to identify the etcd server certificate.

If the ETCD_TLS_SECRET_NAME option is not set, etcd metrics won't be fetched.

ETCD_TLS_SECRET_NAMESPACE

The namespace where the secret specified in ETCD_TLS_SECRET_NAME was created. If not set, the default namespace is used.
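As a sketch, such a secret could look like the following; the secret name, namespace, and certificate contents are placeholders, and only the cert, key, and cacert keys are required:

apiVersion: v1
kind: Secret
metadata:
  name: my-etcd-tls-secret    # use this name as ETCD_TLS_SECRET_NAME
  namespace: newrelic         # use this namespace as ETCD_TLS_SECRET_NAMESPACE
type: Opaque
stringData:
  # cert: client certificate signed by an etcd trusted CA (placeholder content)
  cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  # key: private key used to generate the client certificate (placeholder content)
  key: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  # cacert: root CA used to identify the etcd server certificate (placeholder content)
  cacert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----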

API server

By default, the API server metrics are queried using the localhost:8080 unsecured endpoint. If this port is disabled, you can also query these metrics over the secure port. To enable this, set the following configuration option in the Kubernetes integration manifest file:

API_SERVER_ENDPOINT_URL

The (secure) URL to query the metrics. The API server uses localhost:443 by default. Ensure that the ClusterRole has been updated to the newest version found in the manifest. Added in version 1.15.0.

Important

Note that the port can differ depending on the secure port used by the API server.

For example, in Minikube the API server secure port is 8443, so API_SERVER_ENDPOINT_URL should be set to https://localhost:8443.
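In that case, the corresponding entry in the integration manifest could look like this sketch:

- name: "API_SERVER_ENDPOINT_URL"
  value: "https://localhost:8443"   # Minikube's default API server secure port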

OpenShift configuration

Control plane components on OpenShift 4.x use endpoint URLs that require SSL and service account based authentication. Therefore, you can't use the default endpoint URLs.

Important

When installing on OpenShift through Helm, specify the configuration to automatically include these endpoints: setting openshift.enabled=true and openshift.version="4.x" includes the secure endpoints and enables the /var/run/crio.sock runtime.
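For example, a values file for the nri-bundle chart might set these flags as follows; placing them under the newrelic-infrastructure subchart is an assumption, so verify against your chart's values:

newrelic-infrastructure:
  openshift:
    enabled: true     # include the secure control plane endpoints
    version: "4.x"    # enable the /var/run/crio.sock runtime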

To configure control plane monitoring on OpenShift, uncomment the following environment variables in the customized manifest. URL values are pre-configured to the default base URLs for the control plane monitoring metrics endpoints in OpenShift 4.x.

- name: "SCHEDULER_ENDPOINT_URL"
value: "https://localhost:10259
- name: "ETCD_ENDPOINT_URL"
value: "https://localhost:9979"
- name: "CONTROLLER_MANAGER_ENDPOINT_URL"
value: "https://localhost:10257"
- name: "API_SERVER_ENDPOINT_URL"
value: "https://localhost:6443"

Important

Even though the custom ETCD_ENDPOINT_URL is defined, etcd requires HTTPS and mTLS authentication to be configured. For more on configuring mTLS for etcd in OpenShift, see Set up mTLS for etcd in OpenShift.

Kubernetes logs

To generate verbose logs and get version and configuration information, check out the information below.

Monitor services running on Kubernetes

Monitoring services in Kubernetes works by leveraging our infrastructure agent, on-host integrations, and an autodiscovery mechanism that points them to pods with a specified selector.

Check the Enable monitoring of services using the Helm Chart doc to learn how to do it. The following example for version 2 shows the YAML config for the Redis integration added to the values.yml file of the nri-bundle chart.

newrelic-infrastructure:
  integrations_config:
    - name: nri-redis.yaml
      data:
        discovery:
          command:
            # Run NRI Discovery for Kubernetes
            # https://github.com/newrelic/nri-discovery-kubernetes
            exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250
            match:
              label.app: redis
        integrations:
          - name: nri-redis
            env:
              # using the discovered IP as the hostname address
              HOSTNAME: ${discovery.ip}
              PORT: 6379
            labels:
              env: test

Add a service YAML to the Kubernetes integration config

If you're using Kubernetes integration version 2, you need to add an entry for this ConfigMap in the volumes and volumeMounts section of the DaemonSet's spec, to ensure all the files in the ConfigMap are mounted in /etc/newrelic-infra/integrations.d/.
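A minimal sketch of those DaemonSet spec additions follows; the ConfigMap and volume names are hypothetical and should match the ConfigMap that holds your service YAML:

spec:
  template:
    spec:
      containers:
        - name: newrelic-infra
          # mount every file in the ConfigMap under integrations.d
          volumeMounts:
            - name: nri-integration-cfg-volume
              mountPath: /etc/newrelic-infra/integrations.d/
      volumes:
        - name: nri-integration-cfg-volume
          configMap:
            name: nri-integration-cfg   # hypothetical ConfigMap name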
