Unless otherwise noted, configuration options for your Prometheus OpenMetrics integration with New Relic apply to both Docker and Kubernetes environments. At a minimum, the following configuration values are required: the cluster name (cluster_name) and your New Relic license key.
Recommendation: Configure your New Relic license key as an environment variable named LICENSE_KEY. This provides a more secure environment, as New Relic can load your environment variable from a mutual TLS authentication secret.
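As a minimal sketch, the container spec in your deployment manifest could load LICENSE_KEY from a Kubernetes secret; the secret name and key below are assumptions, not values defined by the integration:

env:
  - name: LICENSE_KEY
    valueFrom:
      secretKeyRef:
        name: newrelic-license-secret   # assumed secret name
        key: license-key                # assumed key inside that secret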
Configure nri-prometheus-latest.yaml
The nri-prometheus-latest.yaml manifest file includes the nri-prometheus-cfg map showing an example configuration. Use the manifest file to configure the following parameters.
Here are some key names and definitions for your Prometheus OpenMetrics config file. An example configuration excerpt follows the list.
Key name
Description
cluster_name
Required.
The name of the cluster. This value will be included as the clusterName attribute for all metrics.
verbose
Stringified boolean.
true (default): Logs debugging information.
false: Only logs error messages.
targets
Configuration of static endpoints to be scraped by the integration. It contains a list of objects. For more information about this structure, see the documentation about target configuration.
scrape_enabled_label
Kubernetes only.
String. The integration checks whether the Kubernetes pod or service carries an annotation or label with this name to decide whether it should be scraped.
This is particularly useful when you want to limit the amount of data sent to New Relic by ignoring or including specific metrics. Since by default the integration uses the same label Prometheus uses to discover scrapeable targets, most exporters you install set this label automatically.
To keep fine-grained control over the targets you want the integration to scrape, set this option to another value (such as newrelic/scrape) and then add the annotation or label newrelic/scrape: "true" to your Kubernetes objects. If both are set, annotations take precedence over labels.
Default: "prometheus.io/scrape"
scrape_duration
How often the scraper should run.
To lower memory usage, increase this value; a shorter interval increases memory usage.
The impact on memory usage comes from distributing target fetching over the scrape interval, so the integration doesn't query (and buffer) all the data at once.
Default: 30s. Valid values include 1s, 15s, 30s, 1m, 5m, etc.
scrape_timeout
The HTTP client timeout when fetching data from endpoints.
Default: 5s. Valid values include 1s, 15s, 30s, 1m, 5m, etc.
worker_threads
Number of worker threads used to scrape targets. You can increase it in environments with many targets or high-latency targets, but doing so may increase memory consumption.
Default: 4. Using more than 10 is not recommended.
require_scrape_enabled_label_for_nodes
Kubernetes only.
Whether or not Kubernetes nodes need labels to be scraped.
percentiles
To better support visualization of this data, percentiles are calculated based on the histogram metrics and sent to New Relic. Valid values include 50, 95, and 99.
emitter_proxy
Proxy used by the integration when submitting metrics:
[scheme]://[domain]:[port]
This proxy won't be used when fetching metrics from the targets.
By default this is empty, and no proxy will be used.
emitter_ca_file
Certificate to add to the root CA that the emitter will use when verifying server certificates. If left empty, TLS uses the host's root CA set.
emitter_insecure_skip_verify
Whether the emitter should skip TLS verification when submitting data. Default: false.
disable_autodiscovery
Set to true to disable autodiscovery in the Kubernetes cluster. This can be useful when the pod runs with a service account that has limited privileges. Default: false.
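To illustrate how these keys fit together, here is a sketch of a configuration excerpt using the defaults described above; the cluster name and the commented-out proxy URL are placeholders:

cluster_name: "my-cluster"
verbose: "false"
scrape_enabled_label: "prometheus.io/scrape"
scrape_duration: "30s"
scrape_timeout: "5s"
worker_threads: 4
# emitter_proxy: "http://proxy.example.com:8080"   # empty by default, so no proxy is used
emitter_insecure_skip_verify: false
disable_autodiscovery: false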
Configure objects in target key
If you want the target key in the configuration file to contain one or more objects, use the following structure in the YAML list (an example entry follows the list):
Key name
Description
description
A description for the URLs in this target.
urls
A list of strings with the URLs to be scraped.
tls_config
Authentication configuration used to send requests. It supports TLS and Mutual TLS. For more information, see the documentation about mutual TLS authentication.
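For example, a single entry under targets might look like the following sketch. The URLs and file paths are placeholders, and the exact field names accepted under tls_config are an assumption here; see the mutual TLS authentication documentation for the definitive schema:

targets:
  - description: "Secure etcd example"
    urls: ["https://192.168.3.1:2379", "https://192.168.3.2:2379"]
    tls_config:
      ca_file_path: "/etc/etcd/etcd-client-ca.crt"
      cert_file_path: "/etc/etcd/etcd-client.crt"
      key_file_path: "/etc/etcd/etcd-client.key"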
New Relic's Prometheus OpenMetrics integration automatically discovers which targets to scrape. To specify the port and endpoint path to use when constructing the target, you can use the prometheus.io/port and prometheus.io/path annotations or labels on your Kubernetes pods and services. Annotations take precedence over labels.
If prometheus.io/port is not present, the integration will try to scrape each port defined for the service or each ContainerPort defined for the pod.
If prometheus.io/path is not present, the integration defaults to /metrics.
If a service exposes metrics on a path other than the default /metrics, add a label to the pod, such as prometheus.io/path=my-metrics-path. If the path to the metrics endpoint is more complex and can't be a valid label value (for example, foo/bar), use an annotation instead.
In this example, you have a deployment in your cluster whose pods expose Prometheus metrics on port 8080 at the path /my-metrics.
In the PodSpec metadata of the deployment manifest, set the labels prometheus.io/port: "8080" and prometheus.io/path: "my-metrics". When the integration retrieves metrics from your pods, it sends requests to http://<pod-ip>:8080/my-metrics.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "my-metrics"
Services and Endpoints scrape behaviour
By default, services are scraped directly instead of the underlying endpoints since scrape_services is set to true and scrape_endpoints to false.
To change this behaviour, set scrape_endpoints to true so that the Prometheus OpenMetrics integration scrapes the underlying endpoints, as the Prometheus server natively does, instead of the services directly.
Note that, depending on the number of endpoints behind the services in the cluster, the load and the amount of data ingested can increase considerably. Monitor the integration and, if needed, increase its resource requirements.
Moreover, even though both scrape_services and scrape_endpoints can be set to true for backward compatibility, doing so leads to duplicate data.
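For reference, a sketch of the relevant keys in the configuration file when scraping endpoints instead of services:

scrape_services: false
scrape_endpoints: true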
Reload the configuration
The Prometheus OpenMetrics integration does not automatically reload the configuration when you make changes to the configuration file.
Docker
To reload the configuration, restart the container running the integration:
docker restart nri-prometheus
Kubernetes
To reload the configuration, restart the integration. Recommendation: Scale the deployment down to zero replicas, and then scale it back to one replica:
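As a sketch, assuming the deployment created by nri-prometheus-latest.yaml is named nri-prometheus and runs in the default namespace:

kubectl scale deployment nri-prometheus --replicas=0
kubectl scale deployment nri-prometheus --replicas=1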