Monitor your NGINX servers running in Kubernetes clusters using the NRDOT collector (recommended) or OpenTelemetry collector contrib to send metrics and telemetry data to New Relic.
This Kubernetes-specific integration automatically discovers NGINX pods in your cluster and collects metrics without manual configuration for each instance. It leverages the OpenTelemetry nginxreceiver and receivercreator to dynamically monitor NGINX performance metrics, connection statistics, and server health across your containerized environment.
Set up NGINX monitoring
Choose your preferred collector and follow the steps:
NRDOT collector
Before you begin
Ensure you have:
- A valid New Relic license key
- The HTTP stub status module enabled on each NGINX pod you want to monitor
- The labels `app` and `role` added to each NGINX pod you want to monitor
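The stub status module only needs a small server block. A minimal, hypothetical example follows; the port (8080) and path (/basic_status) are placeholders and must match the `<YOUR_STUB_STATUS_PORT>` and `<YOUR_STUB_STATUS_PATH>` values you use in the collector configuration:

```nginx
# Hypothetical example: expose the NGINX stub status page.
# Port and path are placeholders; adjust them to your environment.
server {
    listen 8080;
    location /basic_status {
        stub_status;
    }
}
```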
Configure the NRDOT collector
Install the NRDOT collector using Kubernetes manifests. Helm chart support is coming soon.
Manifest install
After completing the base Kubernetes OpenTelemetry manifest installation, configure NGINX monitoring by following these steps:
Update the collector image to use NRDOT collector.
In both deployment.yaml and daemonset.yaml files in your local rendered directory, update the image to:
```yaml
image: newrelic/nrdot-collector:latest
```
Update the deployment-configmap.yaml for NGINX monitoring:
Choose one of the following configuration options based on your monitoring requirements:
NGINX-only monitoring
Important: This option monitors NGINX only and removes other Kubernetes metrics collection. You'll delete the additional collectors later to prevent unwanted metric ingestion.
Replace the content under deployment-config.yaml: | with the following NGINX-specific configuration:
```yaml
extensions:
  health_check:
  k8s_observer:
    auth_type: serviceAccount

receivers:
  receiver_creator/nginx:
    watch_observers: [k8s_observer]
    receivers:
      nginx:
        rule: type == "pod" && labels["app"] == "nginx" && labels["role"] == "reverse-proxy"
        config:
          endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>'
          collection_interval: 30s
          metrics:
            nginx.connections_accepted:
              enabled: true
            nginx.connections_handled:
              enabled: true
            nginx.connections_current:
              enabled: true
        resource_attributes:
          nginx.server.endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>'
          nginx.port: '<YOUR_STUB_STATUS_PORT>'

processors:
  batch:
    send_batch_max_size: 1000
    timeout: 30s
  memory_limiter:
    check_interval: 1s
    limit_percentage: 50
    spike_limit_percentage: 25
  resource/cluster:
    attributes:
      - key: k8s.cluster.name
        value: <CLUSTER_NAME>
        action: upsert
  transform/nginx:
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["nginx.display.name"], Concat([attributes["k8s.cluster.name"], attributes["k8s.namespace.name"], attributes["k8s.pod.name"]], ":"))
          - set(attributes["nginx.deployment.name"], attributes["k8s.pod.name"])
  transform/metadata_nullify:

exporters:
  otlphttp/newrelic:
    endpoint: "<YOUR_NEWRELIC_OTLP_ENDPOINT>"
    headers:
      api-key: ${env:NR_LICENSE_KEY}

service:
  extensions: [health_check, k8s_observer]
  pipelines:
    metrics:
      receivers: [receiver_creator/nginx]
      processors: [batch, resource/cluster, transform/nginx, transform/metadata_nullify, memory_limiter]
      exporters: [otlphttp/newrelic]
```
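The nginxreceiver scrapes the stub status page and maps its counters to the configured connection metrics. As a quick illustration with made-up numbers, the third line of a stub status response carries the accepts/handled/requests counters:

```shell
# Parse a sample stub_status payload (hypothetical values) to show which
# fields back nginx.connections_accepted and nginx.connections_handled.
printf 'Active connections: 2 \nserver accepts handled requests\n 16 16 31 \nReading: 0 Writing: 1 Waiting: 1 \n' \
  | awk 'NR == 3 { print "accepted=" $1, "handled=" $2, "requests=" $3 }'
# prints: accepted=16 handled=16 requests=31
```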
Configuration parameters
The following table describes the key configuration parameters:

| Parameter | Description |
| --- | --- |
| `<YOUR_STUB_STATUS_PORT>` | Replace with your NGINX stub status port (for example, 80 or 8080) |
| `<YOUR_STUB_STATUS_PATH>` | Replace with your NGINX stub status path (for example, basic_status) |
| `<CLUSTER_NAME>` | Replace with your Kubernetes cluster name for identification in New Relic |
| `<YOUR_NEWRELIC_OTLP_ENDPOINT>` | Update with your region's OTLP endpoint. See OTLP endpoint documentation |
| `app` and `role` labels | Pod labels used to identify NGINX pods (update the rule to match your labels) |
| `collection_interval` | Interval at which metrics are collected. Default: 30s |
| `send_batch_max_size` | Maximum number of metrics to batch before sending. Default: 1000 |
| `timeout` | Time to wait before sending a batch regardless of size. Default: 30s |
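If your pods carry different labels, only the discovery rule needs to change. A hypothetical example using the standard Kubernetes app label instead of `app`/`role`:

```yaml
# Hypothetical rule: discover pods by the app.kubernetes.io/name label.
rule: type == "pod" && labels["app.kubernetes.io/name"] == "nginx"
```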
K8s + NGINX monitoring
Add the following sections to your existing deployment-configmap.yaml:
Extensions to add:

```yaml
k8s_observer:
  auth_type: serviceAccount
```
Receivers to add:

```yaml
receiver_creator/nginx:
  watch_observers: [k8s_observer]
  receivers:
    nginx:
      rule: type == "pod" && labels["app"] == "nginx" && labels["role"] == "reverse-proxy"
      config:
        endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>'
        collection_interval: 30s
        metrics:
          nginx.connections_accepted:
            enabled: true
          nginx.connections_handled:
            enabled: true
          nginx.connections_current:
            enabled: true
      resource_attributes:
        nginx.server.endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>'
        nginx.port: '<YOUR_STUB_STATUS_PORT>'
```
Processors to add:

```yaml
transform/nginx:
  metric_statements:
    - context: datapoint
      statements:
        - set(attributes["nginx.display.name"], Concat([attributes["k8s.cluster.name"], attributes["k8s.namespace.name"], attributes["k8s.pod.name"]], ":"))
        - set(attributes["nginx.deployment.name"], attributes["k8s.pod.name"])
transform/metadata_nullify:
```
Service pipelines to add:

```yaml
service:
  extensions: [health_check, k8s_observer]
  pipelines:
    metrics:
      receivers: [receiver_creator/nginx]
      processors: [batch, resource/cluster, transform/nginx, transform/metadata_nullify, memory_limiter]
      exporters: [otlphttp/newrelic]
```
Configuration parameters
The following table describes the key configuration parameters:

| Parameter | Description |
| --- | --- |
| `<YOUR_STUB_STATUS_PORT>` | Replace with your NGINX stub status port (for example, 80 or 8080) |
| `<YOUR_STUB_STATUS_PATH>` | Replace with your NGINX stub status path (for example, basic_status) |
| `<CLUSTER_NAME>` | Replace with your Kubernetes cluster name for identification in New Relic |
| `app` and `role` labels | Pod labels used to identify NGINX pods (update the rule to match your labels) |
| `collection_interval` | Interval at which metrics are collected. Default: 30s |
| `memory_limiter` | Processor used in the existing Kubernetes configuration to limit memory usage |
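For reference, a typical memory_limiter configuration looks like the sketch below; the values are illustrative, so keep whatever your existing Kubernetes configuration already sets:

```yaml
# Illustrative values only; retain your existing settings if present.
memory_limiter:
  check_interval: 1s
  limit_percentage: 50
  spike_limit_percentage: 25
```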
Apply the updated manifests and restart the deployment.
For NGINX-only monitoring, run these commands:
```shell
$ kubectl apply -n newrelic -R -f rendered
$ kubectl delete daemonset nr-k8s-otel-collector-daemonset -n newrelic
$ kubectl delete deployment nr-k8s-otel-collector-kube-state-metrics -n newrelic
$ kubectl rollout restart deployment nr-k8s-otel-collector-deployment -n newrelic
```
For K8s + NGINX monitoring, run these commands:
```shell
$ kubectl apply -n newrelic -R -f rendered
$ kubectl rollout restart deployment nr-k8s-otel-collector-deployment -n newrelic
```
Helm install
Helm chart support for the NRDOT collector with NGINX monitoring is coming soon.
OpenTelemetry Collector Contrib
Before you begin
Ensure you have:
- A valid New Relic license key
- The HTTP stub status module enabled on each NGINX pod you want to monitor
- The labels `app` and `role` added to each NGINX pod you want to monitor
- Helm installed

Configure the OpenTelemetry Collector
Deploy the OpenTelemetry Collector to your Kubernetes cluster using Helm. The collector automatically discovers and scrapes metrics from your NGINX pods.
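The discovery rule matches on the pod template labels, so each NGINX workload needs them on its pods, not just on the Deployment object. A minimal, hypothetical sketch (the nginx-example name and port are placeholders):

```yaml
# Hypothetical Deployment fragment: the pod template labels are what the
# receiver_creator discovery rule matches against.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        role: reverse-proxy
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 8080
```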
Step 1: Create custom values.yaml configuration
Download or create a custom values.yaml file based on the OpenTelemetry Collector values.yaml.
Update the following sections in your values.yaml file:
Set mode to deployment:

```yaml
mode: deployment
```
Replace the image repository:

```yaml
image:
  repository: otel/opentelemetry-collector-contrib
```
Configure cluster role:

```yaml
clusterRole:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods", "nodes", "nodes/stats", "nodes/proxy"]
      verbs: ["get", "list", "watch"]
    - apiGroups: ["apps"]
      resources: ["replicasets"]
      verbs: ["get", "list", "watch"]
```
Configure resource limits:
Replace the entire config section with NGINX monitoring configuration:
```yaml
config:
  extensions:
    health_check:
    k8s_observer:
      auth_type: serviceAccount
  receivers:
    receiver_creator/nginx:
      watch_observers: [k8s_observer]
      receivers:
        nginx:
          rule: type == "pod" && labels["app"] == "nginx" && labels["role"] == "reverse-proxy"
          config:
            endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>'
            collection_interval: 30s
            metrics:
              nginx.connections_accepted:
                enabled: true
              nginx.connections_handled:
                enabled: true
              nginx.connections_current:
                enabled: true
          resource_attributes:
            nginx.server.endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>'
            nginx.port: '<YOUR_STUB_STATUS_PORT>'
  processors:
    batch:
      send_batch_size: 1024
      timeout: 30s
    resource/cluster:
      attributes:
        - key: k8s.cluster.name
          value: <CLUSTER_NAME>
          action: upsert
    transform/nginx:
      metric_statements:
        - context: datapoint
          statements:
            - set(attributes["nginx.display.name"], Concat([attributes["k8s.cluster.name"], attributes["k8s.namespace.name"], attributes["k8s.pod.name"]], ":"))
            - set(attributes["nginx.deployment.name"], attributes["k8s.pod.name"])
    transform/metadata_nullify:
  exporters:
    otlphttp/newrelic:
      endpoint: "<YOUR_NEWRELIC_OTLP_ENDPOINT>"
      headers:
        api-key: "<YOUR_NEW_RELIC_LICENSE_KEY>"
  service:
    extensions: [health_check, k8s_observer]
    pipelines:
      metrics:
        receivers: [receiver_creator/nginx]
        processors: [batch, resource/cluster, transform/nginx, transform/metadata_nullify]
        exporters: [otlphttp/newrelic]
```
Configuration parameters
The following table describes the key configuration parameters:

| Parameter | Description |
| --- | --- |
| `<YOUR_STUB_STATUS_PORT>` | Replace with your NGINX stub status port (for example, 80 or 8080) |
| `<YOUR_STUB_STATUS_PATH>` | Replace with your NGINX stub status path (for example, basic_status) |
| `<CLUSTER_NAME>` | Replace with your Kubernetes cluster name for identification in New Relic |
| `<YOUR_NEWRELIC_OTLP_ENDPOINT>` | Update with your region's OTLP endpoint. See OTLP endpoint documentation |
| `<YOUR_NEW_RELIC_LICENSE_KEY>` | Replace with your New Relic license key |
| `app` and `role` labels | Pod labels used to identify NGINX pods (update the rule to match your labels) |
| `basic_status` | NGINX stub status endpoint path (update if using a different path) |
| `collection_interval` | Interval at which metrics are collected. Default: 30s |
| `send_batch_size` | Number of metrics to batch before sending. Default: 1024 |
| `timeout` | Time to wait before sending a batch regardless of size. Default: 30s |
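As of this writing, the New Relic OTLP endpoint is region-specific, for example `https://otlp.nr-data.net` (US) and `https://otlp.eu01.nr-data.net` (EU); verify the value for your account in the linked OTLP endpoint documentation. In the exporter section that would look like:

```yaml
# Example assuming a US-region account; check the OTLP endpoint docs
# for your region before using this value.
otlphttp/newrelic:
  endpoint: "https://otlp.nr-data.net"
  headers:
    api-key: "<YOUR_NEW_RELIC_LICENSE_KEY>"
```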
Step 2: Install with Helm
Follow the OpenTelemetry Collector Helm chart installation guide to install the collector using your custom values.yaml file.
Example commands:
```shell
$ helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
$ helm upgrade my-opentelemetry-collector open-telemetry/opentelemetry-collector -f your-custom-values.yaml -n newrelic --create-namespace --install
```
Step 3: Verify deployment and data collection
Verify the pods are running:
```shell
$ kubectl get pods -n newrelic --watch
```
You should see the OpenTelemetry Collector pods in a Running state in the newrelic namespace.
Run an NRQL query in New Relic to verify data collection. Replace the cluster name with your actual cluster name:
```sql
SELECT * FROM Metric
WHERE metricName LIKE 'nginx.%'
  AND instrumentation.provider = 'opentelemetry'
  AND k8s.cluster.name = 'your-cluster-name'
```
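Once data is flowing, you can also chart individual metrics. For example, a hypothetical query plotting current connections per pod, using one of the metrics collected above:

```sql
FROM Metric SELECT latest(nginx.connections_current)
WHERE instrumentation.provider = 'opentelemetry'
FACET k8s.pod.name TIMESERIES
```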
View your data in New Relic
Once your setup is complete and data is flowing, you can access your NGINX metrics in New Relic dashboards and create custom alerts.
For complete instructions on accessing dashboards, querying data with NRQL, and creating alerts, see Find and query your NGINX data .
Metrics and attributes reference
This integration collects the same core NGINX metrics as the on-host deployment, with additional Kubernetes-specific resource attributes for cluster, namespace, and pod identification.
For complete metrics and attributes reference: See NGINX OpenTelemetry metrics and attributes reference for detailed descriptions of all metrics, types, and resource attributes for Kubernetes deployments.
Next steps
Explore related monitoring:
Kubernetes-specific resources: