
Gateway YAML configuration reference

This reference covers the YAML syntax for advanced users creating custom gateway configurations. For conceptual information, see the Gateway overview; for a guided experience, use the Gateway UI. The Gateway UI is recommended for most users, but YAML configuration offers full control over the telemetry pipeline structure.

Complete YAML structure

Gateway configurations use a declarative YAML format:

version: 2.0.0
autoscaling:
  minReplicas: 6
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60
configuration:
  simplified/v1:
    troubleshooting:
      proxy: false
      requestTraceLogs: false
    steps:
      receivelogs:
        description: Receive logs from OTLP and New Relic proprietary sources
        output:
          - probabilistic_sampler/Logs
      receivemetrics:
        description: Receive metrics from OTLP and New Relic proprietary sources
        output:
          - filter/Metrics
      receivetraces:
        description: Receive traces from OTLP and New Relic proprietary sources
        output:
          - probabilistic_sampler/Traces
      probabilistic_sampler/Logs:
        description: Probabilistic sampling for all logs
        output:
          - filter/Logs
        config:
          global_sampling_percentage: 100
          conditionalSamplingRules:
            - name: sample the log records for ruby test service
              description: sample the log records for ruby test service at 70%
              sampling_percentage: 70
              source_of_randomness: trace.id
              condition: resource.attributes["service.name"] == "ruby-test-service"
      probabilistic_sampler/Traces:
        description: Probabilistic sampling for traces
        output:
          - filter/Traces
        config:
          global_sampling_percentage: 80
      filter/Logs:
        description: Apply drop rules and data processing for logs
        output:
          - transform/Logs
        config:
          error_mode: ignore
          logs:
            rules:
              - name: drop the log records
                description: drop all records with severity text INFO
                value: log.severity_text == "INFO"
      filter/Metrics:
        description: Apply drop rules and data processing for metrics
        output:
          - transform/Metrics
        config:
          error_mode: ignore
          metric:
            rules:
              - name: drop entire metrics
                description: drop the metric on the basis of its name
                value: (name == "humidity_level_metric" and IsMatch(resource.attributes["process_group_id"], "pcg_.*"))
          datapoint:
            rules:
              - name: drop datapoint
                description: drop datapoints on the basis of unit
                value: (attributes["unit"] == "Fahrenheit" and (IsMatch(attributes["process_group_id"], "pcg_.*") or IsMatch(resource.attributes["process_group_id"], "pcg_.*")))
      filter/Traces:
        description: Apply drop rules and data processing for traces
        output:
          - transform/Traces
        config:
          error_mode: ignore
          span:
            rules:
              - name: delete spans
                description: delete spans for a specified host
                value: (attributes["host"] == "host123.example.com" and (IsMatch(attributes["control_group_id"], "pcg_.*") or IsMatch(resource.attributes["control_group_id"], "pcg_.*")))
          span_event:
            rules:
              - name: drop span events
                description: drop all span events named debug_event
                value: name == "debug_event"
      transform/Logs:
        description: Transform and process logs
        output:
          - nrexporter/newrelic
        config:
          log_statements:
            - context: log
              name: add new field to attribute
              description: for the otlp-java-test-service application, add a New Relic source type field
              conditions:
                - resource.attributes["service.name"] == "otlp-java-test-service"
              statements:
                - set(resource.attributes["source.type"], "otlp")
      transform/Metrics:
        description: Transform and process metrics
        output:
          - nrexporter/newrelic
        config:
          metric_statements:
            - context: metric
              name: add a new attribute
              description: add a new field to the resource attributes
              conditions:
                - resource.attributes["service.name"] == "payments-api"
              statements:
                - set(resource.attributes["application.name"], "compute-application")
      transform/Traces:
        description: Transform and process traces
        output:
          - nrexporter/newrelic
        config:
          trace_statements:
            - context: span
              name: remove the attribute
              description: remove the attribute when the service name is payment-service
              conditions:
                - resource.attributes["service.name"] == "payment-service"
              statements:
                - delete_key(resource.attributes, "service.version")
      nrexporter/newrelic:
        description: Export to New Relic

Top-level structure

  • version: Configuration format version (currently "2.0.0")
  • autoscaling: Gateway replica scaling configuration
  • configuration.simplified/v1: Simplified abstraction layer for defining telemetry pipelines
  • troubleshooting: Debug settings

Configuration hierarchy

The Gateway configuration follows a directed acyclic graph (DAG) structure where each step defines its behavior and points to the next step in the pipeline using the output field. This creates an explicit data flow: data enters through receivers, flows through processors (transform, filter, sample), and exits through exporters.
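For example, a minimal logs-only pipeline can be sketched as follows (a sketch reusing the simplified/v1 schema and step names from the full example above; the DEBUG drop rule is hypothetical):

```yaml
# Minimal logs pipeline: receiver -> filter -> exporter.
configuration:
  simplified/v1:
    steps:
      receivelogs:
        description: Receive logs
        output:
          - filter/Logs            # data flows to the filter step
      filter/Logs:
        description: Drop DEBUG records
        output:
          - nrexporter/newrelic    # then on to the exporter
        config:
          error_mode: ignore
          logs:
            rules:
              - name: drop debug records
                value: log.severity_text == "DEBUG"
      nrexporter/newrelic:
        description: Export to New Relic
```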

Step naming conventions

  • Receivers: receivelogs, receivemetrics, receivetraces
  • Processors: processortype/TelemetryType format:
    • Transform: transform/Logs, transform/Metrics, transform/Traces
    • Filter: filter/Logs, filter/Metrics, filter/Traces
    • Sampling: probabilistic_sampler/Logs, probabilistic_sampler/Traces
  • Exporters: nrexporter/newrelic
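Because each step's output is an array of step names, a step can in principle list more than one downstream step. A sketch, assuming your gateway version accepts multiple outputs (the second target here is hypothetical):

```yaml
# Hypothetical fan-out: one receiver feeding two downstream steps.
receivelogs:
  description: Receive logs
  output:
    - probabilistic_sampler/Logs
    - filter/Logs
```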

Processor configurations

Gateway supports three primary processor types for transforming, filtering, and sampling telemetry data.

Transform processor

Used for modifying, enriching, or parsing telemetry using OTTL (OpenTelemetry Transformation Language).

Config fields:

  • metric_statements: Array for metric transformations (context: metric)
  • log_statements: Array for log transformations (context: log)
  • trace_statements: Array for trace transformations (context: span)
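As a focused illustration, a transform step that tags matching spans might look like this (a sketch reusing the trace_statements shape from the full example; the service name and attribute values are hypothetical):

```yaml
transform/Traces:
  description: Tag spans from the checkout service
  output:
    - nrexporter/newrelic
  config:
    trace_statements:
      - context: span
        name: tag checkout spans
        conditions:
          - resource.attributes["service.name"] == "checkout-service"  # hypothetical service
        statements:
          - set(attributes["team"], "payments")  # OTTL set() on span attributes
```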

Filter processor

Used to drop telemetry records based on boolean expressions.

Config fields:

  • error_mode: How expression errors are handled (the example uses ignore)
  • logs: Drop rules for log records
  • metric and datapoint: Drop rules for whole metrics and for individual data points
  • span and span_event: Drop rules for spans and span events

Each section contains a rules array; a rule's value is an OTTL boolean expression, and records matching it are dropped.
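A standalone filter step that drops health-check spans could be sketched as (the route attribute value is hypothetical):

```yaml
filter/Traces:
  description: Drop health-check spans
  output:
    - transform/Traces
  config:
    error_mode: ignore
    span:
      rules:
        - name: drop health checks
          value: attributes["http.route"] == "/healthz"  # hypothetical route
```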

Sampling processor

Used to implement probabilistic sampling logic.

Config fields:

  • global_sampling_percentage: Default sampling rate (0-100)
  • conditionalSamplingRules: Array of conditional rules
    • name: Rule identifier
    • description: Human-readable description
    • sampling_percentage: Sampling rate for matched data (0-100)
    • source_of_randomness: Field to use for randomness (typically trace.id)
    • condition: Attribute matching expression
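Putting these fields together, a sampler that keeps 50% of all logs but 100% of error logs could be sketched as (the severity condition is a hypothetical rule, not from the example above):

```yaml
probabilistic_sampler/Logs:
  description: Sample logs, keep all errors
  output:
    - filter/Logs
  config:
    global_sampling_percentage: 50
    conditionalSamplingRules:
      - name: keep all error logs
        description: always keep records with severity ERROR
        sampling_percentage: 100
        source_of_randomness: trace.id
        condition: log.severity_text == "ERROR"
```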

Field reference

Top-level fields

Field                                      | Type    | Required | Default
-------------------------------------------|---------|----------|--------
version                                    | string  | Yes      | -
autoscaling.minReplicas                    | integer | No       | 6
autoscaling.maxReplicas                    | integer | No       | 10
autoscaling.targetCPUUtilizationPercentage | integer | No       | 60
configuration.simplified/v1                | object  | Yes      | -
troubleshooting.proxy                      | boolean | No       | false
troubleshooting.requestTraceLogs           | boolean | No       | false

Step fields

Field       | Type   | Required    | Description
------------|--------|-------------|------------
description | string | Recommended | Human-readable description
config      | object | Conditional | Required for processors; omit for receivers/exporters
output      | array  | Yes         | Next step names (empty [] for exporters)

Field naming conventions

  • Use exact casing from the examples (YAML is case-sensitive).
  • Each step's output array specifies the next step(s) in the pipeline.

Validation and deployment

  1. Validate syntax with a YAML linter.
  2. Deploy to a non-production environment first.
  3. Confirm telemetry reaches New Relic correctly.
  4. Upload the configuration through the Gateway UI.
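The first step can be automated locally. The sketch below is a minimal checker, not an official tool: it assumes PyYAML is installed (`pip install pyyaml`), and the structural checks cover only the fields described in this reference (a required `version`, and `output` entries that must name defined steps).

```python
# Minimal local syntax and wiring check for a gateway config file.
# Assumes PyYAML is installed; checks are limited to fields from this reference.
import sys
import yaml


def validate(path: str) -> list[str]:
    """Parse the YAML file and run basic structural checks."""
    with open(path) as f:
        try:
            doc = yaml.safe_load(f)
        except yaml.YAMLError as exc:
            return [f"YAML syntax error: {exc}"]
    if not isinstance(doc, dict):
        return ["top level must be a mapping"]
    errors = []
    if "version" not in doc:
        errors.append("missing required field: version")
    steps = (
        doc.get("configuration", {})
        .get("simplified/v1", {})
        .get("steps", {})
    )
    # Every name in an output array should refer to a defined step.
    for name, step in steps.items():
        for target in (step or {}).get("output", []) or []:
            if target not in steps:
                errors.append(f"{name}: output references unknown step {target!r}")
    return errors


if __name__ == "__main__" and len(sys.argv) > 1:
    for problem in validate(sys.argv[1]):
        print(problem)
```

Run it as `python validate_gateway.py config.yaml`; an empty output means the basic checks passed.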

