The New Relic Java agent automatically collects data from Kafka's Java clients library. Because Kafka is a high-performance messaging system that generates a lot of data, you can customize the agent for your app's specific throughput and use cases.
This document explains how to collect and view three types of Kafka data: metrics, events, and distributed traces.
Tip
We also have a Kafka integration. For details on that, see Kafka monitoring integration.
Requirements
Kafka clients instrumentation is available in Java agent versions 4.12.0 or higher. Kafka Streams instrumentation is available in Java agent versions 8.1.0 or higher. To see all supported Kafka libraries, check the Java compatibility and requirements page. Note that Kafka Streams runs on top of Kafka clients, so all of the instrumentation that applies to Kafka clients also applies to Streams.
View Kafka metrics
After installation, the agent automatically reports rich Kafka metrics with information about messaging rates, latency, lag, and more. The Java agent collects all Kafka consumer and producer metrics (but not Connect or Streams metrics).
To view these metrics, create a custom dashboard:
Go to the New Relic metric explorer.
Use the metric explorer to locate your metrics. Here are some folders where you can find metrics:
- Kafka metrics: `MessageBroker/Kafka/Internal/KafkaMetricName`. For example, the `request-rate` metric: `MessageBroker/Kafka/Internal/consumer-metrics/request-rate`
- Kafka Streams: `Kafka/Streams/KafkaStreamsMetricName`. For example, the `poll-latency-avg` metric: `Kafka/Streams/stream-thread-metrics/poll-latency-avg`
- Kafka Connect: `Kafka/Connect/KafkaConnectMetricName`. For example, the `connector-count` metric: `Kafka/Connect/connect-worker-metrics/connector-count`
Add the metrics you want to monitor to a dashboard by clicking Add to dashboard.
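If you prefer NRQL over the metric explorer, you can also query these timeslice metrics from the `Metric` type. A minimal sketch, assuming a hypothetical app named `My App`:

```sql
SELECT average(newrelic.timeslice.value)
FROM Metric
WHERE appName = 'My App'
AND metricTimesliceName = 'MessageBroker/Kafka/Internal/consumer-metrics/request-rate'
TIMESERIES
```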
Tip
For a full list of Kafka consumer, producer, and streams metrics, see the Kafka docs. The metrics in those docs are searchable via JMX. Keep in mind not every metric mentioned in the docs will be exported into New Relic. This could be due to one of these reasons:
- The metric is not actually generated by Kafka clients or Kafka Streams. This may be due to using an older version of clients or Streams, or to how you set up and use your Kafka libraries.
- The metric is not numeric or its value is `NaN`. New Relic only accepts metrics with a numeric value.
Enable Kafka event collection
You can configure the agent to collect event data instead of metric timeslice data (for the difference between metric timeslice and event data, see data collection). This allows you to use NRQL to filter and facet the default Kafka metrics. When enabled, the agent collects one Kafka event every 30 seconds. This event contains all of the data from Kafka consumer and producer metrics captured since the previous event.
If you are using Kafka Streams, the agent generates a separate event that contains all of the data from Kafka Streams metrics captured since the previous event. This event is also collected every 30 seconds.
Important
The agent records up to 2000 events per harvest cycle, though you can change this value with `max_samples_stored`. Kafka event data is included in this pool. If you use the `recordCustomEvent()` API call to send custom events to New Relic and you send more than 2000 events, the agent will discard some Kafka or custom events. A config sketch for raising this limit follows this note.
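If Kafka events are crowding out your custom events (or vice versa), you can raise the pool size. A minimal `newrelic.yml` sketch, assuming a limit of 5000 suits your account's data limits:

```yaml
custom_insights_events:
  max_samples_stored: 5000
```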
To enable Kafka event collection:
Add the `kafka.metrics.as_events.enabled` element to your `newrelic.yml` config file:

```yaml
kafka.metrics.as_events.enabled: true
```

Restart your JVM.
Use the event explorer to view your Kafka events, located in the `KafkaMetrics` event type. Or, use NRQL to query your events directly. For example:

```sql
SELECT average('producer-metrics.record-send-rate') FROM KafkaMetrics SINCE 30 minutes ago TIMESERIES
```

If you are querying Kafka Streams metrics, use the `KafkaStreamsMetrics` event type to access streams-specific metrics.
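For example, a sketch of a streams query, assuming the attribute name follows the same `group.metric-name` pattern shown above for `KafkaMetrics`:

```sql
SELECT average('stream-thread-metrics.poll-latency-avg') FROM KafkaStreamsMetrics SINCE 30 minutes ago TIMESERIES
```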
Important
Keep in mind that the limitations on what kinds of Kafka metrics you can send to New Relic as timeslice metrics also apply to events. That is, non-numeric and NaN metrics are not included as event attributes.
Enable Kafka node metrics
There is an alternative instrumentation module for Kafka clients that provides more granular Kafka metrics. This instrumentation module is available since Java agent 8.6.0 and is disabled by default.
To enable this instrumentation module, you must disable the existing instrumentation module and enable the new one by adding the following to your `newrelic.yml` config file:

```yaml
class_transformer:
  kafka-clients-metrics:
    enabled: false
  kafka-clients-node-metrics:
    enabled: true
```
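For reference, in a stock `newrelic.yml` the `class_transformer` block is nested under the `common` stanza. A minimal sketch of the full nesting (your file may organize environments differently):

```yaml
common: &default_settings
  # ... other agent settings ...
  class_transformer:
    kafka-clients-metrics:
      enabled: false
    kafka-clients-node-metrics:
      enabled: true
```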
Enable Kafka config events
The `kafka-clients-config` instrumentation module periodically sends events with the contents of your Kafka client configuration. This module is available since Java agent 8.6.0 and is disabled by default.
To enable `kafka-clients-config`, add the following to your `newrelic.yml` config file:

```yaml
class_transformer:
  kafka-clients-config:
    enabled: true
```
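For context, the configuration this module reports is the set of properties you pass when building your clients. A minimal producer sketch using the standard Kafka API (the broker address is a placeholder):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerConfigExample {
    public static Producer<String, String> buildProducer() {
        Properties props = new Properties();
        // localhost:9092 is a placeholder broker address.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Settings like these are the kind of client configuration the
        // kafka-clients-config module periodically reports.
        return new KafkaProducer<>(props);
    }
}
```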
Enable Kafka Streams transactions
If you're using Kafka Streams, by default we do not enable transactions. This is to prevent unnecessary overhead because Kafka applications tend to have high throughput.
Unlike JMS transactions, Kafka Streams transactions are not processed per record. Instead, a transaction begins when a Kafka consumer polls records and then the resulting data gets processed.
If you do wish to create transactions, you need to enable the `kafka-streams-spans` module:

```yaml
class_transformer:
  kafka-streams-spans:
    enabled: true
```
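To make the transaction boundary concrete: with `kafka-streams-spans` enabled, each poll-and-process cycle of a topology like the following sketch would be recorded as one transaction. This uses the standard Kafka Streams API; the application ID, broker address, and topic names are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-streams-app"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic");
        // Each batch of polled records flowing through this topology maps to
        // one transaction, not one transaction per record.
        source.mapValues(value -> value.toUpperCase()).to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }
}
```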
Enable Kafka Connect transactions
If you're using Kafka Connect, by default we do not enable transactions. This is to prevent unnecessary overhead because Kafka applications tend to have high throughput.
Kafka Connect transactions are recorded for each iteration of the sink/source task. For a sink task, a transaction consists of polling a Kafka consumer, converting each message, and sending data to the target. For a source task, a transaction consists of reading from the target, converting the data into messages, and sending each message with a Kafka producer.
If you do wish to collect these transactions, you need to enable the `kafka-connect-spans` module:

```yaml
class_transformer:
  kafka-connect-spans:
    enabled: true
```
Enable Kafka distributed traces
The Java agent can also collect distributed traces from Kafka clients. Because Kafka Streams runs on top of Kafka clients, the steps to manage distributed tracing also apply. Enabling traces doesn't affect the agent's normal operations; it will still report metric or event data from Kafka.
Impacts and requirements to consider before enabling:
- The instrumentation adds a 150 to 200 byte payload to message headers. If your Kafka messages are very small, traces can add significant processing and storage overhead. This additional payload size could cause Kafka to drop messages if they exceed your Kafka messaging size limit. For this reason, we recommend testing out Kafka distributed traces in a dev environment before enabling them in production.
- Distributed tracing is only available for Kafka client versions 0.11.0.0 or higher.
- If you have not enabled distributed tracing for your app before, read the Transition guide before enabling.
- To propagate W3C trace context via Kafka message headers, see the distributed tracing API usage guide for details on APIs that were released in Java agent 6.4.0. Note that adding additional headers to Kafka messages will further increase the payload size. To see these APIs in action, see Using Java agent trace APIs with Kafka.
- If you're using Kafka Streams, you need to enable a span instrumentation module (refer to the Kafka Streams transaction section). Because a transaction is not recorded per record, accepting distributed trace headers will only work for one record.
The complete process is described below, but at a high level it involves two basic steps: 1) enable tracing via the agent config, and 2) call the Java agent API to instrument transactions on both the producer and consumer sides. A sketch of step 2 follows.
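As an illustration, a minimal sketch of the producer and consumer sides using the agent API (`ConcurrentHashMapHeaders`, `insertDistributedTraceHeaders`, and `acceptDistributedTraceHeaders` come from the `com.newrelic.api.agent` package; the topic, key, and value are placeholders, and step 1 is assumed to be done via `distributed_tracing: enabled: true` in `newrelic.yml`):

```java
import java.nio.charset.StandardCharsets;
import com.newrelic.api.agent.ConcurrentHashMapHeaders;
import com.newrelic.api.agent.HeaderType;
import com.newrelic.api.agent.NewRelic;
import com.newrelic.api.agent.Trace;
import com.newrelic.api.agent.TransportType;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;

public class KafkaTraceExample {

    // Producer side: start a transaction and write the trace context
    // into the outgoing record's headers.
    @Trace(dispatcher = true)
    public void send(Producer<String, String> producer) {
        ProducerRecord<String, String> record =
                new ProducerRecord<>("example-topic", "key", "value"); // placeholders

        ConcurrentHashMapHeaders dtHeaders = ConcurrentHashMapHeaders.build(HeaderType.MESSAGE);
        NewRelic.getAgent().getTransaction().insertDistributedTraceHeaders(dtHeaders);

        // Copy the generated trace headers onto the Kafka record.
        for (String name : dtHeaders.getHeaderNames()) {
            record.headers().add(name, dtHeaders.getHeader(name).getBytes(StandardCharsets.UTF_8));
        }
        producer.send(record);
    }

    // Consumer side: start a transaction and link it to the producer's trace.
    @Trace(dispatcher = true)
    public void process(ConsumerRecord<String, String> record) {
        ConcurrentHashMapHeaders dtHeaders = ConcurrentHashMapHeaders.build(HeaderType.MESSAGE);
        for (Header header : record.headers()) {
            dtHeaders.addHeader(header.key(), new String(header.value(), StandardCharsets.UTF_8));
        }
        NewRelic.getAgent().getTransaction()
                .acceptDistributedTraceHeaders(TransportType.Kafka, dtHeaders);
        // ... process the record ...
    }
}
```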
To collect distributed traces from Kafka: