The Varnish Cache on-host integration collects and sends inventory and metrics from your Varnish Cache environment to New Relic so you can monitor its health. We collect metrics at the instance, lock, memory pool, storage, and backend levels.
Read on to install the integration, and to see what data we collect.
Compatibility and requirements
Our integration is compatible with Varnish Cache 1.0 or higher.
Before installing the integration, make sure that you meet the following requirements:
- Install the infrastructure agent.
- Linux distribution or Windows version compatible with our infrastructure agent.
Quick start
Instrument your Varnish Cache environment quickly and send your telemetry data with guided install. Our guided install creates a customized CLI command for your environment that downloads and installs the New Relic CLI and the infrastructure agent.
Ready to get started? Click one of these buttons to try it out.
Our guided install uses the infrastructure agent to set up the Varnish Cache integration. Not only that, it discovers other applications and log sources running in your environment and then recommends which ones you should instrument.
The guided install works with most setups. But if it doesn't suit your needs, you can find other methods below to get started monitoring your Varnish Cache environment.
Install and activate
To install the Varnish Cache integration:
Additional notes:
- Advanced: It's also possible to install the integration from a tarball file. This gives you full control over the installation and configuration process.
- On-host integrations do not automatically update. For best results, regularly update the integration package and the infrastructure agent.
Configure the integration
An integration's YAML-format configuration is where you can place required login credentials and configure how data is collected. Which options you change depend on your setup and preference.
The configuration file has common settings applicable to all integrations, such as `interval`, `timeout`, and `inventory_source`. To read all about these common settings, refer to our Configuration Format document.
Important
If you are still using our legacy configuration/definition files, please refer to this document for help.
Specific settings related to Varnish are defined using the `env` section of the configuration file. These settings control the connection to your Varnish instance, as well as other security settings and features. The list of valid settings is described in the following section.
Varnish Cache instance settings
The Varnish Cache integration collects both metrics (M) and inventory (I) information. Check the Applies To column below to find which settings can be used for each specific collection:
Setting | Description | Default | Applies To |
---|---|---|---|
INSTANCE_NAME | User defined name to identify data from this instance in New Relic. Required. | N/A | M/I |
PARAMS_CONFIG_FILE | The location of the `varnish.params` configuration file. | N/A | I |
VARNISH_NAME | Name used when executing the `varnishd` daemon with a custom `-n` value. | N/A | M |
METRICS | Set to `true` to enable metrics-only collection. | false | |
INVENTORY | Set to `true` to enable inventory-only collection. | false | |
You set these values in the integration's `varnish-config.yml` configuration file.
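For example, a common pattern is to split collection across two instances, one for metrics and one for inventory. The snippet below is a minimal sketch of that pattern; the instance names and the params file path are illustrative placeholders, not required values:

```yaml
integrations:
  - name: nri-varnish
    env:
      # Metrics-only instance
      METRICS: "true"
      INSTANCE_NAME: varnish_metrics          # hypothetical name, pick your own
    interval: 15s

  - name: nri-varnish
    env:
      # Inventory-only instance
      INVENTORY: "true"
      INSTANCE_NAME: varnish_inventory        # hypothetical name, pick your own
      PARAMS_CONFIG_FILE: /etc/varnish/varnish.params   # assumption: point this at your params file
    inventory_source: config/varnish
```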
The values for these settings can be defined in several ways:
- Adding the value directly in the config file. This is the most common way.
- Replacing the values from environment variables using the `{{}}` notation, as shown in the sketch after this list. This requires infrastructure agent v1.14.0+. Read more here.
- Using secrets management. Use this to protect sensitive information, such as passwords that would be exposed in plain text in the configuration file. For more information, see Secrets management.
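As a sketch of the environment variable notation, the example below reads the instance name from a hypothetical `VARNISH_INSTANCE_NAME` variable defined on the host; the variable name is purely illustrative:

```yaml
integrations:
  - name: nri-varnish
    env:
      # {{}} is replaced by the agent with the value of the host environment
      # variable (requires infrastructure agent v1.14.0+)
      INSTANCE_NAME: "{{VARNISH_INSTANCE_NAME}}"
      METRICS: "true"
```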
Labels/Custom attributes
Environment variables can be used to control config settings, such as your license key, and are then passed through to the infrastructure agent. For instructions on how to use this feature, see Configure the infrastructure agent.
You can further decorate your metrics using labels. Labels allow you to add key/value pair attributes to your metrics, which you can then use to query, filter, or group your metrics on.
Our default sample config file includes examples of labels but, as they are not mandatory, you can remove, modify or add new ones of your choice.
labels:
  env: production
  role: varnish
Example configuration
Example `varnish-config.yml` file configuration:
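The following is a sketch of a complete `varnish-config.yml` that combines the settings and labels described above; the instance name, interval, and params file path are illustrative values, not defaults:

```yaml
integrations:
  - name: nri-varnish
    env:
      INSTANCE_NAME: new_relic                          # any identifier you choose
      PARAMS_CONFIG_FILE: /etc/varnish/varnish.params   # assumption: adjust to your setup
    interval: 15s
    labels:
      env: production
      role: varnish
    inventory_source: config/varnish
```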
For more about the general structure of on-host integration configuration, see Configuration.
Find and use data
To find your integration data in New Relic, go to one.newrelic.com > All capabilities > Infrastructure > Third-party services and select one of the Varnish Cache integration links.
In New Relic, Varnish Cache data is attached to the following event types:
- `VarnishSample`
- `VarnishLockSample`
- `VarnishStorageSample`
- `VarnishMempoolSample`
- `VarnishBackendSample`
For more on how to find and use your data, see Understand integration data.
Metric data
The Varnish Cache integration collects the following metric data attributes. Each metric name is prefixed with a category indicator and a period, such as `bans.` or `main.`.
Tip
A number of metrics are calculated as rates (per second) instead of totals as the metric names might suggest. For more details on which metrics are calculated as rates, refer to the spec.csv file.
Varnish sample metrics
These attributes can be found by querying the `VarnishSample` event type.
Metric | Description |
---|---|
| Number of times the maximum connection has been reached. |
| Number of failed connections to the backend. |
| Number of backend connections that have been recycled. |
| Number of backend connections that have been retried. |
| Number of backend connections reuses. |
| Number of successful backend connections. |
| Number of backend connections that were not attempted due to 'unhealthy' backend status. |
| Total number of backend fetches initiated. |
| Total number of backend connection requests made. |
| Counter of bans added to ban list. |
| Number of bans marked 'completed'. |
| Number of objects killed by bans for cutoff (lurker). |
| Counter of bans deleted from ban list. |
| Count of bans replaced by later identical bans. |
| Extra bytes in persisted ban lists due to fragmentation. |
| Number of objects killed by bans during object lookup. |
| Count of how many tests and objects have been tested against each other during lookup. |
| Number of times the ban-lurker had to wait for lookups. |
| Number of objects killed by the ban-lurker. |
| Count of how many bans and objects have been tested against each other by the ban-lurker. |
| Count of how many tests and objects have been tested against each other by the ban-lurker. |
| Number of bans using |
| Bytes used by the persisted ban lists. |
| Number of bans which use |
| Count of how many bans and objects have been tested against each other during hash lookup. |
| Count of cache hits with grace. A cache hit with grace is a cache hit where the object is expired. These hits are also included in the |
| Number of times an object has been delivered to a client without fetching it from a backend server. |
| Number of times the object was fetched from the backend before delivering it to the client. |
| Number of times a hit object was returned for a miss response. |
| Number of times a hit object was returned for a pass response. |
| Edge Side Includes (ESI) parsing errors (unlock). |
| Edge Side Includes (ESI) parse warnings (unlock). |
| The |
| The |
| The |
| The |
| The |
| The |
| The |
| The |
| The |
| The |
| The |
| Number of critical bit tree-based hash (HCB) inserts. |
| Number of HCB lookups with lock. |
| Number of HCB lookups without lock. |
| Number of times more storage space was needed, but limit was reached. |
| Number of move operations done on the LRU list. |
| Number of least recently used (LRU) objects forcefully evicted from storage to make room for a new object. |
| Number of backends. |
| Count of bans. |
| Number of requests killed after sleep on busy objhdr. |
| Number of requests sent to sleep on busy objhdr. |
| Number of requests woken after sleep on busy objhdr. |
| Number of expired objects. |
| Number of objects mailed to expiry thread. |
| Number of objects received by expiry thread. |
| Number of gunzip operations. |
| Number of test gunzip operations. |
| Number of gzip operations. |
| Number of objectcore structs made. |
| Number of objecthead structs made. |
| Number of object structs made. |
| Total pass-ed requests seen. |
| Total pipe sessions seen. |
| Number of thread pools. |
| Number of purged objects. |
| Number of purge operations executed. |
| Number of requests dropped. |
| Total number of sessions seen. |
| Length of session queue waiting for threads. |
| Number of times per-thread statistics were summed into the global counters. |
| Total synthetic responses made. |
| Total number of threads. |
| Total number of threads created in all pools. |
| Total number of threads destroyed in all pools. |
| Number of times creating a thread failed. |
| Number of times more threads were needed, but limit was reached in a thread pool. |
| Number of unresurrected objects. |
| The child process uptime, in milliseconds. |
| Number of Varnish Configuration Languages (VCL) available. |
| Number of discarded VCLs. |
| Number of VCL failures. |
| Number of loaded VCLs in total. |
| Number of loaded Varnish modules (VMOD). |
| Number of times the child process has died due to signals. |
| Number of times the child process has produced core dumps. |
| Number of times the child process has been cleanly stopped. |
| Number of times the management process has caught a child panic. |
| Number of times the child process has been started. |
| Number of times the child process has been cleanly stopped. |
| The management process uptime, in milliseconds. |
| Number of client requests received, subject to 400 errors. |
| Number of client requests received, subject to 417 errors. |
| Number of HTTP header overflows. |
| Total number of bytes forwarded from clients in pipe sessions. |
| Total number of bytes forwarded to clients in pipe sessions. |
| Total request bytes received for piped sessions. |
| Total request body transmitted, in bytes. |
| Total request headers transmitted, in bytes. |
| Number of good client requests received. |
| Total response body transmitted, in bytes. |
| Total response headers transmitted, in bytes. |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of session closes with the error |
| Total number of sessions closed. |
| Total number of sessions closed with errors. |
| Number of sessions dropped for thread. |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of times the |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of sessions queued for thread. |
| Session Read Ahead. |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of session closes with the error |
| Number of session closes with the error |
| Count of sessions successfully accepted. |
| Count of sessions silently dropped due to lack of worker thread. |
| Count of failures to accept TCP connection. |
| Number of shared memory (SHM) MTX contentions. |
| Number of SHM cycles through buffer. |
| Number of SHM flushes due to overflow. |
| Number of SHM records. |
| Number of SHM writes. |
| Number of times we ran out of space in |
| Number of times we ran out of space in |
| Delivery failed due to insufficient workspace. |
| Number of times we ran out of space in |
| Number of times we ran out of space in |
Varnish lock sample metrics
These attributes can be found by querying the `VarnishLockSample` event type.
Metric | Description |
---|---|
| Count of created locks. |
| Count of destroyed locks. |
| Count of lock operations. |
Varnish storage sample metrics
These attributes can be found by querying the `VarnishStorageSample` event type.
Metric | Description |
---|---|
| Number of times the storage has failed to provide a storage segment. |
| Number of total bytes allocated by this storage. |
| Number of storage allocations outstanding. |
| Number of times the storage has been asked to provide a storage segment. |
| Number of bytes left in the storage. |
| Number of total bytes returned to this storage. |
| Number of bytes allocated from the storage. |
Varnish mempool sample metrics
These attributes can be found by querying the `VarnishMempoolSample` event type.
Metric | Description |
---|---|
| Allocated size of memory pool, in bytes. |
| Memory pool allocations. |
| Number of memory pools free. |
| Number of memory pools in use. |
| Count in memory pool. |
| Pool ran dry. |
| Recycled from pool. |
| Request size of memory pool, in bytes. |
| Too many for pool. |
| Timed out from pool. |
| Too small to recycle. |
Varnish backend sample metrics
These attributes can be found by querying the `VarnishBackendSample` event type.
Metric | Description |
---|---|
| Fetches not attempted due to backend being busy. |
| Number of concurrent connections to the backend. |
| Number of backend connections failed. |
| Number of backend connection opens not attempted. |
| Happy health probes. |
| Fetches not attempted due to backend being unhealthy. |
| Total request bytes sent for piped sessions. |
| Total number of bytes forwarded from backend in pipe sessions. |
| Total number of bytes forwarded to backend in pipe sessions. |
| Total backend request body bytes sent. |
| Total backend request header bytes sent. |
| Number of backend requests sent. |
| Total backend response body bytes received. |
| Total backend response header bytes received. |
Inventory data
The Varnish Cache integration captures the configuration parameters. It parses the `varnish.params` configuration file for all parameters that are active.
The data is available on the Inventory page, under the config/varnish source. For more about inventory data, see Understand integration data.
Check the source code
This integration is open source software. That means you can browse its source code and send improvements, or create your own fork and build it.