Environmental variables allow you to fine-tune the synthetics job manager configuration to meet your specific environmental and functional needs.
The variables are provided at startup using the -e, --env argument.
The following table shows all the environment variables that synthetics job manager supports. PRIVATE_LOCATION_KEY is required, and all other variables are optional.
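For example, a launch command might pass the required key plus a few optional overrides (a minimal sketch; the image name and tag shown are assumptions, so match them to the image you actually deploy, and the ellipsis stands for your other flags):
bash
$
docker run ... -e PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY -e LOG_LEVEL=DEBUG -e HEAVYWEIGHT_WORKERS=2 newrelic/synthetics-job-manager:latest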
Name
Description
PRIVATE_LOCATION_KEY
Required. Private location key, as found on the Private Location entity list.
DOCKER_API_VERSION
Format: "vX.Y" API version to be used with the given Docker service.
Default: v1.35.
DOCKER_HOST
Points the synthetics job manager to a given DOCKER_HOST. If absent, the default value is /var/run/docker.sock.
HORDE_API_ENDPOINT
For US-based accounts, the endpoint is: https://synthetics-horde.nr-data.net.
For EU-based accounts, the endpoint is: https://synthetics-horde.eu01.nr-data.net/
Ensure your synthetics job manager can connect to the appropriate endpoint in order to serve your monitor.
DOCKER_REGISTRY
The Docker Registry domain where the runtime images are hosted. Use this to override docker.io as the default.
DOCKER_REPOSITORY
The Docker repository or organization where the runtime images are hosted. Use this to override newrelic as the default.
HORDE_API_PROXY_HOST
Proxy server host used for Horde communication. Format: "localhost".
HORDE_API_PROXY_PORT
Proxy server port used for Horde communication. Format: 8888.
HORDE_API_PROXY_USERNAME
Proxy server username used for Horde communication. Format: "username".
HORDE_API_PROXY_PW
Proxy server password used for Horde communication. Format: "password".
HORDE_API_PROXY_ACCEPT_SELF_SIGNED_CERT
Accept self-signed certificates for the proxy server connection used for Horde communication. Acceptable values: true
CHECK_TIMEOUT
The maximum number of seconds that your monitor checks are allowed to run. This value must be an integer greater than 0 and no greater than 900 seconds (that is, from 1 second to 15 minutes).
Default: 180 seconds
LOG_LEVEL
Default: INFO.
Additional options: WARN, ERROR, DEBUG
HEAVYWEIGHT_WORKERS
The number of concurrent heavyweight jobs (Browser/Scripted Browser and Scripted API) that can run at one time.
Default: Available CPUs - 1.
DESIRED_RUNTIMES
An array that may be used to run specific runtime images. Format: ['newrelic/synthetics-ping-runtime:latest','newrelic/synthetics-node-api-runtime:latest','newrelic/synthetics-node-browser-runtime:latest']
Default: all latest runtimes.
VSE_PASSPHRASE
If set, enables verified script execution and uses this value as a passphrase.
USER_DEFINED_VARIABLES
A locally hosted set of user defined key value pairs.
ENABLE_WASM
If set, enables WebAssembly for the Node.js browser runtime. To use WebAssembly, your synthetics job manager version must be release-367 or higher and your Node.js browser runtime version must be 2.3.21 or higher.
The variables are provided at startup using the -e, --env argument.
The following table displays all the environment variables that synthetics job manager supports. PRIVATE_LOCATION_KEY is required, and all other variables are optional. To run the synthetics job manager in a Podman environment, the minimum version should be release-418 or higher.
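For example, a minimal launch might look like the following (a sketch; the ellipses stand for the rest of your podman run flags and image reference, which depend on your Podman setup):
bash
$
podman run ... -e PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY -e PODMAN_API_SERVICE_PORT=8000 -e LOG_LEVEL=DEBUG ...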
Name
Description
PRIVATE_LOCATION_KEY
Required. Private location key, as found on the Private Location entity list.
HORDE_API_ENDPOINT
For US-based accounts, the endpoint is: https://synthetics-horde.nr-data.net.
For EU-based accounts, the endpoint is: https://synthetics-horde.eu01.nr-data.net/
Ensure your synthetics job manager can connect to the appropriate endpoint in order to serve your monitor.
PODMAN_API_SERVICE_HOST
The host entry added to the Pod created where the SJM is going to run. Use this to override podman.service as the default.
PODMAN_API_SERVICE_PORT
The port at which the Podman LibPod RESTful API service is running in the instance. Use this to override 8000 as the default.
PODMAN_API_VERSION
The specific version of the Podman LibPod RESTful API being used. Use this to override v5.0.0 as the default.
PODMAN_POD_NAME
The name of the pod in which the SJM container is run. Use this to override SYNTHETICS as the default.
DOCKER_REGISTRY
The Docker Registry domain where the runtime images are hosted. Use this to override docker.io as the default.
DOCKER_REPOSITORY
The Docker repository or organization where the runtime images are hosted. Use this to override newrelic as the default.
HORDE_API_PROXY_HOST
Proxy server host used for Horde communication. Format: "localhost".
HORDE_API_PROXY_PORT
Proxy server port used for Horde communication. Format: 8888.
HORDE_API_PROXY_USERNAME
Proxy server username used for Horde communication. Format: "username".
HORDE_API_PROXY_PW
Proxy server password used for Horde communication. Format: "password".
HORDE_API_PROXY_ACCEPT_SELF_SIGNED_CERT
Accept self-signed certificates for the proxy server connection used for Horde communication. Acceptable values: true
CHECK_TIMEOUT
The maximum number of seconds that your monitor checks are allowed to run. This value must be an integer greater than 0 and no greater than 900 seconds (that is, from 1 second to 15 minutes).
Default: 180 seconds
LOG_LEVEL
Default: INFO.
Additional options: WARN, ERROR, DEBUG
HEAVYWEIGHT_WORKERS
The number of concurrent heavyweight jobs (Browser/Scripted Browser and Scripted API) that can run at one time.
Default: Available CPUs - 1.
DESIRED_RUNTIMES
An array that may be used to run specific runtime images. Format: ['newrelic/synthetics-ping-runtime:latest','newrelic/synthetics-node-api-runtime:latest','newrelic/synthetics-node-browser-runtime:latest']
Default: all latest runtimes.
VSE_PASSPHRASE
If set, enables verified script execution and uses this value as a passphrase.
USER_DEFINED_VARIABLES
A locally hosted set of user defined key value pairs.
ENABLE_WASM
If set, enables WebAssembly for the Node.js browser runtime. To use WebAssembly, your synthetics job manager version must be release-367 or higher and your Node.js browser runtime version must be 2.3.21 or higher.
The variables are provided at startup using the --set argument.
The following list shows all the environment variables that synthetics job manager supports. synthetics.privateLocationKey is required, and all other variables are optional.
A number of additional advanced settings are available and fully documented in our Helm chart README
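For example, a minimal install might look like the following (a sketch; the repository alias and chart name newrelic/synthetics-job-manager are assumptions, so confirm them against the Helm chart README):
bash
$
helm upgrade --install YOUR_RELEASE_NAME newrelic/synthetics-job-manager -n newrelic --create-namespace --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY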
Name
Description
synthetics.privateLocationKey
Required if synthetics.privateLocationKeySecretName is not set. Private location key of the private location, as found on the private location web page.
synthetics.privateLocationKeySecretName
Required if synthetics.privateLocationKey is not set. Name of the Kubernetes secret that contains the key privateLocationKey, which contains the authentication key associated with your synthetics private location.
imagePullSecrets
The name of the secret object used to pull an image from a specified container registry.
fullnameOverride
Name override used for your Deployment, replacing the default.
appVersionOverride
Release version of synthetics-job-manager to use instead of the version specified in chart.yml.
synthetics.logLevel
Default: INFO.
Additional options: WARN, ERROR
synthetics.hordeApiEndpoint
For US-based accounts, the endpoint is: https://synthetics-horde.nr-data.net.
For EU-based accounts, the endpoint is: https://synthetics-horde.eu01.nr-data.net/
Ensure your synthetics job manager can connect to the appropriate endpoint in order to serve your monitor.
synthetics.minionDockerRunnerRegistryEndpoint
The Docker Registry and Organization where the Minion Runner image is hosted. Use this to override quay.io/newrelic as the default (for example, docker.io/newrelic)
synthetics.vsePassphrase
If set, it enables verified script execution, and uses this value as a passphrase.
synthetics.vsePassphraseSecretName
If set, enables verified script execution and uses this value to retrieve the passphrase from a Kubernetes secret with a key called vsePassphrase.
synthetics.enableWasm
If set, enables WebAssembly for the Node.js browser runtime. To use WebAssembly, your synthetics job manager version must be release-367 or higher and your Node.js browser runtime version must be 2.3.21 or higher.
synthetics.apiProxyHost
Proxy server used for Horde communication. Format: "host".
synthetics.apiProxyPort
Proxy server port used for Horde communication. Format: port.
synthetics.hordeApiProxySelfSignedCert
Accept self signed certificates when using a proxy server for Horde communication. Acceptable values: true.
synthetics.hordeApiProxyUsername
Proxy server username for Horde communication. Format: "username"
synthetics.hordeApiProxyPw
Proxy server password for Horde communication. Format: "password".
synthetics.userDefinedVariables.userDefinedJson
A JSON string of user-defined variables. The user may access these variables in their script. Format: '{"key":"value","key2":"value2"}'.
synthetics.userDefinedVariables.userDefinedFile
A local path to a JSON file containing user-defined variables. This is passed in via --set-file and cannot be set in the Values file.
synthetics.userDefinedVariables.userDefinedPath
A path on the user's provided PersistentVolume to the user_defined_variables.json file. User must provide a PersistentVolume or PersistentVolumeClaim if this variable is populated.
synthetics.persistence.existingClaimName
If mounting a volume, the user may provide a name for a PersistentVolumeClaim that already exists in the cluster. Presumes the existence of a corresponding PersistentVolume.
synthetics.persistence.existingVolumeName
If mounting a volume and not providing a PersistentVolumeClaim, the user must at minimum provide a PersistentVolume name. Helm will generate a PersistentVolumeClaim.
synthetics.persistence.storageClass
The name of the StorageClass for the generated PersistentVolumeClaim. This should match the StorageClassName on the existing PV. If not provided, Kubernetes will use the default storage class if present.
synthetics.persistence.size
The size of the volume for the generated PersistentVolumeClaim. Format: 10Gi. Default 2Gi.
global.checkTimeout
The maximum number of seconds that your monitor checks are allowed to run. This value must be an integer greater than 0 and no greater than 900 seconds (that is, from 1 second to 15 minutes).
Default: 180 seconds
image.repository
The container to pull.
Default: docker.io/newrelic/synthetics-job-runner
image.pullPolicy
The pull policy.
Default: IfNotPresent
podSecurityContext
Set a custom security context for the synthetics-job-manager pod.
ping-runtime.enabled
Whether or not the persistent ping runtime should be deployed. This can be disabled if you do not use ping monitors.
Default: true
ping-runtime.replicaCount
The number of ping runtime containers to deploy. Increase the replicaCount to scale the deployment based on your ping monitoring needs.
node-api-runtime.enabled
Whether or not the Node.js API runtime should be deployed. This can be disabled if you do not use scripted API monitors.
Default: true
node-api-runtime.parallelism
The number of Node.js API runtime CronJobs to deploy. The maximum number of concurrent Node.js API jobs that will execute at any time. Additional details.
Default: 1
node-api-runtime.completions
The number of Node.js API runtime CronJobs to complete per minute. Increase this setting along with parallelism to improve throughput. This should be increased any time parallelism is increased, and completions should always be at least greater than or equal to parallelism. Increase this setting if you notice periods of time with no API runtime jobs running. Additional details.
Default: 6
node-api-runtime.image.repository
The container image to pull for the Node.js API runtime.
node-api-runtime.image.pullPolicy
The pull policy for the Node.js API runtime container.
Default: IfNotPresent
node-browser-runtime.enabled
Whether or not the Node.js browser runtime should be deployed. This can be disabled if you do not use simple or scripted browser monitors.
Default: true
node-browser-runtime.parallelism
The number of Chrome browser runtime CronJobs to deploy. The maximum number of concurrent Chrome browser jobs that will execute at any time. Additional details.
Default: 1
node-browser-runtime.completions
The number of Chrome browser runtime CronJobs to complete per minute. Increase this setting along with parallelism to improve throughput. This should be increased any time parallelism is increased and completions should always be at least greater than or equal to parallelism. Increase this setting if you notice periods of time with no browser runtime jobs running. Additional details.
Default: 6
node-browser-runtime.image.repository
The container image to pull for the Node.js browser runtime.
node-browser-runtime.image.pullPolicy
The pull policy for the Node.js browser runtime container.
Default: IfNotPresent
The variables are provided at startup using the --set argument.
The following list shows all the environment variables that synthetics job manager supports. synthetics.privateLocationKey is required, and all other variables are optional.
A number of additional advanced settings are available and fully documented in our Helm chart README
Name
Description
synthetics.privateLocationKey
Required if synthetics.privateLocationKeySecretName is not set. Private location key of the private location, as found on the private location web page.
synthetics.privateLocationKeySecretName
Required if synthetics.privateLocationKey is not set. Name of the Kubernetes secret that contains the key privateLocationKey, which contains the authentication key associated with your synthetics private location.
imagePullSecrets
The name of the secret object used to pull an image from a specified container registry.
fullnameOverride
Name override used for your Deployment, replacing the default.
appVersionOverride
Release version of synthetics-job-manager to use instead of the version specified in chart.yml.
synthetics.logLevel
Default: INFO.
Additional options: WARN, ERROR
synthetics.hordeApiEndpoint
For US-based accounts, the endpoint is: https://synthetics-horde.nr-data.net.
For EU-based accounts, the endpoint is: https://synthetics-horde.eu01.nr-data.net/
Ensure your synthetics job manager can connect to the appropriate endpoint in order to serve your monitor.
synthetics.vsePassphrase
If set, it enables verified script execution, and uses this value as a passphrase.
synthetics.vsePassphraseSecretName
If set, enables verified script execution and uses this value to retrieve the passphrase from a Kubernetes secret with a key called vsePassphrase.
synthetics.enableWasm
If set, enables WebAssembly for the Node.js browser runtime. To use WebAssembly, your synthetics job manager version must be release-367 or higher and your Node.js browser runtime version must be 2.3.21 or higher.
synthetics.apiProxyHost
Proxy server used for Horde communication. Format: "host".
synthetics.apiProxyPort
Proxy server port used for Horde communication. Format: port.
synthetics.hordeApiProxySelfSignedCert
Accept self signed certificates when using a proxy server for Horde communication. Acceptable values: true.
synthetics.hordeApiProxyUsername
Proxy server username for Horde communication. Format: "username"
synthetics.hordeApiProxyPw
Proxy server password for Horde communication. Format: "password".
synthetics.userDefinedVariables.userDefinedJson
A JSON string of user-defined variables. The user may access these variables in their script. Format: '{"key":"value","key2":"value2"}'.
synthetics.userDefinedVariables.userDefinedFile
A local path to a JSON file containing user-defined variables. This is passed in via --set-file and cannot be set in the Values file.
synthetics.userDefinedVariables.userDefinedPath
A path on the user's provided PersistentVolume to the user_defined_variables.json file. User must provide a PersistentVolume or PersistentVolumeClaim if this variable is populated.
global.persistence.existingClaimName
If mounting a volume, the user may provide a name for a PersistentVolumeClaim that already exists in the cluster. Presumes the existence of a corresponding PersistentVolume.
global.persistence.existingVolumeName
If mounting a volume and not providing a PersistentVolumeClaim, the user must at minimum provide a PersistentVolume name. Helm will generate a PersistentVolumeClaim.
global.persistence.storageClass
The name of the StorageClass for the generated PersistentVolumeClaim. This should match the StorageClassName on the existing PV. If not provided, Kubernetes will use the default storage class if present.
global.persistence.size
The size of the volume for the generated PersistentVolumeClaim. Format: 10Gi. Default 2Gi.
global.checkTimeout
The maximum number of seconds that your monitor checks are allowed to run. This value must be an integer greater than 0 and no greater than 900 seconds (that is, from 1 second to 15 minutes).
Default: 180 seconds
image.repository
The container to pull.
Default: docker.io/newrelic/synthetics-job-runner
image.pullPolicy
The pull policy.
Default: IfNotPresent
podSecurityContext
Set a custom security context for the synthetics-job-manager pod.
ping-runtime.enabled
Whether or not the persistent ping runtime should be deployed. This can be disabled if you do not use ping monitors.
Default: true
ping-runtime.replicaCount
The number of ping runtime containers to deploy. Increase the replicaCount to scale the deployment based on your ping monitoring needs.
node-api-runtime.enabled
Whether or not the Node.js API runtime should be deployed. This can be disabled if you do not use scripted API monitors.
Default: true
node-api-runtime.parallelism
The number of Node.js API runtime CronJobs to deploy. The maximum number of concurrent Node.js API jobs that will execute at any time. Additional details.
Default: 1
node-api-runtime.completions
The number of Node.js API runtime CronJobs to complete per minute. Increase this setting along with parallelism to improve throughput. This should be increased any time parallelism is increased, and completions should always be at least greater than or equal to parallelism. Increase this setting if you notice periods of time with no API runtime jobs running. Additional details.
Default: 6
node-api-runtime.image.repository
The container image to pull for the Node.js API runtime.
node-api-runtime.image.pullPolicy
The pull policy for the Node.js API runtime container.
Default: IfNotPresent
node-browser-runtime.enabled
Whether or not the Node.js browser runtime should be deployed. This can be disabled if you do not use simple or scripted browser monitors.
Default: true
node-browser-runtime.parallelism
The number of Chrome browser runtime CronJobs to deploy. The maximum number of concurrent Chrome browser jobs that will execute at any time. Additional details.
Default: 1
node-browser-runtime.completions
The number of Chrome browser runtime CronJobs to complete per minute. Increase this setting along with parallelism to improve throughput. This should be increased any time parallelism is increased and completions should always be at least greater than or equal to parallelism. Increase this setting if you notice periods of time with no browser runtime jobs running. Additional details.
Default: 6
node-browser-runtime.image.repository
The container image to pull for the Node.js browser runtime.
node-browser-runtime.image.pullPolicy
The pull policy for the Node.js browser runtime container.
Default: IfNotPresent
User-defined variables for scripted monitors
Private synthetics job managers let you configure environment variables for scripted monitors. These variables are managed locally on the SJM and can be accessed via $env.USER_DEFINED_VARIABLES. You can set user-defined variables in two ways: mount a JSON file, or supply an environment variable to the SJM at launch. If both are provided, the SJM uses only the values provided by the environment.
The user may create a JSON-formatted file and mount the volume where the file is located to a specified target path in the SJM container.
The file must have read permissions and contain a JSON-formatted map. Example user-defined variables file:
{
"KEY":"VALUE",
"user_name":"MINION",
"my_password":"PASSW0RD123",
"my_URL":"https://newrelic.com/",
"ETC":"ETC"
}
Place the file in the source directory on the host. The SJM expects the file to be named user_defined_variables.json.
Docker example:
The expected target directory is: /var/lib/newrelic/synthetics/variables/
bash
$
docker run ... -v /variables:/var/lib/newrelic/synthetics/variables:rw ...
Podman example:
If you use SELinux, append :z or :Z to the volume mount. For more information, refer to the Podman documentation. The expected target directory is: /var/lib/newrelic/synthetics/variables/
bash
$
podman run ... -v /variables:/var/lib/newrelic/synthetics/variables:rw,z ...
Kubernetes example:
The user has two options when providing a file to the SJM pod in Kubernetes. They may:
Pass in a local file.
Provide a PersistentVolume that includes the user_defined_variables.json.
Pass in a local file
This option creates a ConfigMap Kubernetes resource and mounts that to the SJM pod.
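For example, a local user_defined_variables.json file can be supplied at install time with --set-file (a sketch; the release and chart names are placeholders):
bash
$
helm upgrade --install YOUR_RELEASE_NAME newrelic/synthetics-job-manager -n newrelic --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY --set-file synthetics.userDefinedVariables.userDefinedFile=user_defined_variables.json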
Provide a PersistentVolume
This option requires the user to provide a PersistentVolume that includes the user_defined_variables.json file, or a PersistentVolumeClaim to the same. For more details on Helm chart installation using a PersistentVolume, follow the instructions at permanent data storage.
Once the user has prepared a PersistentVolume as described below, launch the SJM, setting the path where the user_defined_variables.json file is located, and set any other synthetics.persistence variables as necessary.
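For example (a sketch; the release name, chart reference, claim name, and subpath are placeholders):
bash
$
helm upgrade --install YOUR_RELEASE_NAME newrelic/synthetics-job-manager -n newrelic --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY --set synthetics.userDefinedVariables.userDefinedPath=user_defined_variables.json --set synthetics.persistence.existingClaimName=YOUR_PVC_NAME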
Accessing user-defined environment variables from scripts
To reference a configured user-defined environment variable, use the reserved $env.USER_DEFINED_VARIABLES followed by the name of a given variable with dot notation (for example, $env.USER_DEFINED_VARIABLES.MY_VARIABLE).
Caution
User-defined environment variables are not sanitized from logs. Consider using the secure credentials feature for sensitive information.
Custom node modules
Custom node modules are provided in both CPM and SJM. They allow you to create a customized set of node modules and use them in scripted monitors (scripted API and scripted browser) for synthetic monitoring.
Set up your custom modules directory
Create a directory and, in its root folder, add a package.json file that follows the official npm guidelines. The SJM will install any dependencies listed in the package.json's dependencies field. These dependencies will be available when running monitors on the private synthetics job manager. See an example of this below.
Example
In this example, a custom module directory is used with the following structure:
/example-custom-modules-dir/
├── counter
│ ├── index.js
│ └── package.json
└── package.json ⇦ the only mandatory file
The package.json defines dependencies as both a local module (for example, counter) and any hosted modules (for example, smallest version 1.0.1):
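A minimal package.json along those lines might look like this (a sketch; the name, version, and description fields are placeholders):
bash
$
cat <<EOF > /example-custom-modules-dir/package.json
{
  "name": "custom-modules",
  "version": "1.0.0",
  "description": "example custom modules directory",
  "dependencies": {
    "smallest": "1.0.1",
    "counter": "file:./counter"
  }
}
EOF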
Add your custom modules directory to the SJM for Docker, Podman, or Kubernetes
For Docker, launch SJM mounting the directory at /var/lib/newrelic/synthetics/modules. For example:
bash
$
docker run ... -v /example-custom-modules-dir:/var/lib/newrelic/synthetics/modules:rw ...
For Podman, launch the SJM mounting the directory at /var/lib/newrelic/synthetics/modules. If you use SELinux, append :z or :Z to the volume mount. For more information, refer to the Podman documentation. For example:
bash
$
podman run ... -v /example-custom-modules-dir:/var/lib/newrelic/synthetics/modules:rw,z ...
For Kubernetes, the directory at /var/lib/newrelic/synthetics/modules needs to exist on a PV prior to launching the SJM with custom modules enabled.
Tip
The PV access mode should be ReadWriteMany if you need to share storage across multiple pods.
One method is to create a pod that mounts the PV just for the purpose of copying your custom modules directory to the PV. The following example uses Amazon EFS with Amazon EKS:
Create the namespace, persistent volume, and persistent volume claim
Make sure you've already set up your EFS filesystem and installed the EFS CSI driver on your cluster. You will also need your EFS filesystem ID for the PV's spec.csi.volumeHandle.
bash
$
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: newrelic
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: custom-modules-pvc
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <your-efs-filesystem-id>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: custom-modules-pvc
  namespace: newrelic
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
EOF
Switch to the newrelic namespace in your ~/.kube/config.
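One way to stage the files is to create a temporary pod that mounts the PersistentVolumeClaim and then copy your custom modules directory into it (a sketch; the busybox image and sleep command are placeholders):
bash
$
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: mount-custom-mods-pod
  namespace: newrelic
spec:
  containers:
    - name: mount-custom-mods
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: custom-modules
          mountPath: /var/lib/newrelic/synthetics/modules
  volumes:
    - name: custom-modules
      persistentVolumeClaim:
        claimName: custom-modules-pvc
EOF
$
kubectl cp /example-custom-modules-dir newrelic/mount-custom-mods-pod:/var/lib/newrelic/synthetics/modules/custom-modules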
Check that /var/lib/newrelic/synthetics/modules/custom-modules/package.json exists on the PV.
bash
$
kubectl exec -it mount-custom-mods-pod -- bash
root@mount-custom-mods-pod:/# cd /var/lib/newrelic/synthetics/modules/
root@mount-custom-mods-pod:/var/lib/newrelic/synthetics/modules# ls -l
total 4
drwxr-xr-x 2 root root 6144 Jun 29 03:49 custom-modules
root@mount-custom-mods-pod:/var/lib/newrelic/synthetics/modules# ls -l custom-modules/
total 4
-rw-r--r-- 1 501 staff 299 Jun 29 03:49 package.json
Launch the SJM with custom modules feature enabled
Set values for persistence.existingClaimName and customNodeModules.customNodeModulesPath either in the command line or in a YAML file during installation. The customNodeModules.customNodeModulesPath value should specify the subpath on the Persistent Volume where your custom modules files exist. For example:
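A sketch based on the EFS setup above (confirm the exact value paths against the Helm chart README):
bash
$
helm upgrade --install YOUR_RELEASE_NAME newrelic/synthetics-job-manager -n newrelic --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY --set synthetics.persistence.existingClaimName=custom-modules-pvc --set customNodeModules.customNodeModulesPath=custom-modules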
To check if the modules were installed correctly or if any errors occurred, look for the following lines in the synthetics-job-manager container or pod logs:
2024-06-29 03:51:28,407{UTC} [main] INFO c.n.s.j.p.options.CustomModules - Detected mounted path for custom node modules
2024-06-29 03:51:28,408{UTC} [main] INFO c.n.s.j.p.options.CustomModules - Validating permission for custom node modules package.json file
2024-06-29 03:51:28,409{UTC} [main] INFO c.n.s.j.p.options.CustomModules - Installing custom node modules...
Now you can add "require('smallest');" into the script of monitors you send to this private location.
Change package.json for custom modules
In addition to local and hosted modules, you can utilize Node.js modules as well. To update the custom modules used by your SJM, make changes to the package.json file, and restart the SJM. During the reboot process, the SJM will recognize the configuration change and automatically perform cleanup and re-installation operations to ensure the updated modules are applied.
Caution
Local modules: While your package.json can include any local module, these modules must reside inside the tree under your custom module directory. If stored outside the tree, the initialization process will fail and you will see an error message in the docker logs after launching SJM.
Permanent data storage
Users may want to use permanent data storage to provide the user_defined_variables.json file or support custom node modules.
Docker
To set permanent data storage on Docker:
Create a directory on the host where you are launching the Job Manager. This is your source directory.
Launch the Job Manager, mounting the source directory to the target directory /var/lib/newrelic/synthetics.
Example:
bash
$
docker run ... -v /sjm-volume:/var/lib/newrelic/synthetics:rw ...
Podman
To set permanent data storage on Podman:
Create a directory on the host where you are launching the Job Manager. This is your source directory.
Launch the Job Manager, mounting the source directory to the target directory /var/lib/newrelic/synthetics.
Example:
bash
$
podman run ... -v /sjm-volume:/var/lib/newrelic/synthetics:rw,z ...
Kubernetes
To set permanent data storage on Kubernetes, the user has two options:
Provide an existing PersistentVolumeClaim (PVC) for an existing PersistentVolume (PV), setting the synthetics.persistence.existingClaimName configuration value.
Example:
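A sketch of what this could look like (release, chart, and claim names are placeholders):
bash
$
helm upgrade --install YOUR_RELEASE_NAME newrelic/synthetics-job-manager -n newrelic --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY --set synthetics.persistence.existingClaimName=YOUR_EXISTING_PVC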
Provide an existing PersistentVolume (PV) name, setting the synthetics.persistence.existingVolumeName configuration value. Helm will generate a PVC for the user.
The user may optionally set the following values as well:
synthetics.persistence.storageClass: The storage class of the existing PV. If not provided, Kubernetes will use the default storage class.
synthetics.persistence.size: The size for the claim. If not set, the default is currently 2Gi.
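For the second option, a sketch might combine the PV name with the optional values (all names shown are placeholders):
bash
$
helm upgrade --install YOUR_RELEASE_NAME newrelic/synthetics-job-manager -n newrelic --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY --set synthetics.persistence.existingVolumeName=YOUR_EXISTING_PV --set synthetics.persistence.storageClass=YOUR_STORAGE_CLASS --set synthetics.persistence.size=10Gi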
To ensure your private location runs efficiently, you must provision enough CPU resources on your host to handle your monitoring workload. Many factors impact sizing, but you can quickly estimate your needs. Each heavyweight job (that is, a simple browser, scripted browser, or scripted API check) needs 1 CPU core while it runs. Below are two formulas to help you calculate the number of cores you need, whether you're diagnosing a current setup or planning for a future one.
Formula 1: Diagnosing an Existing Location
If your current private location is struggling to keep up and you suspect jobs are queuing, use this formula to find out how many cores you actually need. It's based on the observable performance of your system.
C_req = (R_proc + R_growth) × D_avg,m
C_req = Required CPU cores.
R_proc = The rate of heavyweight jobs being processed per minute.
R_growth = The rate your jobManagerHeavyweightJobs queue is growing per minute.
D_avg,m = The average duration of heavyweight jobs in minutes.
This formula calculates your true job arrival rate by adding the jobs your system is processing to the jobs that are piling up in the queue. Multiplying this total load by the average job duration tells you exactly how many cores you need to clear all the work without queuing.
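For example (illustrative numbers only): if the location is processing 10 heavyweight jobs per minute (R_proc = 10), the queue is growing by 2 jobs per minute (R_growth = 2), and jobs average 2 minutes (D_avg,m = 2), then C_req = (10 + 2) × 2 = 24 cores.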
Formula 2: Forecasting a New or Future Location
If you're setting up a new private location or planning to add more monitors, use this formula to forecast your needs ahead of time.
C_req = (N_mon × D_avg,m) / P_avg,m
C_req = Required CPU cores.
N_mon = The total number of heavyweight monitors you plan to run.
D_avg,m = The average duration of a heavyweight job in minutes.
P_avg,m = The average period for heavyweight monitors in minutes (for example, a monitor that runs every 5 minutes has P_avg,m = 5).
This calculates your expected workload from first principles: how many monitors you have, how often they run, and how long they take.
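For example (illustrative numbers only): 50 heavyweight monitors (N_mon = 50) that each run every 5 minutes (P_avg,m = 5) and average 2 minutes per job (D_avg,m = 2) need C_req = (50 × 2) / 5 = 20 cores.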
Important sizing factors
When using these formulas, remember to account for these factors:
Job duration (D_avg,m): Your average should include jobs that time out (often ~3 minutes), as these hold a core for their entire duration.
Job failures and retries: When a monitor fails, it's automatically retried. These retries are additional jobs that add to the total load. A monitor that consistently fails and retries effectively multiplies its period, significantly impacting throughput.
Scaling out: In addition to adding more cores to a host (scaling up), you can deploy additional synthetics job managers with the same private location key to load balance jobs across multiple environments (scaling out).
It's important to note that a single Synthetics Job Manager (SJM) has a throughput limit of approximately 15 heavyweight jobs per minute. This is due to an internal threading strategy that favors the efficient competition of jobs across multiple SJMs over the raw number of jobs processed per SJM. If your calculations indicate a need for higher throughput, you must scale out by deploying additional SJMs. You can check if your job queue is growing to determine if more SJMs are needed.
Adding more SJMs with the same private location key provides several advantages:
Load balancing: Jobs for the private location are distributed across all available SJMs.
Failover protection: If one SJM instance goes down, others can continue processing jobs.
Higher total throughput: The total throughput for your private location becomes the sum of the throughput from each SJM (e.g., two SJMs provide up to ~30 jobs/minute).
NRQL queries for diagnosis
You can run these queries in the query builder to get the inputs for the diagnostic formula. Make sure to set the time range to a long enough period to get a stable average.
1. Find the rate of jobs processed per minute (Rproc):
This query counts the number of non-ping (heavyweight) jobs completed over the last day and shows the average rate per minute.
FROM SyntheticCheck
SELECT rate(uniqueCount(id), 1 minute) AS 'job rate per minute'
WHERE location = 'YOUR_PRIVATE_LOCATION' AND typeLabel != 'Ping'
SINCE 1 day ago
2. Find the rate of queue growth per minute (Rgrowth):
This query calculates the average per-minute growth of the jobManagerHeavyweightJobs queue on a time series chart. A line above zero indicates the queue is growing, while a line below zero means it's shrinking.
FROM SyntheticsPrivateLocationStatus
SELECT derivative(jobManagerHeavyweightJobs, 1 minute) AS 'queue growth rate per minute'
WHERE name = 'YOUR_PRIVATE_LOCATION'
TIMESERIES SINCE 1 day ago
Tip
Make sure to select the account where the private location exists. It's best to view this query as a time series because the derivative function can vary wildly. The goal is to get an estimate of the rate of queue growth per minute. Play with different time ranges to see what works best.
3. Find total number of heavyweight monitors (Nmon):
This query finds the unique count of heavyweight monitors.
FROM SyntheticCheck
SELECT uniqueCount(monitorId) AS 'monitor count'
WHERE location = 'YOUR_PRIVATE_LOCATION' AND typeLabel != 'Ping'
SINCE 1 day ago
4. Find average job duration in minutes (Davg,m):
This query finds the average execution duration of completed non-ping jobs and converts the result from milliseconds to minutes. executionDuration represents the time the job took to execute on the host.
FROM SyntheticCheck
SELECT average(executionDuration)/60000 AS 'avg job duration (minutes)'
WHERE location = 'YOUR_PRIVATE_LOCATION' AND typeLabel != 'Ping'
SINCE 1 day ago
5. Find average heavyweight monitor period (Pavg,m):
If the private location's jobManagerHeavyweightJobs queue is growing, it isn't accurate to calculate the average monitor period from existing results. This will need to be estimated from the list of monitors on the Synthetic Monitors page. Make sure to select the correct New Relic account and you may need to filter by privateLocation.
Tip
Synthetic monitors may exist in multiple sub accounts. If you have more sub accounts than can be selected in the query builder, choose the accounts with the most monitors.
Note about ping monitors and the pingJobs queue
Ping monitors are different. They are lightweight jobs that do not consume a full CPU core each. Instead, they use a separate queue (pingJobs) and run on a pool of worker threads.
While they are less resource-intensive, a high volume of ping jobs, especially failing ones, can still cause performance issues. Keep these points in mind:
Resource model: Ping jobs utilize worker threads, not dedicated CPU cores. The core-per-job calculation does not apply to them.
Timeout and retry: A failing ping job can occupy a worker thread for up to 60 seconds. It first attempts an HTTP HEAD request (30-second timeout). If that fails, it immediately retries with an HTTP GET request (another 30-second timeout).
Scaling: Although the sizing formula is different, the same principles apply. To handle a large volume of ping jobs and keep the pingJobs queue from growing, you may need to scale up and/or scale out. Scaling up means increasing CPU and memory resources per host or namespace. Scaling out means adding more instances of the ping runtime. This can be done by deploying more job managers on more hosts, in more namespaces, or even within the same namespace. Alternatively, the ping-runtime in Kubernetes allows you to set a larger number of replicas per deployment.
Sizing considerations for Kubernetes and OpenShift
Each runtime used by the Kubernetes and OpenShift synthetic job manager can be sized independently by setting values in the helm chart. The node-api-runtime and node-browser-runtime are sized independently using a combination of the parallelism and completions settings.
The parallelism setting controls how many pods of a particular runtime run concurrently.
The completions setting controls how many pods must complete before the CronJob starts another Kubernetes Job for that runtime.
How to Size Your Deployment: A Step-by-Step Guide
Your goal is to configure enough parallelism to handle your job load without exceeding the throughput limit of your SJM instances.
Step 1: Estimate Your Required Workload
Completions: This determines how many runtime pods should complete before a new Kubernetes Job is started.
First, determine your private location's average job execution duration and job rate. Use executionDuration as it most accurately reflects the pod's active runtime.
-- Get average job execution duration (in minutes)
FROM SyntheticCheck
SELECT average(executionDuration)/60000 AS 'D_avg_m'
WHERE typeLabel != 'Ping' AND location = 'YOUR_PRIVATE_LOCATION'
FACET typeLabel SINCE 1 hour ago
Completions = 5 / D_avg,m
Where D_avg,m is your average job execution duration in minutes.
Required Parallelism: This determines how many workers (pods) you need running concurrently to handle your 5-minute job load.
-- Get jobs per 5 minutes
FROM SyntheticCheck
SELECT rate(uniqueCount(id), 5 minutes) AS 'N_m'
WHERE typeLabel != 'Ping' AND location = 'YOUR_PRIVATE_LOCATION'
FACET typeLabel SINCE 1 hour ago
P_req = N_m / Completions
Where N_m is your number of jobs per 5 minutes. This P_req value is your target total parallelism.
Step 2: Check Against the Single-SJM Throughput Limit
Max Parallelism: This determines how many workers (pods) your SJM can effectively utilize.
P_max ≈ 15 × D_avg,m
This P_max value is your system limit for one SJM Helm deployment.
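As an illustrative example (assumed numbers): if D_avg,m = 2.5 minutes, then Completions = 5 / 2.5 = 2 and P_max ≈ 15 × 2.5 ≈ 37. If your location runs N_m = 100 heavyweight jobs per 5 minutes, P_req = 100 / 2 = 50, which exceeds P_max, so this location would fall under Scenario B below.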
Tip
The above queries are based on current results. If your private location does not have any results or the job manager is not performing at its best, query results may not be accurate. In that case, start with the examples in the table below and adjust until your queue is stable.
Tip
A key consideration is that a single SJM instance has a maximum throughput of approximately 15 heavyweight jobs per minute. You can calculate the maximum effective parallelism (Pmax) a single SJM can support before hitting this ceiling.
Step 3: Compare, Configure, and Scale
Compare your required parallelism (P_req) from Step 1 to the maximum parallelism (P_max) from Step 2.
Scenario A: P_req ≤ P_max
Diagnosis: Your job load is within the limit of a single SJM instance.
Action:
You will deploy one SJM Helm release.
In your Helm chart values.yaml, set parallelism to your calculated P_req.
Set completions to your calculated Completions. For improved efficiency, this value should typically be 6-10x your parallelism setting.
Scenario B: P_req > P_max
Diagnosis: Your job load exceeds the ~15 jobs/minute limit of a single SJM.
Action:
You must scale out by deploying multiple, separate SJM Helm releases.
Do not increase the replicaCount in your Helm chart.
Step 4: Monitor Your Queue
After applying your changes, you must verify that your job queue is stable and not growing. A consistently growing queue means your location is still under-provisioned.
Run this query to check the queue's growth rate:
-- Check for queue growth (a positive value means the queue is growing)
FROM SyntheticsPrivateLocationStatus
SELECT derivative(jobManagerHeavyweightJobs, 1 minute) AS 'Queue growth rate per minute'
WHERE name = 'YOUR_PRIVATE_LOCATION'
TIMESERIES SINCE 1 day ago
If the "Queue Growth Rate" is consistently positive, you need to install more SJM Helm deployments (Scenario B) or re-check your parallelism settings (Scenario A).
Configuration Examples and Tuning
The parallelism setting directly affects how many synthetics jobs per minute can be run. Too small a value and the queue may grow. Too large a value and nodes may become resource constrained.
Example
Description
parallelism=1, completions=1
The runtime will execute 1 synthetics job per minute. After 1 job completes, the CronJob configuration will start a new job at the next minute. Throughput will be extremely limited with this configuration.
parallelism=1, completions=6
The runtime will execute 1 synthetics job at a time. After the job completes, a new job will start immediately. After 6 jobs complete, the CronJob configuration will start a new Kubernetes Job. Throughput will be limited. A single long-running synthetics job will block the processing of any other synthetics jobs of this type.
parallelism=3, completions=24
The runtime will execute 3 synthetics jobs at once. After any of these jobs complete, a new job will start immediately. After 24 jobs complete, the CronJob configuration will start a new Kubernetes Job. Throughput is much better with this or similar configurations.
If your parallelism setting is working well (keeping the queue at zero), setting a higher completions value (e.g., 6-10x parallelism) can improve efficiency by:
Accommodating variability in job durations.
Reducing the number of completion cycles to minimize the "nearing the end of completions" inefficiency where the next batch can't start until the final job from the current batch completes.
It's important to note that the completions value should not be too large or the CronJob will experience warning events like the following:
bash
$
8m40s Warning TooManyMissedTimes cronjob/synthetics-node-browser-runtime too many missed start times: 101. Set or decrease .spec.startingDeadlineSeconds or check clock skew
Tip
New Relic is not liable for any modifications you make to the synthetics job manager files.
Scaling out with multiple SJM deployments
To scale beyond the ~15 jobs/minute throughput of a single SJM, you must install multiple, separate SJM Helm releases.
Important
Do not use replicaCount to scale the job manager pod. You cannot scale by increasing the replicaCount for a single Helm release. The SJM architecture requires a 1:1 relationship between a runtime pod and its parent SJM pod. If runtime pods send results back to the wrong SJM replica (e.g., through a Kubernetes service), those results will be lost.
The correct strategy is to deploy multiple SJM instances, each as its own Helm release. Each SJM will compete for jobs from the same private location, providing load balancing, failover protection, and an increased total job throughput.
Simplified Scaling Strategy
Assuming P_req > P_max and you need to scale out, you can simplify maintenance by treating each SJM deployment as a fixed-capacity unit.
Set Max Parallelism: For each SJM, set parallelism to the same P_max value. This maximizes the potential throughput of each SJM.
Set Completions: For each SJM, set completions to a fixed value as well. The P_req formula from Step 1 can be modified to estimate completions by substituting in the P_max value:
Completions = N_m / P_max
Where N_m is your number of jobs per 5 minutes. Adjust as needed after deploying to target a 5-minute Kubernetes job age per runtime, i.e., node-browser-runtime and node-api-runtime.
Install Releases: Install as many separate Helm releases as you need to handle your total P_req. For example, if your total P_req is 60 and you've fixed each SJM's parallelism at 20 (the P_max from Step 2), you would need three separate Helm deployments to meet the required job demand.
Monitor and Add: Monitor your job queue (see Step 4). If it starts to grow, simply install another Helm release (e.g., sjm-delta) using the same fixed configuration.
By fixing parallelism and completions to static values based on Pmax, increasing or decreasing capacity becomes a simpler process of adding or removing Helm releases. This helps to avoid wasting cluster resources on a parallelism value that is higher than the SJM can effectively utilize.
Installation Example
When installing multiple SJM releases, you must provide a unique name for each release. All instances must be configured with the same private location key.
Setting the fullnameOverride is highly recommended to create shorter, more manageable resource names. For example, to install two SJMs named sjm-alpha and sjm-beta into the newrelic namespace (both using the same values.yaml with your fixed parallelism and completions):
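A sketch of the two installs (the chart reference is an assumption; both releases share the same private location key and values.yaml):
bash
$
helm upgrade --install sjm-alpha newrelic/synthetics-job-manager -n newrelic --set fullnameOverride=sjm-alpha --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY -f values.yaml
$
helm upgrade --install sjm-beta newrelic/synthetics-job-manager -n newrelic --set fullnameOverride=sjm-beta --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY -f values.yaml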