As a cluster administrator, it may benefit you to have monitoring capabilities over both the state and execution of cluster-applied Kyverno policies. This includes monitoring of applied changes to policies, activity associated with incoming requests, and the results produced as an outcome. If enabled, monitoring will allow you to visualize and alert on applied policies, and is critical to overall cluster observability and compliance.
In addition, you can specify the scope of your monitoring targets to either the rule, policy, or cluster level, which enables you to extract more granular insights from collected metrics.
When you install Kyverno via Helm, additional Services are created inside the kyverno Namespace which expose metrics on port 8000. The relevant section of the chart's values.yaml is shown below:
...
metricsService:
  create: true
  type: ClusterIP
  ## Kyverno's metrics server will be exposed at this port
  port: 8000
  ## The Node's port which will allow access to Kyverno's metrics at the host level. Only used if service.type is NodePort.
  nodePort:
  ## Provide any additional annotations which may be required. This can be used to
  ## set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  annotations: {}
...
By default, the Service type is ClusterIP, meaning that metrics can only be scraped by a Prometheus server running inside the cluster.
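For instance, if you run your own Prometheus server inside the cluster, a minimal scrape configuration targeting the metrics Service might look like the sketch below. The job name is illustrative, and the Namespace in the target address should be adjusted if Kyverno is installed somewhere other than the kyverno Namespace.

scrape_configs:
  # Scrape the in-cluster Kyverno metrics Service on port 8000.
  - job_name: kyverno-metrics          # illustrative job name
    static_configs:
      - targets:
          - kyverno-svc-metrics.kyverno.svc.cluster.local:8000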
In some cases, the Prometheus server may sit outside your workload cluster as a shared service. In these scenarios, you will want to expose the kyverno-svc-metrics Service (and the metrics available at port 8000) to your external Prometheus server. Services can be exposed to external clients via an Ingress, or by using the LoadBalancer or NodePort Service types.
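As a sketch of the Ingress option, the manifest below routes an external host name to the metrics Service. The host name, Ingress class, and resource name are placeholder assumptions, and a publicly reachable metrics endpoint should normally also be protected with TLS and authentication.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kyverno-metrics                      # illustrative name
  namespace: kyverno
spec:
  ingressClassName: nginx                    # assumes an NGINX ingress controller is installed
  rules:
    - host: kyverno-metrics.example.com      # placeholder host name
      http:
        paths:
          - path: /metrics
            pathType: Prefix
            backend:
              service:
                name: kyverno-svc-metrics
                port:
                  number: 8000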
To expose your kyverno-svc-metrics Service publicly as a NodePort at the host's/node's port number 8000, you can configure your values.yaml before Helm installation as follows:
...
metricsService:
  create: true
  type: NodePort
  ## Kyverno's metrics server will be exposed at this port
  port: 8000
  ## The Node's port which will allow access to Kyverno's metrics at the host level. Only used if service.type is NodePort.
  nodePort: 8000
  ## Provide any additional annotations which may be required. This can be used to
  ## set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  annotations: {}
...
To expose the kyverno-svc-metrics Service using a LoadBalancer type, you can configure your values.yaml before Helm installation as follows:
...
metricsService:
  create: true
  type: LoadBalancer
  ## Kyverno's metrics server will be exposed at this port
  port: 8000
  ## The Node's port which will allow access to Kyverno's metrics at the host level. Only used if service.type is NodePort.
  nodePort:
  ## Provide any additional annotations which may be required. This can be used to
  ## set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  annotations: {}
...
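Once the Service is reachable from outside the cluster, an external Prometheus server can scrape it with a plain static target. The sketch below uses a placeholder address; substitute a node address (for NodePort) or the external address assigned to the LoadBalancer Service.

scrape_configs:
  # Scrape Kyverno metrics from outside the cluster.
  - job_name: kyverno-metrics-external   # illustrative job name
    static_configs:
      - targets:
          - 203.0.113.10:8000            # placeholder: node or LoadBalancer address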
While installing Kyverno via Helm, you also have the ability to configure which metrics to expose. You can configure which Namespaces you want to include and/or exclude for metric exportation when configuring your Helm chart. This is useful in situations where you might want to exclude the exposure of Kyverno metrics for certain Namespaces, such as test or experimental Namespaces. Likewise, you can include certain Namespaces if you want to monitor Kyverno-related activity for only a set of critical Namespaces. Exporting the right set of Namespaces (as opposed to exposing all Namespaces) can substantially reduce the memory footprint of Kyverno's metrics exporter.
...
config:
  metricsConfig:
    namespaces: {
      "include": [],
      "exclude": []
    }
    # 'namespaces.include': list of namespaces to capture metrics for. Default: all namespaces included.
    # 'namespaces.exclude': list of namespaces to NOT capture metrics for. Default: [], none of the namespaces excluded.
...
exclude takes precedence over include when a Namespace is listed under both.
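For example, to capture metrics only for a handful of critical Namespaces while ensuring that a staging Namespace is never reported, the same values could be filled in as follows. The Namespace names here are purely illustrative.

...
config:
  metricsConfig:
    namespaces: {
      "include": ["kube-system", "payments", "kyverno"],
      "exclude": ["staging"]
    }
    # Metrics are exported only for the Namespaces under "include".
    # "staging" stays excluded even if it were also listed under "include",
    # because exclude takes precedence.
...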
Some metrics may generate an excessive amount of data, which may be undesirable in situations where this incurs additional cost. Some monitoring products and solutions have the ability to selectively disable which metrics are sent to collectors while leaving others enabled. Disabling select metrics with DataDog OpenMetrics can be done by annotating the Kyverno Pod(s) as shown below.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    ad.datadoghq.com/kyverno.checks: |
      {
        "openmetrics": {
          "init_config": {},
          "instances": [
            {
              "openmetrics_endpoint": "http://%%host%%:8000/metrics",
              "namespace": "kyverno",
              "metrics": [
                {"kyverno_policy_rule_info_total": "policy_rule_info"},
                {"kyverno_admission_requests": "admission_requests"},
                {"kyverno_policy_changes": "policy_changes"}
              ],
              "exclude_labels": [
                "resource_namespace"
              ]
            },
            {
              "openmetrics_endpoint": "http://%%host%%:8000/metrics",
              "namespace": "kyverno",
              "metrics": [
                {"kyverno_policy_results": "policy_results"}
              ]
            }
          ]
        }
      }
The Kyverno Helm chart supports including additional Pod annotations in the values file as shown in the below example.
podAnnotations:
  # https://github.com/DataDog/integrations-core/blob/master/openmetrics/datadog_checks/openmetrics/data/conf.yaml.example
  # Note: To collect counter metrics with names ending in `_total`, specify the metric name without the `_total`
  ad.datadoghq.com/kyverno.checks: |
    {
      "openmetrics": {
        "init_config": {},
        "instances": [
          {
            "openmetrics_endpoint": "http://%%host%%:8000/metrics",
            "namespace": "kyverno",
            "metrics": [
              {"kyverno_policy_rule_info_total": "policy_rule_info"},
              {"kyverno_admission_requests": "admission_requests"},
              {"kyverno_policy_changes": "policy_changes"}
            ],
            "exclude_labels": [
              "resource_namespace"
            ]
          },
          {
            "openmetrics_endpoint": "http://%%host%%:8000/metrics",
            "namespace": "kyverno",
            "metrics": [
              {"kyverno_policy_results": "policy_results"}
            ]
          }
        ]
      }
    }
Once the configuration above is in place, Kyverno exposes several groups of metrics:

- Policy and rule counts (kyverno_policy_rule_info_total in the examples above): tracks the number of policies and rules present in the cluster, covering both those that are currently active and those that are no longer active but were created in the past.
- Policy and rule execution results (kyverno_policy_results): tracks the results of rules executing as part of incoming resource requests and background scans. These results can be further aggregated to track policy-level results as well.
- Rule execution latency: tracks the latencies associated with executing individual rules when they evaluate incoming resource requests or run background scans, and can be aggregated to present latencies at the policy level.
- Admission review latency: tracks the end-to-end latency of an entire admission review, corresponding to an incoming resource request that triggers a set of policies and rules.
- Admission request counts (kyverno_admission_requests): tracks the number of admission requests processed by Kyverno.
- Policy change counts (kyverno_policy_changes): tracks the history of all Kyverno policy-related changes, such as policy creations, updates, and deletions.
- Client queries: tracks the number of queries per second (QPS) from Kyverno.
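As a starting point for dashboards or alerts, these metrics can be queried directly from Prometheus. The rules below are a hedged sketch: they assume the counters carry the _total suffix mentioned in the DataDog note above, and the recording rule name, alert threshold, and windows are illustrative values to tune for your cluster.

groups:
  - name: kyverno.rules                      # illustrative rule group name
    rules:
      # Record the number of policy changes observed over the last hour.
      - record: kyverno:policy_changes:increase1h
        expr: sum(increase(kyverno_policy_changes_total[1h]))
      # Alert when Kyverno stops receiving admission requests entirely,
      # which may indicate webhook trouble.
      - alert: KyvernoNoAdmissionRequests
        expr: sum(rate(kyverno_admission_requests_total[10m])) == 0
        for: 15m
        labels:
          severity: warning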
Kyverno also provides a ready-to-use Grafana dashboard for these metrics, as well as OpenTelemetry integration.