Generate Resources

Create additional resources based on resource creation or updates.

A generate rule can be used to create additional resources when a new resource is created or when the source is updated. This is useful to create supporting resources, such as new RoleBindings or NetworkPolicies for a Namespace.

The generate rule supports match and exclude blocks, like other rules. Hence, the trigger for applying this rule can be the creation of any resource. It is also possible to match or exclude API requests based on subjects, roles, etc.

To keep resources synchronized across changes, you can use the synchronize property. When synchronize is set to true, the generated resource is kept in-sync with the source resource (which can be defined as part of the policy or may be an existing resource), and generated resources cannot be modified by users. If synchronize is set to false, users can update or delete the generated resource directly.
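For example, an attempt to edit a generated resource directly while synchronization is enabled will not stick. A minimal sketch, using a hypothetical generated ConfigMap app-settings in a Namespace team-a:

```sh
# Try to change a field on a generated resource (names are illustrative).
kubectl -n team-a patch configmap app-settings \
  --type merge -p '{"data":{"key":"changed"}}'

# With synchronize: true, Kyverno either blocks the request or reverts the
# change, so the generated resource continues to match its source definition.
kubectl -n team-a get configmap app-settings -o yaml
```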

When using a generate rule, the origin resource can be either an existing resource in the cluster, or a new resource defined in the rule itself. When the origin resource is a pre-existing resource such as a ConfigMap or Secret, for example, the clone object is used. See the Clone Source section for more details. When the origin resource is a new resource defined within the manifest of the rule, the data object is used. See the Data Source section for more details. These are mutually exclusive and only one may be specified per rule.

Kubernetes has many default resource types even before considering Custom Resources defined in CustomResourceDefinitions (CRDs). While Kyverno can generate these Custom Resources as well, both these as well as certain default Kubernetes resources may require granting additional privileges to Kyverno. To enable Kyverno to generate these other types, see the section on customizing permissions.

Kyverno will create an intermediate object called an UpdateRequest, which is used to queue work items for the final resource generation. To get the details and status of a generated resource, check the details of the UpdateRequest. The following command lists UpdateRequests:

```sh
kubectl get updaterequests -A
```

An UpdateRequest status can have one of four values:

  • Completed: the UpdateRequest controller created resources defined in the policy
  • Failed: the UpdateRequest controller failed to process the rules
  • Pending: the request is yet to be processed or the resource has not been created
  • Skip: the generate policy was triggered by adding a label/annotation to an existing resource, but the corresponding selector is not defined in the policy itself
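To see these states at a glance, the state field can be projected into a column. A sketch assuming the status is exposed at .status.state, as in the troubleshooting output later on this page:

```sh
# List UpdateRequests with their processing state (field path assumed from
# the troubleshooting example below; it may vary across Kyverno versions).
kubectl get updaterequests -A \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,STATE:.status.state
```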

Data Source

The resource definition of a generated resource may be defined in the Kyverno policy/rule directly. To do this, define the generate.data object to store the contents of the resource to be created. Variable templating is supported for all fields in the data object. With synchronization enabled, later modification of the contents of that data object will cause Kyverno to update all downstream (generated) resources with the changes. Set the spec.generateExistingOnPolicyUpdate field to true, as shown in the examples below; this field is also required when invoking generation for existing resources.

The following table shows the behavior of deletion and modification events on components of a generate rule with a data source declaration. “Downstream” refers to the generated resource(s). “Trigger” refers to the resource responsible for triggering the generate rule as defined in a combination of match and exclude blocks. Note that when using a data source with sync enabled, deletion of the rule/policy responsible for a resource’s generation will cause immediate deletion of any/all downstream resources.

| Action | Sync Effect | NoSync Effect |
|---|---|---|
| Delete Downstream | Downstream recreated | Downstream deleted |
| Delete Rule/Policy | Downstream deleted | Downstream retained |
| Delete Trigger | None | None |
| Modify Downstream | Downstream reverted | Downstream modified |
| Modify Rule/Policy | Downstream synced | Downstream unmodified |
| Modify Trigger | None | None |

Data Examples

This policy sets the Zookeeper and Kafka connection strings for all Namespaces based upon a ConfigMap defined within the rule itself.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: zk-kafka-address
spec:
  generateExistingOnPolicyUpdate: true
  rules:
  - name: k-kafka-address
    match:
      any:
      - resources:
          kinds:
          - Namespace
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    generate:
      synchronize: true
      apiVersion: v1
      kind: ConfigMap
      name: zk-kafka-address
      # generate the resource in the new namespace
      namespace: "{{request.object.metadata.name}}"
      data:
        kind: ConfigMap
        metadata:
          labels:
            somekey: somevalue
        data:
          ZK_ADDRESS: "192.168.10.10:2181,192.168.10.11:2181,192.168.10.12:2181"
          KAFKA_ADDRESS: "192.168.10.13:9092,192.168.10.14:9092,192.168.10.15:9092"
```
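With this policy in place, creating a Namespace should result in the ConfigMap being generated into it. A quick check (the Namespace name is illustrative):

```sh
# Create a Namespace and confirm the ConfigMap was generated into it.
kubectl create namespace team-a
kubectl -n team-a get configmap zk-kafka-address -o yaml
```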

In this example, new Namespaces will receive a NetworkPolicy that denies all inbound and outbound traffic. Similar to the first example, the generate.data object is used to define, as an overlay pattern, the spec for the NetworkPolicy resource.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default
spec:
  generateExistingOnPolicyUpdate: true
  rules:
  - name: deny-all-traffic
    match:
      any:
      - resources:
          kinds:
          - Namespace
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    generate:
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      name: deny-all-traffic
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          # select all pods in the namespace
          podSelector: {}
          policyTypes:
          - Ingress
          - Egress
```

For other examples of generate rules, see the policy library.

Clone Source

When a generate policy should take the source from a resource which already exists in the cluster, a clone object is used instead of a data object. When triggered, the generate policy will clone from the resource name and location defined in the rule to create the new resource. Use of the clone object implies no modification during the path from source to destination and Kyverno is not able to modify its contents (aside from metadata used for processing and tracking).

The following table shows the behavior of deletion and modification events on components of a generate rule with a clone source declaration. “Downstream” refers to the generated resource(s). “Trigger” refers to the resource responsible for triggering the generate rule as defined in a combination of match and exclude blocks. “Source” refers to the clone source. Note that when using a clone source with sync enabled, deletion of the rule/policy responsible for a resource’s generation or deletion of the clone source will NOT cause deletion of any downstream resources. This behavior differs when compared to data declarations.

| Action | Sync Effect | NoSync Effect |
|---|---|---|
| Delete Downstream | Downstream recreated | Downstream deleted |
| Delete Rule/Policy | Downstream retained | Downstream retained |
| Delete Source | Downstream retained | Downstream retained |
| Delete Trigger | None | None |
| Modify Downstream | Downstream reverted | Downstream modified |
| Modify Rule/Policy | Downstream unmodified | Downstream unmodified |
| Modify Source | Downstream synced | Downstream unmodified |
| Modify Trigger | None | None |

Clone Examples

In this policy, designed to clone and keep downstream Secrets in-sync with the source, the source of the data is an existing Secret resource named regcred which is stored in the default Namespace. Notice how the generate rule here instead uses the generate.clone object when the origin data exists within Kubernetes. With synchronization enabled, any modifications to the regcred source Secret in the default Namespace will cause all downstream generated resources to be updated.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secrets
spec:
  rules:
  - name: sync-image-pull-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: v1
      kind: Secret
      name: regcred
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      clone:
        namespace: default
        name: regcred
```
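With this policy in place, each new Namespace should receive a copy of the source Secret. A quick verification (the Namespace name is illustrative):

```sh
# The regcred Secret from the default Namespace is cloned into new Namespaces.
kubectl create namespace team-b
kubectl -n team-b get secret regcred
```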

For other examples of generate rules, see the policy library.

Cloning Multiple Resources

Kyverno has the ability to clone multiple resources in a single rule definition for use cases where several resources must be cloned from a source Namespace to a destination Namespace. By using the generate.cloneList object, multiple kinds from the same Namespace may be specified. Use of an optional selector can scope down the source of the clones to only those having the matching label(s). The below policy clones Secrets and ConfigMaps from the staging Namespace which carry the label allowedToBeCloned="true".

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secret-with-multi-clone
spec:
  rules:
  - name: sync-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    generate:
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      cloneList:
        namespace: staging
        kinds:
          - v1/Secret
          - v1/ConfigMap
        selector:
          matchLabels:
            allowedToBeCloned: "true"
```
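Only source resources carrying the matching label are cloned. For example, an existing Secret in the staging Namespace could be made eligible like this (the Secret name is illustrative):

```sh
# Label a source Secret so it matches the cloneList selector above.
kubectl -n staging label secret my-registry-creds allowedToBeCloned=true
```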

Generating Bindings

In order for Kyverno to generate a new RoleBinding or ClusterRoleBinding resource, its ServiceAccount must first be bound to the same Role or ClusterRole which you’re attempting to generate. If this is not done, Kubernetes blocks the request because it sees a possible privilege escalation attempt from the Kyverno ServiceAccount. This is not a Kyverno function but rather how Kubernetes RBAC is designed to work.

For example, if you wish to write a generate rule which creates a new RoleBinding resource granting some user the admin role over a new Namespace, the Kyverno ServiceAccount must have a ClusterRoleBinding in place for that same admin role.

Create a new ClusterRoleBinding for the Kyverno ServiceAccount, which is named kyverno by default.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kyverno:generate-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: kyverno
  namespace: kyverno
```

Now, create a generate rule as you normally would which assigns a test user named steven to the admin ClusterRole for a new Namespace. The built-in ClusterRole named admin in this rule must match the ClusterRole granted to the Kyverno ServiceAccount in the previous ClusterRoleBinding.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: steven-rolebinding
spec:
  rules:
  - name: steven-rolebinding
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      name: steven-rolebinding
      namespace: "{{request.object.metadata.name}}"
      data:
        subjects:
        - kind: User
          name: steven
          apiGroup: rbac.authorization.k8s.io
        roleRef:
          kind: ClusterRole
          name: admin
          apiGroup: rbac.authorization.k8s.io
```

When a new Namespace is created, Kyverno will generate a new RoleBinding called steven-rolebinding which grants the user steven the admin ClusterRole over said new Namespace.
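A quick way to confirm the generation (the Namespace name is illustrative):

```sh
# Create a Namespace and verify the generated RoleBinding for user steven.
kubectl create namespace demo
kubectl -n demo get rolebinding steven-rolebinding -o yaml
```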

Linking resources with ownerReferences

In some cases, a triggering (source) resource and generated (downstream) resource need to share the same lifecycle: when the triggering resource is deleted, so too should the generated resource. This is valuable because some resources are only needed in the presence of another, for example a Service of type LoadBalancer necessitating a specific network policy in some CNI plug-ins. While Kyverno will not take care of this task internally, Kubernetes can by setting the ownerReferences field in the generated resource. In the below example, the generated ConfigMap sets the metadata.ownerReferences[] object with the required fields, including the uid of the triggering Service resource, forming an owner-dependent relationship. Later, if the Service is deleted, the ConfigMap will be as well. See the Kubernetes documentation for more details, including an important caveat around the scoping of these references: Namespaced resources cannot be the owners of cluster-scoped resources, and cross-namespace references are also disallowed.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: demo-ownerref
spec:
  background: false
  rules:
  - name: demo-ownerref-svc-cm
    match:
      any:
      - resources:
          kinds:
          - Service
    generate:
      kind: ConfigMap
      apiVersion: v1
      name: "{{request.object.metadata.name}}-gen-cm"
      namespace: "{{request.namespace}}"
      synchronize: false
      data:
        metadata:
          ownerReferences:
          - apiVersion: v1
            kind: Service
            name: "{{request.object.metadata.name}}"
            uid: "{{request.object.metadata.uid}}"
        data:
          foo: bar
```
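With the owner reference in place, deleting the Service lets the Kubernetes garbage collector remove the generated ConfigMap. A sketch assuming a Service named my-svc in a Namespace demo (both illustrative):

```sh
# Deleting the owner triggers garbage collection of the dependent ConfigMap.
kubectl -n demo delete service my-svc

# Once the garbage collector runs, the generated ConfigMap is gone.
kubectl -n demo get configmap my-svc-gen-cm   # expected: NotFound
```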

Generate for Existing resources

Use of a generate rule is common when creating net new resources after the policy has been created. For example, a Kyverno generate policy is created so that all future Namespaces can receive a standard set of Kubernetes resources. However, it is also possible to generate resources based on existing resources. This can be extremely useful, especially for Namespaces, when deploying Kyverno to an existing cluster where you wish policy to apply retroactively.

Kyverno supports generation for existing resources. Generate existing policies are applied in the background, creating target resources based on the match statement within the policy. They may also optionally be configured to apply upon updates to the policy itself. By setting spec.generateExistingOnPolicyUpdate to true, a generate rule will take effect for existing resources which have the same match characteristics.

Generate Existing Examples

By default, a policy will not be applied to existing trigger resources when it is installed. This behavior can be configured via the generateExistingOnPolicyUpdate attribute. Only if generateExistingOnPolicyUpdate is set to true will Kyverno generate the target resource in existing triggers on policy CREATE and UPDATE events.

In this example policy, which triggers based on the resource kind Namespace, a new NetworkPolicy will be generated in all new or existing Namespaces.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-resources
spec:
  generateExistingOnPolicyUpdate: true
  rules:
  - name: generate-existing-networkpolicy
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      name: default-deny
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      data:
        metadata:
          labels:
            created-by: kyverno
        spec:
          podSelector: {}
          policyTypes:
          - Ingress
          - Egress
```
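Because generateExistingOnPolicyUpdate is true, Namespaces that existed before the policy was installed also receive the NetworkPolicy, which can be spot-checked across the cluster:

```sh
# Existing Namespaces (not just newly created ones) should now contain it.
kubectl get networkpolicy -A | grep default-deny
```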

Similarly, this ClusterPolicy will create a PodDisruptionBudget resource for existing or new Deployments.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: create-default-pdb
spec:
  generateExistingOnPolicyUpdate: true
  rules:
  - name: create-default-pdb
    match:
      any:
      - resources:
          kinds:
          - Deployment
    exclude:
      resources:
        namespaces:
        - local-path-storage
    generate:
      apiVersion: policy/v1
      kind: PodDisruptionBudget
      name: "{{request.object.metadata.name}}-default-pdb"
      namespace: "{{request.object.metadata.namespace}}"
      synchronize: true
      data:
        spec:
          minAvailable: 1
          selector:
            matchLabels:
              "{{request.object.metadata.labels}}"
```
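Assuming Kyverno has been granted permission to create PodDisruptionBudgets (see Troubleshooting below if not), an existing Deployment should gain a PDB. For a hypothetical Deployment nginx-deployment in a Namespace test:

```sh
# Verify the generated PodDisruptionBudget for an existing Deployment.
kubectl -n test get pdb nginx-deployment-default-pdb
```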

Troubleshooting

To troubleshoot policy application failures, inspect the UpdateRequest Custom Resource to get details.

For example, if the corresponding permission is not granted to Kyverno, you may see this error in the STATUS column:

```sh
$ kubectl get ur -n kyverno
NAME       POLICY               RULETYPE   RESOURCEKIND   RESOURCENAME           RESOURCENAMESPACE   STATUS   AGE
ur-7gtbx   create-default-pdb   generate   Deployment     nginx-deployment       test                Failed   2s

$ kubectl describe ur ur-7gtbx -n kyverno
Name:         ur-7gtbx
Namespace:    kyverno
...

status:
  message: 'poddisruptionbudgets.policy is forbidden: User "system:serviceaccount:kyverno:kyverno-service-account"
            cannot create resource "poddisruptionbudgets" in API group "policy" in the namespace "test"'
  state: Failed
```