How to configure audit collection for imported standard Kubernetes clusters?

Scenario Description

After a standard Kubernetes cluster is imported into the platform, you must enable Kubernetes API server audit logging on the cluster before the platform can collect audit data from that cluster.

This document applies to standard Kubernetes clusters whose control plane nodes are managed by you, such as kubeadm-based clusters. It does not apply to managed cloud Kubernetes clusters where you cannot log in to or modify control plane nodes.

Prerequisites

  • The standard Kubernetes cluster has already been imported into the platform.
  • You can log in to every control plane node in the cluster.
  • The cluster uses the standard kubeadm-style API server static Pod manifest path: /etc/kubernetes/manifests/kube-apiserver.yaml.

Procedure

  1. Create a local policy.yaml file for the audit policy.

    Set apiVersion according to the Kubernetes version:

    • Kubernetes earlier than 1.24: audit.k8s.io/v1beta1
    • Kubernetes 1.24 and later: audit.k8s.io/v1
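    If you script this for multiple clusters, the version rule above can be captured in a small helper (a sketch; the function name is illustrative, and how you detect the cluster's minor version, e.g. via kubectl version, is up to you):

```shell
# Map a Kubernetes minor version to the audit policy apiVersion.
# The 1.24 cutoff follows the rule above.
audit_api_version() {
  minor="$1"            # e.g. 23 for Kubernetes 1.23
  if [ "$minor" -ge 24 ]; then
    echo "audit.k8s.io/v1"
  else
    echo "audit.k8s.io/v1beta1"
  fi
}

audit_api_version 23    # prints audit.k8s.io/v1beta1
audit_api_version 27    # prints audit.k8s.io/v1
```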

    Use the following content:

    apiVersion: audit.k8s.io/v1
    kind: Policy
    omitStages:
      - "RequestReceived"
    rules:
      - level: None
        users:
          - system:kube-controller-manager
          - system:kube-scheduler
          - system:serviceaccount:kube-system:endpoint-controller
        verbs: ["get", "update"]
        namespaces: ["kube-system"]
        resources:
          - group: ""
            resources: ["endpoints"]
      - level: None
        nonResourceURLs:
          - /healthz*
          - /version
          - /swagger*
      - level: None
        resources:
          - group: ""
            resources: ["events"]
      - level: None
        resources:
          - group: "devops.alauda.io"
      - level: None
        verbs: ["get", "list", "watch"]
      - level: None
        namespaces:
          - kube-system
          - cpaas-system
          - alauda-system
          - istio-system
          - kube-node-lease
        resources:
          - group: "coordination.k8s.io"
            resources: ["leases"]
      - level: None
        resources:
          - group: "authorization.k8s.io"
            resources: ["subjectaccessreviews", "selfsubjectaccessreviews"]
          - group: "authentication.k8s.io"
            resources: ["tokenreviews"]
      - level: Metadata
        resources:
          - group: ""
            resources: ["secrets", "configmaps"]
      - level: RequestResponse
        resources:
          - group: ""
          - group: "aiops.alauda.io"
          - group: "apps"
          - group: "app.k8s.io"
          - group: "authentication.istio.io"
          - group: "auth.alauda.io"
          - group: "autoscaling"
          - group: "asm.alauda.io"
          - group: "clusterregistry.k8s.io"
          - group: "crd.alauda.io"
          - group: "infrastructure.alauda.io"
          - group: "monitoring.coreos.com"
          - group: "networking.istio.io"
          - group: "networking.k8s.io"
          - group: "portal.alauda.io"
          - group: "rbac.authorization.k8s.io"
          - group: "storage.k8s.io"
          - group: "tke.cloud.tencent.com"
          - group: "devopsx.alauda.io"
          - group: "core.katanomi.dev"
          - group: "deliveries.katanomi.dev"
          - group: "integrations.katanomi.dev"
          - group: "builds.katanomi.dev"
          - group: "operators.katanomi.dev"
          - group: "tekton.dev"
          - group: "operator.tekton.dev"
          - group: "eventing.knative.dev"
          - group: "flows.knative.dev"
          - group: "messaging.knative.dev"
          - group: "operator.knative.dev"
          - group: "sources.knative.dev"
          - group: "operator.devops.alauda.io"
      - level: Metadata
    TIP

    If the cluster version is earlier than 1.24, change only the apiVersion field to audit.k8s.io/v1beta1. The rest of the policy content stays the same.

  2. Upload policy.yaml to /etc/kubernetes/audit/ on every control plane node.

    WARNING
    • If the cluster has multiple control plane nodes, upload the file to every node.
    • Create the directory manually if it does not exist: /etc/kubernetes/audit/
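    For clusters with several control plane nodes, the upload can be looped over SSH (a sketch; the node names and the root login are placeholders for your environment, and it defaults to a dry run that only prints the plan):

```shell
# Sketch: push policy.yaml to every control plane node over SSH.
# NODES and the root login are assumptions -- adjust for your environment.
NODES="cp-node-1 cp-node-2 cp-node-3"
DRY_RUN="${DRY_RUN:-1}"   # set DRY_RUN=0 to actually copy

for node in $NODES; do
  echo "copy policy.yaml -> ${node}:/etc/kubernetes/audit/policy.yaml"
  if [ "$DRY_RUN" -eq 0 ]; then
    ssh "root@${node}" 'mkdir -p /etc/kubernetes/audit'
    scp policy.yaml "root@${node}:/etc/kubernetes/audit/policy.yaml"
  fi
done
```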
  3. Update /etc/kubernetes/manifests/kube-apiserver.yaml on every control plane node.

    Add or update the following audit-related flags in spec.containers[].command:

    Flag                    Required  Description
    --audit-policy-file     Yes       Must be set to /etc/kubernetes/audit/policy.yaml.
    --audit-log-format      Yes       Must be set to json.
    --audit-log-path        Yes       Must be set to /etc/kubernetes/audit/audit.log.
    --audit-log-mode        No        Recommended value: batch.
    --audit-log-maxsize     No        Maximum audit log file size in megabytes. Recommended value: 200.
    --audit-log-maxbackup   No        Number of retained audit log files. Recommended value: 2.

    Example:

    - --audit-log-format=json
    - --audit-log-maxbackup=2
    - --audit-log-maxsize=200
    - --audit-log-mode=batch
    - --audit-log-path=/etc/kubernetes/audit/audit.log
    - --audit-policy-file=/etc/kubernetes/audit/policy.yaml
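    After editing, a quick grep can confirm that the required flags are present (a sketch; the path assumes the kubeadm manifest location from the prerequisites, and MANIFEST can be overridden for testing):

```shell
# Quick check that every required audit flag is present in the manifest.
# Run on each control plane node.
MANIFEST="${MANIFEST:-/etc/kubernetes/manifests/kube-apiserver.yaml}"
for flag in --audit-policy-file --audit-log-format --audit-log-path; do
  if grep -q -- "$flag" "$MANIFEST" 2>/dev/null; then
    echo "$flag: ok"
  else
    echo "$flag: MISSING"
  fi
done
```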
  4. Add the audit directory mount configuration to the same kube-apiserver.yaml file.

    Add the following item under spec.containers[].volumeMounts:

    - mountPath: /etc/kubernetes/audit
      name: k8s-audit

    Add the following item under spec.volumes:

    - hostPath:
        path: /etc/kubernetes/audit
        type: DirectoryOrCreate
      name: k8s-audit
    WARNING
    • Update the manifest on every control plane node when the cluster has multiple control plane nodes.
    • The volumeMounts[].name value must match the corresponding volumes[].name value.
    • Do not change the mount path /etc/kubernetes/audit.
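    Taken together, the audit-related additions from steps 3 and 4 sit in kube-apiserver.yaml roughly like this (an abridged sketch; your manifest contains many more flags, mounts, and volumes that must be left in place):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        # ...existing flags stay as they are...
        - --audit-log-format=json
        - --audit-log-maxbackup=2
        - --audit-log-maxsize=200
        - --audit-log-mode=batch
        - --audit-log-path=/etc/kubernetes/audit/audit.log
        - --audit-policy-file=/etc/kubernetes/audit/policy.yaml
      volumeMounts:
        # ...existing mounts stay as they are...
        - mountPath: /etc/kubernetes/audit
          name: k8s-audit
  volumes:
    # ...existing volumes stay as they are...
    - hostPath:
        path: /etc/kubernetes/audit
        type: DirectoryOrCreate
      name: k8s-audit
```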
  5. Save the file and verify that the configuration takes effect.

    After you save the manifest, the kubelet detects the change and restarts the kube-apiserver static Pod automatically; this can take a minute or two on each node.

    Check whether /etc/kubernetes/audit/audit.log is generated on each control plane node. If the file exists and contains audit records, the configuration is effective.

    ls -l /etc/kubernetes/audit/audit.log
    tail -n 20 /etc/kubernetes/audit/audit.log
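
    Each line of audit.log is a standalone JSON audit event. A minimal format sanity check looks like this (the sample event below is illustrative and abridged; on a control plane node you would read the log with tail instead):

```shell
# Sample of what kube-apiserver writes with --audit-log-format=json (abridged).
sample='{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","stage":"ResponseComplete","verb":"get"}'

# A real check would read the last line of the log instead of $sample:
#   tail -n 1 /etc/kubernetes/audit/audit.log
if printf '%s\n' "$sample" | grep -q '"kind":"Event"'; then
  echo "looks like a JSON audit event"
fi
```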