
How to collect Kubernetes API audit logs with rancher-logging in RKE, RKE2, or K3s clusters

Article Number: 000021022

Environment

  • Rancher v2.9+
  • A Rancher-managed RKE, RKE2, or K3s cluster with rancher-logging installed

Situation

By default, the rancher-logging stack collects the container logs of Pods running in the cluster. Kubernetes API audit logs, however, are not collected out of the box and require additional configuration before they can be ingested.

This article details how to enable the collection of these audit logs using the rancher-logging chart's additionalLoggingSources and how to route them using a dedicated logging configuration.

Resolution

Step 1: Enable audit logging at the cluster level

Before rancher-logging can collect logs, the cluster must be configured to generate them. Follow the Rancher documentation Enabling the API Audit Log in Downstream Clusters.
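For reference, on an RKE cluster the API audit log is enabled through the audit_log section of the kube-api service in cluster.yml (a minimal sketch; by default the policy writes JSON entries to /var/log/kube-audit/audit-log.json, which matches the RKE values in the next step). RKE2 and K3s clusters enable audit logging through kube-apiserver arguments instead, as covered in the linked documentation.

services:
  kube-api:
    audit_log:
      # Enables API audit logging with the default policy and log location
      enabled: true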

Step 2: Configure the rancher-logging chart

When installing or upgrading the rancher-logging chart, you must configure kubeAudit in additionalLoggingSources according to the cluster type; an example of applying these values follows the snippets below:

RKE Clusters

additionalLoggingSources:
  [...]
  kubeAudit:
    auditFilename: 'audit-log.json'
    enabled: true
    loggingRef: "kubeauditlogging"
    fluentbit:
      loggingRef: "kubeauditlogging"
      logTag: kube-audit
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/controlplane
          value: 'true'
        - effect: NoExecute
          key: node-role.kubernetes.io/etcd
          value: 'true'
    pathPrefix: '/var/log/kube-audit'
[...]

RKE2 Clusters

additionalLoggingSources:
  [...]
  kubeAudit:
    auditFilename: 'audit.log'
    enabled: true
    loggingRef: "kubeauditlogging"
    fluentbit:
      loggingRef: "kubeauditlogging"
      logTag: kube-audit
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/controlplane
          value: 'true'
        - effect: NoExecute
          key: node-role.kubernetes.io/etcd
          value: 'true'
    pathPrefix: '/var/lib/rancher/rke2/server/logs'
[...]

K3s Clusters

additionalLoggingSources:
  [...]
  kubeAudit:
    auditFilename: 'audit.log'
    enabled: true
    loggingRef: "kubeauditlogging"
    fluentbit:
      loggingRef: "kubeauditlogging"
      logTag: kube-audit
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/controlplane
          value: 'true'
        - effect: NoExecute
          key: node-role.kubernetes.io/etcd
          value: 'true'
    pathPrefix: '/var/lib/rancher/k3s/server/logs'
[...]
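If you manage the chart with Helm directly rather than through the Rancher UI, the values above can be applied with a standard upgrade (a sketch assuming the chart repository was added as rancher-charts and the kubeAudit values are saved in values.yaml):

# Apply the kubeAudit values to the existing rancher-logging release
helm upgrade rancher-logging rancher-charts/rancher-logging \
  --namespace cattle-logging-system \
  --reuse-values \
  --values values.yaml

The same values can also be edited on the rancher-logging app through the Rancher UI.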

Step 3: Create routing resources (ClusterFlow/ClusterOutput)

Create a ClusterFlow and ClusterOutput that reference the loggingRef: kubeauditlogging configured above.

Apply the following manifest to the cluster, adjusting the ClusterOutput spec to match your actual destination (such as Splunk, Elasticsearch, or S3):

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: kube-audit-clusterflow
  namespace: cattle-logging-system
spec:
  globalOutputRefs:
    - file-out
  loggingRef: kubeauditlogging
---

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: file-out
  namespace: cattle-logging-system
spec:
  # Example: writing to a local file for testing.
  # In production, replace with your logging provider (Elasticsearch, Splunk, etc.).
  file:
    path: /tmp/${tag}
  loggingRef: kubeauditlogging
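
To apply and verify the routing (a sketch; the manifest filename is an assumption):

# Apply the ClusterFlow and ClusterOutput
kubectl apply -f kube-audit-routing.yaml

# Both resources should report Active once the logging operator reconciles them
kubectl -n cattle-logging-system get clusterflows,clusteroutputs

With the example file output, audit entries are written under /tmp/${tag} inside the fluentd Pod of the kube-audit logging pipeline; the exact Pod name depends on the chart release.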