docs: redirect operator docs to trivy-operator repo (#2372)
@@ -1,107 +0,0 @@
# Built-in Configuration Audit Policies

The following sections list the built-in configuration audit policies installed with trivy-operator. They are stored in the `trivy-operator-policies-config` ConfigMap created in the installation namespace (e.g. `trivy-system`). You can modify them or add new policies. For example, follow the [Writing Custom Configuration Audit Policies] tutorial to add a custom policy that checks for recommended Kubernetes labels on any resource kind.

## General

| NAME | DESCRIPTION | KINDS |
|---|---|---|
| [CPU not limited] | Enforcing CPU limits prevents DoS via resource exhaustion. | Workload |
| [CPU requests not specified] | When containers have resource requests specified, the scheduler can make better decisions about which nodes to place pods on, and how to deal with resource contention. | Workload |
| [SYS_ADMIN capability added] | SYS_ADMIN gives processes running inside the container privileges equivalent to root. | Workload |
| [Default capabilities not dropped] | The container should drop all default capabilities and add only those that are needed for its execution. | Workload |
| [Root file system is not read-only] | An immutable root file system prevents applications from writing to their local disk. This can limit intrusions, as attackers will not be able to tamper with the file system or write foreign executables to disk. | Workload |
| [Memory not limited] | Enforcing memory limits prevents DoS via resource exhaustion. | Workload |
| [Memory requests not specified] | When containers have memory requests specified, the scheduler can make better decisions about which nodes to place pods on, and how to deal with resource contention. | Workload |
| [hostPath volume mounted with docker.sock] | Mounting docker.sock from the host can give the container full root access to the host. | Workload |
| [Runs with low group ID] | Force the container to run with group ID > 10000 to avoid conflicts with the host's user table. | Workload |
| [Runs with low user ID] | Force the container to run with user ID > 10000 to avoid conflicts with the host's user table. | Workload |
| [Tiller Is Deployed] | Checks whether the Helm Tiller component is deployed. | Workload |
| [Image tag ':latest' used] | It is best to avoid using the ':latest' image tag when deploying containers in production. Doing so makes it hard to track which version of the image is running, and hard to roll back. | Workload |

## Advanced

| NAME | DESCRIPTION | KINDS |
|---|---|---|
| [Unused capabilities should be dropped (drop any)] | Security best practices require containers to run with the minimal required capabilities. | Workload |
| [hostAliases is set] | Managing /etc/hosts aliases can prevent the container engine from modifying the file after a pod's containers have already been started. | Workload |
| [User Pods should not be placed in kube-system namespace] | Ensure that user Pods are not placed in the kube-system namespace. | Workload |
| [Protecting Pod service account tokens] | Ensure that Pod specifications disable automatic mounting of the service account token by setting `automountServiceAccountToken: false`. | Workload |
| [Selector usage in network policies] | Ensure that network policy selectors are applied to pods or namespaces to restrict ingress and egress traffic within the pod network. | NetworkPolicy |
| [limit range usage] | Ensure a LimitRange policy is configured to limit resource usage for namespaces or nodes. | LimitRange |
| [resource quota usage] | Ensure a ResourceQuota policy is configured to limit aggregate resource usage within a namespace. | ResourceQuota |
| [All container images must start with the *.azurecr.io domain] | Containers should only use images from trusted registries. | Workload |
| [All container images must start with a GCR domain] | Containers should only use images from trusted GCR registries. | Workload |

## Pod Security Standard

### Baseline

| NAME | DESCRIPTION | KINDS |
|---|---|---|
| [Access to host IPC namespace] | Sharing the host's IPC namespace allows container processes to communicate with processes on the host. | Workload |
| [Access to host network] | Sharing the host's network namespace permits processes in the pod to communicate with processes bound to the host's loopback adapter. | Workload |
| [Access to host PID] | Sharing the host's PID namespace allows visibility on host processes, potentially leaking information such as environment variables and configuration. | Workload |
| [Privileged container] | Privileged containers share namespaces with the host system and do not offer any security. They should be used exclusively for system containers that require high privileges. | Workload |
| [Non-default capabilities added] | Adding NET_RAW or capabilities beyond the default set must be disallowed. | Workload |
| [hostPath volumes mounted] | HostPath volumes must be forbidden. | Workload |
| [Access to host ports] | HostPorts should be disallowed, or at minimum restricted to a known list. | Workload |
| [Default AppArmor profile not set] | A program inside the container can bypass AppArmor protection policies. | Workload |
| [SELinux custom options set] | Setting a custom SELinux user or role option should be forbidden. | Workload |
| [Non-default /proc masks set] | The default /proc masks are set up to reduce attack surface, and should be required. | Workload |
| [Unsafe sysctl options set] | Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed 'safe' subset. A sysctl is considered safe if it is namespaced in the container or the Pod, and it is isolated from other Pods or processes on the same Node. | Workload |

### Restricted

| NAME | DESCRIPTION | KINDS |
|---|---|---|
| [Non-ephemeral volume types used] | In addition to restricting HostPath volumes, usage of non-ephemeral volume types should be limited to those defined through PersistentVolumes. | Workload |
| [Process can elevate its own privileges] | A program inside the container can elevate its own privileges and run as root, which might give the program control over the container and node. | Workload |
| [Runs as root user] | 'runAsNonRoot' forces the running image to run as a non-root user to ensure least privileges. | Workload |
| [A root primary or supplementary GID set] | Containers should be forbidden from running with a root primary or supplementary GID. | Workload |
| [Default Seccomp profile not set] | The RuntimeDefault seccomp profile must be required, or allow specific additional profiles. | Workload |

[CPU not limited]: https://avd.aquasec.com/misconfig/kubernetes/ksv011/
[CPU requests not specified]: https://avd.aquasec.com/misconfig/kubernetes/ksv015/
[SYS_ADMIN capability added]: https://avd.aquasec.com/misconfig/kubernetes/ksv005/
[Default capabilities not dropped]: https://avd.aquasec.com/misconfig/kubernetes/ksv003/
[Root file system is not read-only]: https://avd.aquasec.com/misconfig/kubernetes/ksv014/
[Memory not limited]: https://avd.aquasec.com/misconfig/kubernetes/ksv018/
[Memory requests not specified]: https://avd.aquasec.com/misconfig/kubernetes/ksv016/
[hostPath volume mounted with docker.sock]: https://avd.aquasec.com/misconfig/kubernetes/ksv006/
[Runs with low group ID]: https://avd.aquasec.com/misconfig/kubernetes/ksv021/
[Runs with low user ID]: https://avd.aquasec.com/misconfig/kubernetes/ksv020/
[Tiller Is Deployed]: https://avd.aquasec.com/misconfig/kubernetes/ksv102/
[Image tag ':latest' used]: https://avd.aquasec.com/misconfig/kubernetes/ksv013/

[Unused capabilities should be dropped (drop any)]: https://avd.aquasec.com/misconfig/kubernetes/ksv004/
[hostAliases is set]: https://avd.aquasec.com/misconfig/kubernetes/ksv007/
[User Pods should not be placed in kube-system namespace]: https://avd.aquasec.com/misconfig/kubernetes/ksv037/
[Protecting Pod service account tokens]: https://avd.aquasec.com/misconfig/kubernetes/ksv036/
[Selector usage in network policies]: https://avd.aquasec.com/misconfig/kubernetes/ksv038/
[limit range usage]: https://avd.aquasec.com/misconfig/kubernetes/ksv039/
[resource quota usage]: https://avd.aquasec.com/misconfig/kubernetes/ksv040/
[All container images must start with the *.azurecr.io domain]: https://avd.aquasec.com/misconfig/kubernetes/ksv032/
[All container images must start with a GCR domain]: https://avd.aquasec.com/misconfig/kubernetes/ksv033/

[Access to host IPC namespace]: https://avd.aquasec.com/misconfig/kubernetes/ksv008/
[Access to host network]: https://avd.aquasec.com/misconfig/kubernetes/ksv009/
[Access to host PID]: https://avd.aquasec.com/misconfig/kubernetes/ksv010/
[Privileged container]: https://avd.aquasec.com/misconfig/kubernetes/ksv017/
[Non-default capabilities added]: https://avd.aquasec.com/misconfig/kubernetes/ksv022/
[hostPath volumes mounted]: https://avd.aquasec.com/misconfig/kubernetes/ksv023/
[Access to host ports]: https://avd.aquasec.com/misconfig/kubernetes/ksv024/
[Default AppArmor profile not set]: https://avd.aquasec.com/misconfig/kubernetes/ksv002/
[SELinux custom options set]: https://avd.aquasec.com/misconfig/kubernetes/ksv025/
[Non-default /proc masks set]: https://avd.aquasec.com/misconfig/kubernetes/ksv027/
[Unsafe sysctl options set]: https://avd.aquasec.com/misconfig/kubernetes/ksv026/

[Non-ephemeral volume types used]: https://avd.aquasec.com/misconfig/kubernetes/ksv028/
[Process can elevate its own privileges]: https://avd.aquasec.com/misconfig/kubernetes/ksv001/
[Runs as root user]: https://avd.aquasec.com/misconfig/kubernetes/ksv012/
[A root primary or supplementary GID set]: https://avd.aquasec.com/misconfig/kubernetes/ksv029/
[Default Seccomp profile not set]: https://avd.aquasec.com/misconfig/kubernetes/ksv030/
@@ -1,18 +0,0 @@

# Configuration Auditing

As your organization deploys containerized workloads in Kubernetes environments, you will be faced with many configuration choices related to images, containers, the control plane, and the data plane. Setting these configurations improperly creates a high-impact security and compliance risk. DevOps teams and platform owners need the ability to continuously assess build artifacts, workloads, and infrastructure against configuration hardening standards and to remediate any violations.

trivy-operator's configuration audit capabilities are purpose-built for Kubernetes environments. In particular, Trivy Operator continuously checks images, workloads, and Kubernetes infrastructure components against common configuration security standards and generates detailed assessment reports, which are then stored in the default Kubernetes database.

Kubernetes applications and other core configuration objects, such as Ingress, NetworkPolicy, and ResourceQuota resources, are evaluated against [Built-in Policies]. Additionally, application and infrastructure owners can integrate these reports into incident response workflows for active remediation.

[Built-in Policies]: ./built-in-policies.md
@@ -1,100 +0,0 @@

# Configuration

You can configure Trivy-Operator to control its behavior and adapt it to your needs. Aspects of the operator machinery are configured with environment variables on the operator Pod, while aspects of the scanning behavior are controlled by ConfigMaps and Secrets.

## Operator Configuration

| NAME | DEFAULT | DESCRIPTION |
|---|---|---|
| `OPERATOR_NAMESPACE` | N/A | See [Install modes](#install-modes) |
| `OPERATOR_TARGET_NAMESPACES` | N/A | See [Install modes](#install-modes) |
| `OPERATOR_EXCLUDE_NAMESPACES` | N/A | A comma-separated list of namespaces (or glob patterns) to be excluded from scanning in the all-namespaces [install mode](#install-modes) |
| `OPERATOR_SERVICE_ACCOUNT` | `trivy-operator` | The name of the service account assigned to the operator's pod |
| `OPERATOR_LOG_DEV_MODE` | `false` | The flag to enable (or disable) development mode (more human-readable output, extra stack traces and logging information, etc.) |
| `OPERATOR_SCAN_JOB_TIMEOUT` | `5m` | The length of time to wait before giving up on a scan job |
| `OPERATOR_CONCURRENT_SCAN_JOBS_LIMIT` | `10` | The maximum number of scan jobs created by the operator |
| `OPERATOR_SCAN_JOB_RETRY_AFTER` | `30s` | The duration to wait before retrying a failed scan job |
| `OPERATOR_BATCH_DELETE_LIMIT` | `10` | The maximum number of config audit reports deleted by the operator when the plugin's config has changed |
| `OPERATOR_BATCH_DELETE_DELAY` | `10s` | The duration to wait before deleting another batch of config audit reports |
| `OPERATOR_METRICS_BIND_ADDRESS` | `:8080` | The TCP address to bind to for serving [Prometheus][prometheus] metrics. It can be set to `0` to disable metrics serving |
| `OPERATOR_HEALTH_PROBE_BIND_ADDRESS` | `:9090` | The TCP address to bind to for serving health probes, i.e. the `/healthz/` and `/readyz/` endpoints |
| `OPERATOR_VULNERABILITY_SCANNER_ENABLED` | `true` | The flag to enable the vulnerability scanner |
| `OPERATOR_CONFIG_AUDIT_SCANNER_ENABLED` | `false` | The flag to enable the configuration audit scanner |
| `OPERATOR_CONFIG_AUDIT_SCANNER_SCAN_ONLY_CURRENT_REVISIONS` | `false` | The flag to make the config audit scanner scan only the current revision of a deployment |
| `OPERATOR_CONFIG_AUDIT_SCANNER_BUILTIN` | `true` | The flag to enable the built-in configuration audit scanner |
| `OPERATOR_VULNERABILITY_SCANNER_SCAN_ONLY_CURRENT_REVISIONS` | `false` | The flag to make the vulnerability scanner scan only the current revision of a deployment |
| `OPERATOR_VULNERABILITY_SCANNER_REPORT_TTL` | `""` | How long a vulnerability report should exist. When an old report is deleted, a new one is created by the controller. Set it to `""` to disable the TTL for the vulnerability scanner |
| `OPERATOR_LEADER_ELECTION_ENABLED` | `false` | The flag to enable leader election between operator replicas |
| `OPERATOR_LEADER_ELECTION_ID` | `trivy-operator-lock` | The name of the resource lock for leader election |
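
The `OPERATOR_EXCLUDE_NAMESPACES` value accepts glob patterns such as `kube-*`. As a rough sketch of how a comma-separated pattern list matches namespace names (illustrative shell only, not the operator's actual implementation):

```shell
# Return success (0) if the namespace matches any comma-separated glob pattern.
# Illustrative sketch only; the operator implements this matching internally.
namespace_excluded() {
  ns="$1"
  patterns="$2"
  old_ifs="$IFS"
  IFS=','
  for p in $patterns; do
    case "$ns" in
      $p) IFS="$old_ifs"; return 0 ;;
    esac
  done
  IFS="$old_ifs"
  return 1
}

namespace_excluded kube-public "kube-*,trivy-system" && echo excluded
# prints: excluded
```

For example, with `OPERATOR_EXCLUDE_NAMESPACES=kube-*,trivy-system`, namespaces such as `kube-system` and `kube-public` would be skipped.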

The values of the `OPERATOR_NAMESPACE` and `OPERATOR_TARGET_NAMESPACES` environment variables determine the install mode, which in turn determines the multitenancy support of the operator.

| MODE | OPERATOR_NAMESPACE | OPERATOR_TARGET_NAMESPACES | DESCRIPTION |
|---|---|---|---|
| OwnNamespace | `operators` | `operators` | The operator watches events in the namespace it is deployed in. |
| SingleNamespace | `operators` | `foo` | The operator watches events in a single namespace that it is not deployed in. |
| MultiNamespace | `operators` | `foo,bar,baz` | The operator watches events in more than one namespace. |
| AllNamespaces | `operators` | (blank string) | The operator watches events in all namespaces. |
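
The mode selection above can be sketched as a small decision function (an illustration of the table, not operator code):

```shell
# Derive the install mode from OPERATOR_NAMESPACE and OPERATOR_TARGET_NAMESPACES.
# Sketch of the table above; the real logic lives inside the operator.
install_mode() {
  operator_ns="$1"
  target_ns="$2"
  if [ -z "$target_ns" ]; then
    echo "AllNamespaces"
  elif [ "$target_ns" = "$operator_ns" ]; then
    echo "OwnNamespace"
  else
    case "$target_ns" in
      *,*) echo "MultiNamespace" ;;
      *)   echo "SingleNamespace" ;;
    esac
  fi
}

install_mode operators "foo,bar,baz"   # prints: MultiNamespace
```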

## Example - configure namespaces to scan

To change the target namespaces from all namespaces to only the `default` namespace, edit the `trivy-operator` Deployment and change the value of the `OPERATOR_TARGET_NAMESPACES` environment variable from the blank string (`""`) to `default`.
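
In the Deployment manifest, the change amounts to a fragment like this (a sketch; the surrounding container fields are omitted):

```yaml
# Fragment of the trivy-operator Deployment's container spec (illustrative)
env:
  - name: OPERATOR_TARGET_NAMESPACES
    value: "default"
```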

## Scanning configuration

| CONFIGMAP KEY | DEFAULT | DESCRIPTION |
|---|---|---|
| `vulnerabilityReports.scanner` | `Trivy` | The name of the plugin that generates vulnerability reports. Either `Trivy` or `Aqua`. |
| `vulnerabilityReports.scanJobsInSameNamespace` | `"false"` | Whether to run vulnerability scan jobs in the same namespace as the workload. Set to `"true"` to enable. |
| `scanJob.tolerations` | N/A | JSON representation of the [tolerations] to be applied to the scanner pods so that they can run on nodes with matching taints. Example: `'[{"key":"key1", "operator":"Equal", "value":"value1", "effect":"NoSchedule"}]'` |
| `scanJob.annotations` | N/A | One-line comma-separated representation of the annotations to apply to the scanner pods. Example: `foo=bar,env=stage` annotates the scanner pods with `foo: bar` and `env: stage`. |
| `scanJob.templateLabel` | N/A | One-line comma-separated representation of the template labels to apply to the scanner pods. Example: `foo=bar,env=stage` labels the scanner pods with `foo: bar` and `env: stage`. |
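
The one-line `key=value,key=value` format used by `scanJob.annotations` and `scanJob.templateLabel` can be expanded mechanically; a small illustration (not operator code):

```shell
# Expand "k=v,k=v" into one "k: v" line per entry, mirroring how the values
# end up on the scanner pods. Illustrative only.
expand_pairs() {
  echo "$1" | tr ',' '\n' | sed 's/=/: /'
}

expand_pairs "foo=bar,env=stage"
# prints:
# foo: bar
# env: stage
```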

## Example - patch ConfigMap

By default, Trivy displays vulnerabilities of all severity levels (`UNKNOWN`, `LOW`, `MEDIUM`, `HIGH`, `CRITICAL`). To display only `HIGH` and `CRITICAL` vulnerabilities, patch the `trivy.severity` value in the `trivy-operator-trivy-config` ConfigMap:

```bash
kubectl patch cm trivy-operator-trivy-config -n trivy-operator \
  --type merge \
  -p "$(cat <<EOF
{
  "data": {
    "trivy.severity": "HIGH,CRITICAL"
  }
}
EOF
)"
```

## Example - patch Secret

To set the GitHub token used by the Trivy scanner, add the `trivy.githubToken` value to the `trivy-operator-trivy-config` Secret:

```bash
kubectl patch secret trivy-operator-trivy-config -n trivy-operator \
  --type merge \
  -p "$(cat <<EOF
{
  "data": {
    "trivy.githubToken": "$(echo -n <your token> | base64)"
  }
}
EOF
)"
```
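
Secret `data` values must be base64-encoded, which is why the patch above pipes the token through `base64`. A quick round-trip sanity check with a placeholder value (never paste a real token; note that `base64 -d` is the GNU coreutils flag, BSD variants may use `-D`):

```shell
# Placeholder token for illustration only -- not a real GitHub token.
token="example-token"
encoded=$(printf '%s' "$token" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$token" ] && echo "round-trip OK"
# prints: round-trip OK
```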

## Example - delete a key

The following `kubectl patch` command deletes the `trivy.httpProxy` key:

```bash
kubectl patch cm trivy-operator-trivy-config -n trivy-operator \
  --type json \
  -p '[{"op": "remove", "path": "/data/trivy.httpProxy"}]'
```

[tolerations]: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration
[prometheus]: https://github.com/prometheus
@@ -1,195 +0,0 @@

# Getting Started

## Before you Begin

You need a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by installing [minikube], [kind], or [microk8s], or you can use the following [Kubernetes playground].

You also need Trivy-Operator installed in the `trivy-system` namespace, e.g. with [kubectl](./installation/kubectl.md) or [Helm](./installation/helm.md). Let's also assume that the operator is configured to discover built-in Kubernetes resources in all namespaces, except `kube-system` and `trivy-system`.

## Workloads Scanning

Let's create an `nginx` Deployment that we know is vulnerable:

```
kubectl create deployment nginx --image nginx:1.16
```

When the `nginx` Deployment is created, the operator immediately detects its current revision (aka the active ReplicaSet) and scans the `nginx:1.16` image for vulnerabilities. It also audits the ReplicaSet's specification for common pitfalls, such as running the `nginx` container as root.

If everything goes fine, the operator saves the scan reports as VulnerabilityReport and ConfigAuditReport resources in the `default` namespace. Reports are named after the scanned ReplicaSet. For image vulnerability scans, the operator creates a VulnerabilityReport for each container. In this example there is just one container image, called `nginx`:

```
kubectl get vulnerabilityreports -o wide
```

<details>
<summary>Result</summary>

```
NAME                                REPOSITORY      TAG    SCANNER   AGE   CRITICAL   HIGH   MEDIUM   LOW   UNKNOWN
replicaset-nginx-78449c65d4-nginx   library/nginx   1.16   Trivy     85s   33         62     49       114   1
```

</details>

```
kubectl get configauditreports -o wide
```

<details>
<summary>Result</summary>

```
NAME                          SCANNER          AGE    CRITICAL   HIGH   MEDIUM   LOW
replicaset-nginx-78449c65d4   Trivy-Operator   2m7s   0          0      6        7
```

</details>
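
The naming convention visible above appears to be `<workload-kind>-<workload-name>` for ConfigAuditReports, with the container name appended for VulnerabilityReports. A sketch of that assumption (inferred from the names shown, not taken from operator source):

```shell
# Construct report names the way the examples above suggest. Assumption:
# the kind is lowercased and parts are joined with "-".
lower() { echo "$1" | tr '[:upper:]' '[:lower:]'; }
vuln_report_name()  { echo "$(lower "$1")-$2-$3"; }
audit_report_name() { echo "$(lower "$1")-$2"; }

vuln_report_name ReplicaSet nginx-78449c65d4 nginx
# prints: replicaset-nginx-78449c65d4-nginx
```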

Notice that scan reports generated by the operator are controlled by Kubernetes workloads. In our example, the VulnerabilityReport and ConfigAuditReport resources are controlled by the active ReplicaSet of the `nginx` Deployment:

```console
kubectl tree deploy nginx
```

<details>
<summary>Result</summary>

```
NAMESPACE  NAME                                                      READY  REASON  AGE
default    Deployment/nginx                                          -              7h2m
default    └─ReplicaSet/nginx-78449c65d4                             -              7h2m
default      ├─ConfigAuditReport/replicaset-nginx-78449c65d4         -              2m31s
default      ├─Pod/nginx-78449c65d4-5wvdx                            True           7h2m
default      └─VulnerabilityReport/replicaset-nginx-78449c65d4-nginx -              2m7s
```

</details>

!!! note
    The [tree] command is a kubectl plugin to browse Kubernetes object hierarchies as a tree.

Moving forward, let's update the container image of the `nginx` Deployment from `nginx:1.16` to `nginx:1.17`. This will trigger a rolling update of the Deployment and eventually create another ReplicaSet:

```
kubectl set image deployment nginx nginx=nginx:1.17
```

This time, too, the operator picks up the change and rescans our Deployment with the updated configuration:

```console
kubectl tree deploy nginx
```

<details>
<summary>Result</summary>

```
NAMESPACE  NAME                                                        READY  REASON  AGE
default    Deployment/nginx                                            -              7h5m
default    ├─ReplicaSet/nginx-5fbc65fff                                -              2m36s
default    │ ├─ConfigAuditReport/replicaset-nginx-5fbc65fff            -              2m36s
default    │ ├─Pod/nginx-5fbc65fff-j7zl2                               True           2m36s
default    │ └─VulnerabilityReport/replicaset-nginx-5fbc65fff-nginx    -              2m22s
default    └─ReplicaSet/nginx-78449c65d4                               -              7h5m
default      ├─ConfigAuditReport/replicaset-nginx-78449c65d4           -              5m46s
default      └─VulnerabilityReport/replicaset-nginx-78449c65d4-nginx   -              5m22s
```

</details>

By following this guide you may have realized that the operator knows how to attach VulnerabilityReport and ConfigAuditReport resources to built-in Kubernetes objects. What's more, because a custom resource inherits the lifecycle of the built-in resource, we can leverage Kubernetes garbage collection. For example, when the previous ReplicaSet named `nginx-78449c65d4` is deleted, the VulnerabilityReport named `replicaset-nginx-78449c65d4-nginx` as well as the ConfigAuditReport named `replicaset-nginx-78449c65d4` are automatically garbage collected.
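
Garbage collection works because each report carries an owner reference to its ReplicaSet; when the owner disappears, Kubernetes deletes the dependents. An illustrative metadata fragment (field values assumed for this example):

```yaml
# Illustrative VulnerabilityReport metadata (the uid value is hypothetical)
metadata:
  name: replicaset-nginx-78449c65d4-nginx
  namespace: default
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: nginx-78449c65d4
      controller: true
      blockOwnerDeletion: false
      uid: 0a1b2c3d-aaaa-bbbb-cccc-000000000000
```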

!!! tip
    If you only want the latest ReplicaSet in your Deployment to be scanned for vulnerabilities, set the value of the `OPERATOR_VULNERABILITY_SCANNER_SCAN_ONLY_CURRENT_REVISIONS` environment variable to `true` in the operator's deployment descriptor. This is useful to identify vulnerabilities that impact only the running workloads.

!!! tip
    If you only want the latest ReplicaSet in your Deployment to be scanned for config audit issues, set the value of the `OPERATOR_CONFIG_AUDIT_SCANNER_SCAN_ONLY_CURRENT_REVISIONS` environment variable to `true` in the operator's deployment descriptor. This is useful to identify config issues that impact only the running workloads.

!!! tip
    You can get and describe `vulnerabilityreports` and `configauditreports` as built-in Kubernetes objects:
    ```
    kubectl get vulnerabilityreport replicaset-nginx-5fbc65fff-nginx -o json
    kubectl describe configauditreport replicaset-nginx-5fbc65fff
    ```

Notice that scaling up the `nginx` Deployment will not schedule new scans, because all replica Pods refer to the same Pod template defined by the `nginx-5fbc65fff` ReplicaSet:

```
kubectl scale deploy nginx --replicas 3
```

```
kubectl tree deploy nginx
```

<details>
<summary>Result</summary>

```
NAMESPACE  NAME                                                        READY  REASON  AGE
default    Deployment/nginx                                            -              7h6m
default    ├─ReplicaSet/nginx-5fbc65fff                                -              4m7s
default    │ ├─ConfigAuditReport/replicaset-nginx-5fbc65fff            -              4m7s
default    │ ├─Pod/nginx-5fbc65fff-458n7                               True           8s
default    │ ├─Pod/nginx-5fbc65fff-fk847                               True           8s
default    │ ├─Pod/nginx-5fbc65fff-j7zl2                               True           4m7s
default    │ └─VulnerabilityReport/replicaset-nginx-5fbc65fff-nginx    -              3m53s
default    └─ReplicaSet/nginx-78449c65d4                               -              7h6m
default      ├─ConfigAuditReport/replicaset-nginx-78449c65d4           -              7m17s
default      └─VulnerabilityReport/replicaset-nginx-78449c65d4-nginx   -              6m53s
```

</details>

Finally, when you delete the `nginx` Deployment, orphaned security reports are deleted in the background by the Kubernetes garbage collection controller:

```
kubectl delete deploy nginx
```

```console
kubectl get vuln,configaudit
```

<details>
<summary>Result</summary>

```
No resources found in default namespace.
```

</details>

!!! Tip
    Use `vuln` and `configaudit` as short names for the `vulnerabilityreports` and `configauditreports` resources.

!!! Note
    You can define the validity period for VulnerabilityReports by setting a duration as the value of the
    `OPERATOR_VULNERABILITY_SCANNER_REPORT_TTL` environment variable. For example, setting the value to `24h`
    would delete reports after 24 hours. When a VulnerabilityReport gets deleted, Trivy-Operator will automatically
    create a new one for the underlying workload.
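
TTL values such as `24h` use the same duration syntax as `OPERATOR_SCAN_JOB_TIMEOUT` (`5m`) and `OPERATOR_SCAN_JOB_RETRY_AFTER` (`30s`). A rough single-unit conversion for intuition (sketch only; the operator parses these values itself):

```shell
# Convert a single-unit duration (h, m or s suffix) to seconds. Sketch only.
duration_to_seconds() {
  case "$1" in
    *h) echo $(( ${1%h} * 3600 )) ;;
    *m) echo $(( ${1%m} * 60 )) ;;
    *s) echo "${1%s}" ;;
    *)  echo "unsupported" >&2; return 1 ;;
  esac
}

duration_to_seconds 24h   # prints: 86400
```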

## What's Next?

- Find out how the operator scans workloads that use container images from [Private Registries].
- By default, the operator uses Trivy as the [Vulnerability Scanner] and Polaris as the [Configuration Checker], but you can choose other tools that are integrated with Trivy-Operator, or even implement your own plugin.

[minikube]: https://minikube.sigs.k8s.io/docs/
[kind]: https://kind.sigs.k8s.io/docs/
[microk8s]: https://microk8s.io/
[Kubernetes playground]: http://labs.play-with-k8s.com/
[tree]: https://github.com/ahmetb/kubectl-tree
@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 22 22"><path d="M19.90856,11.37359l-.94046,1.16318.04433.42088a.66075.66075,0,0,1,.00653.25385l-.00778.04071a.66193.66193,0,0,1-.08906.21314c-.01313.01986-.027.03932-.0384.0537l-4.57928,5.69351a.70189.70189,0,0,1-.53066.25266l-7.34439-.00171a.70458.70458,0,0,1-.52974-.25154L1.32209,13.51754a.64957.64957,0,0,1-.096-.16658.71032.71032,0,0,1-.02863-.08952.66205.66205,0,0,1-.00515-.30511l1.6348-7.10077a.66883.66883,0,0,1,.1355-.274.65915.65915,0,0,1,.22568-.17666L9.80881,2.24386a.69063.69063,0,0,1,.29475-.0667l.00515.0002.03424.00112a.68668.68668,0,0,1,.25649.06544l6.61569,3.161a.66765.66765,0,0,1,.21678.165.675.675,0,0,1,.14909.29139l.60521,2.64815,1.1606-.20569-.61853-2.70614a1.85372,1.85372,0,0,0-1.00544-1.25474l-6.616-3.16113a1.84812,1.84812,0,0,0-.67883-.17726l-.03061-.00218c-.02692-.00125-.05416-.00152-.05851-.00152L10.10146,1a1.87317,1.87317,0,0,0-.80022.18175l-6.62038,3.161a1.83083,1.83083,0,0,0-.62572.48916,1.84956,1.84956,0,0,0-.37523.75964L.04518,12.69226a1.84474,1.84474,0,0,0,.00956.8516,1.88289,1.88289,0,0,0,.07772.24244,1.826,1.826,0,0,0,.27219.46878L4.98281,19.9503a1.8815,1.8815,0,0,0,1.4473.6903l7.34394.00172a1.87874,1.87874,0,0,0,1.4475-.69182l4.58278-5.698c.03609-.04578.07026-.093.10252-.14243a1.82018,1.82018,0,0,0,.25207-.59695c.00805-.03517.01484-.07079.021-.10773a1.8273,1.8273,0,0,0-.02032-.71135Z" style="fill:#fff"/><polygon points="9.436 4.863 9.332 11.183 12.92 10.115 9.436 4.863" style="fill:#fff"/><polygon points="7.913 11.605 8.265 11.5 8.617 11.395 8.629 11.392 8.74 4.605 8.753 3.838 8.384 4.915 8.015 5.994 5.964 11.986 6.684 11.971 7.913 11.605" style="fill:#fff"/><polygon points="5.738 13.279 5.888 12.956 6.014 12.685 5.723 12.691 5.352 12.699 5.06 12.705 1.918 12.771 4.498 15.952 5.588 13.603 5.738 13.279" style="fill:#fff"/><polygon points="14.026 10.516 13.675 10.621 13.324 10.725 9.32 11.917 8.969 12.021 8.617 12.126 8.604 12.13 8.252 12.235 7.9 12.339 7.593 12.431 7.894 12.688 8.238 
12.982 8.583 13.277 8.598 13.289 8.943 13.584 9.288 13.879 9.61 14.154 9.896 14.398 10.183 14.643 14.064 17.958 22 8.143 14.026 10.516" style="fill:#fff"/><polygon points="9.273 14.787 9.229 14.749 8.943 14.505 8.928 14.492 8.583 14.197 8.567 14.183 8.222 13.889 7.877 13.594 7.362 13.154 7.086 12.919 6.81 12.683 6.794 12.669 6.641 12.998 6.488 13.328 6.468 13.371 6.318 13.694 6.168 14.017 4.989 16.557 4.989 16.558 4.99 16.558 4.992 16.559 5.341 16.638 5.691 16.716 12.164 18.175 12.895 18.339 13.625 18.504 9.516 14.994 9.273 14.787" style="fill:#fff"/></svg>
@@ -1,8 +1,10 @@

# Trivy Operator

Trivy has a native [Kubernetes Operator](operator) which continuously scans your Kubernetes cluster for security issues and generates security reports as Kubernetes [Custom Resources](crd). It does so by watching Kubernetes for state changes and automatically triggering scans in response, for example initiating a vulnerability scan when a new Pod is created.

> Trivy Operator is based on the existing Aqua OSS project [Starboard], and shares some of its design, principles, and code. Existing content that relates to the Starboard Operator may also be relevant to Trivy Operator. To learn more about the transition from Starboard to Trivy, see the [announcement discussion](starboard-announcement).

> Kubernetes-native security toolkit. ([Documentation](https://aquasecurity.github.io/trivy-operator/latest)).

<figure>
<img src="./images/operator/trivy-operator-workloads.png" />
</figure>
@@ -1,90 +0,0 @@
# Helm

[Helm], a popular package manager for Kubernetes, allows installing applications from parameterized YAML manifests called Helm [charts].

The Helm chart is available on GitHub in [https://github.com/aquasecurity/trivy-operator](https://github.com/aquasecurity/trivy-operator) under `/deploy/helm`, and is also hosted in a chart repository for your convenience at [https://aquasecurity.github.io/helm-charts/](https://aquasecurity.github.io/helm-charts/).

## Example - Chart repository

This will install the operator in the `trivy-system` namespace and configure it to scan all namespaces, except `kube-system` and `trivy-system`:

```bash
helm repo add aqua https://aquasecurity.github.io/helm-charts/
helm repo update
helm install trivy-operator aqua/trivy-operator \
  --namespace trivy-system \
  --create-namespace \
  --set="trivy.ignoreUnfixed=true" \
  --version {{ var.operator_version }}
```

## Example - Download the chart

This will install the operator in the `trivy-system` namespace and configure it to scan all namespaces, except `kube-system` and `trivy-system`:

```bash
git clone --depth 1 --branch {{ var.operator_version }} https://github.com/aquasecurity/trivy-operator.git
cd trivy-operator
helm install trivy-operator ./deploy/helm \
  --namespace trivy-system \
  --create-namespace \
  --set="trivy.ignoreUnfixed=true"
```

## Post install sanity check

Check that the `trivy-operator` Helm release is created in the `trivy-system` namespace and has status `deployed`:

```console
$ helm list -n trivy-system
NAME            NAMESPACE     REVISION  UPDATED                               STATUS    CHART                                      APP VERSION
trivy-operator  trivy-system  1         2021-01-27 20:09:53.158961 +0100 CET  deployed  trivy-operator-{{ var.operator_version }}  {{ var.operator_version[1:] }}
```

To confirm that the operator is running, check that the `trivy-operator` Deployment in the `trivy-system` namespace is available and all its containers are ready:

```console
$ kubectl get deployment -n trivy-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
trivy-operator   1/1     1            1           11m
```

If for some reason it's not ready yet, check the logs of the Deployment for errors:

```bash
kubectl logs deployment/trivy-operator -n trivy-system
```

## Advanced Configuration

The Helm chart supports all available [installation modes](./../configuration.md#install-modes) of the Trivy Operator.

Please refer to the chart's [values] file for configuration options.
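Beyond individual `--set` flags, values can be collected in a file and passed to Helm with `-f`. A minimal sketch; the `targetNamespaces` and `trivy.*` key names are assumptions here, so confirm them against the chart's [values] file before use:

```bash
# Write a hypothetical custom values file; key names are assumptions --
# verify them against the chart's values.yaml.
cat > trivy-operator-values.yaml <<'EOF'
targetNamespaces: "default,prod"
trivy:
  severity: "HIGH,CRITICAL"
  ignoreUnfixed: true
EOF

# Then apply it (requires a cluster):
#   helm upgrade --install trivy-operator aqua/trivy-operator \
#     --namespace trivy-system -f trivy-operator-values.yaml
```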

## Uninstall

You can uninstall the operator with the following command:

```bash
helm uninstall trivy-operator -n trivy-system
```

You have to manually delete the custom resource definitions created by the `helm install` command:

!!! danger
    Deleting custom resource definitions will also delete all security reports generated by the operator.

```bash
kubectl delete crd vulnerabilityreports.aquasecurity.github.io
kubectl delete crd clustervulnerabilityreports.aquasecurity.github.io
kubectl delete crd configauditreports.aquasecurity.github.io
kubectl delete crd clusterconfigauditreports.aquasecurity.github.io
kubectl delete crd clustercompliancereports.aquasecurity.github.io
kubectl delete crd clustercompliancedetailreports.aquasecurity.github.io
```

[Helm]: https://helm.sh/
[charts]: https://helm.sh/docs/topics/charts/
[values]: https://raw.githubusercontent.com/aquasecurity/trivy-operator/{{ var.operator_version }}/deploy/helm/values.yaml
@@ -1,45 +0,0 @@
# kubectl

Kubernetes YAML deployment files are available on GitHub in [https://github.com/aquasecurity/trivy-operator](https://github.com/aquasecurity/trivy-operator) under `/deploy/static`.

## Example - Deploy from GitHub

This will install the operator in the `trivy-system` namespace and configure it to scan all namespaces, except `kube-system` and `trivy-system`:

```bash
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/trivy-operator/{{ var.operator_version }}/deploy/static/trivy-operator.yaml
```

To confirm that the operator is running, check that the `trivy-operator` Deployment in the `trivy-system` namespace is available and all its containers are ready:

```console
$ kubectl get deployment -n trivy-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
trivy-operator   1/1     1            1           11m
```

If for some reason it's not ready yet, check the logs of the `trivy-operator` Deployment for errors:

```bash
kubectl logs deployment/trivy-operator -n trivy-system
```

## Advanced Configuration

You can configure the Trivy Operator to control its behavior and adapt it to your needs. Aspects of the operator machinery are configured using environment variables on the operator Pod, while aspects of the scanning behavior are controlled by ConfigMaps and Secrets. To learn more, please refer to the [Configuration](config) documentation.

## Uninstall

!!! danger
    Uninstalling the operator and deleting its custom resource definitions will also delete all generated security reports.

You can uninstall the operator with the following command:

```bash
kubectl delete -f https://raw.githubusercontent.com/aquasecurity/trivy-operator/{{ var.operator_version }}/deploy/static/trivy-operator.yaml
```

[Settings]: ./../../settings.md
[Helm]: ./helm.md
@@ -1,10 +0,0 @@
# Upgrade

We recommend that you upgrade the Trivy Operator often to stay up to date with the latest fixes and enhancements.

However, at this stage we do not provide automated upgrades. Therefore, uninstall the previous version of the operator before you install the latest release.
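For a Helm-based deployment, the reinstall flow can be sketched as follows, reusing the commands from the Helm installation guide (requires access to a cluster):

```bash
# Remove the previous release; CRDs and generated reports are kept
# unless you delete them separately.
helm uninstall trivy-operator -n trivy-system

# Install the latest release from the chart repository.
helm repo update
helm install trivy-operator aqua/trivy-operator \
  --namespace trivy-system \
  --create-namespace
```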

!!! warning
    Consult the release notes and changelog to revisit and migrate configuration settings which may not be compatible between different versions.
@@ -1,106 +0,0 @@
# Troubleshooting the Trivy Operator

The Trivy Operator installs several Kubernetes resources into your Kubernetes cluster. Here are the common steps to check whether the operator is running correctly and to troubleshoot common issues.

In addition to this section, you might want to check the [issues](https://github.com/aquasecurity/trivy/issues), [discussion forum](https://github.com/aquasecurity/trivy/discussions), or [Slack](https://slack.aquasec.com) to see whether someone from the community has had similar problems before.

Also note that the Trivy Operator is based on the existing Aqua OSS project [Starboard], and shares some of its design, principles, and code. Existing content that relates to the Starboard Operator might also be relevant for the Trivy Operator, and Starboard's [issues](https://github.com/aquasecurity/starboard/issues), [discussion forum](https://github.com/aquasecurity/starboard/discussions), or [Slack](https://slack.aquasec.com) might also be worth checking. In some cases you might want to refer to [Starboard's design documents](https://aquasecurity.github.io/starboard/latest/design/).

## Installation

Make sure that the latest version of the Trivy Operator is installed. For this, have a look at the installation [options](./installation/helm.md).

For instance, if you are using the Helm deployment, check the Helm chart version deployed to your cluster:

```bash
helm list -n trivy-system
```

## Operator Pod Not Running

The Trivy Operator runs a pod inside your cluster. If you have followed the installation guide, you will have installed the operator to the `trivy-system` namespace.

Make sure that the pod is in the `Running` status:

```bash
kubectl get pods -n trivy-system
```

This is how it will look if it is running okay:

```console
NAMESPACE      NAME                              READY   STATUS    RESTARTS      AGE
trivy-system   trivy-operator-6c9bd97d58-hsz4g   1/1     Running   5 (19m ago)   30h
```

If the pod is in `Failed`, `Pending`, or `Unknown`, check the events and the logs of the pod.

First, check the events, since they might be more descriptive of the problem:

```bash
kubectl describe pod <POD-NAME> -n trivy-system
```

If the events do not give a clear reason why the pod cannot spin up, check the logs, which provide more detail:

```bash
kubectl logs deployment/trivy-operator -n trivy-system
```

If your pod is not running, look for errors, as they can give an indication of the problem.

If there are too many log messages, try deleting the Trivy Operator pod and observing its behavior upon restarting. A new pod should spin up automatically after the failed pod is deleted.
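A restart can be forced by deleting the pod via its labels; the `app.kubernetes.io/name` label used below is an assumption about the chart's defaults, so verify it first with `kubectl get pods --show-labels`:

```bash
# Delete the operator pod; the Deployment's ReplicaSet recreates it.
kubectl delete pod -n trivy-system -l app.kubernetes.io/name=trivy-operator
```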

## ImagePullBackOff or ErrImagePull

Check the status of the Trivy Operator pod running inside your Kubernetes cluster. If the status is `ImagePullBackOff` or `ErrImagePull`, it means that the operator either

* tries to access the wrong image, or
* cannot pull the image from the registry.

Make sure that you are providing the right resources when installing the Trivy Operator.

## CrashLoopBackOff

If your pod is in `CrashLoopBackOff`, its container starts but then crashes repeatedly on the node it is scheduled on. In this case, you want to investigate further whether there is an issue with the pod or with the node it runs on. It could for instance be the case that the node does not have sufficient resources.

## Reconciliation Error

It could happen that the pod appears to be running normally but does not reconcile the resources inside your Kubernetes cluster.

Check the logs for reconciliation errors:

```bash
kubectl logs deployment/trivy-operator -n trivy-system
```

If this is the case, the Trivy Operator likely does not have the right configuration to access your resources.

## Operator does not Create VulnerabilityReports

VulnerabilityReports are owned and controlled by the immediate Kubernetes workload. Every VulnerabilityReport of a pod is thus linked to a [ReplicaSet](./index.md). In case the Trivy Operator does not create a VulnerabilityReport for your workloads, it could be that it is not monitoring the namespace that your workloads are running in.

An easy way to check this is by looking for the `ClusterRoleBinding` for the Trivy Operator:

```bash
kubectl get ClusterRoleBinding | grep "trivy-operator"
```

Alternatively, you could use the `kubectl-who-can` [plugin by Aqua](https://github.com/aquasecurity/kubectl-who-can):

```console
$ kubectl who-can list vulnerabilityreports
No subjects found with permissions to list vulnerabilityreports assigned through RoleBindings

CLUSTERROLEBINDING                           SUBJECT                         TYPE            SA-NAMESPACE
cluster-admin                                system:masters                  Group
trivy-operator                               trivy-operator                  ServiceAccount  trivy-system
system:controller:generic-garbage-collector  generic-garbage-collector       ServiceAccount  kube-system
system:controller:namespace-controller       namespace-controller            ServiceAccount  kube-system
system:controller:resourcequota-controller   resourcequota-controller        ServiceAccount  kube-system
system:kube-controller-manager               system:kube-controller-manager  User
```

If the `ClusterRoleBinding` does not exist, the Trivy Operator currently cannot monitor any namespace outside of the `trivy-system` namespace.

For instance, if you are using the [Helm chart](./installation/helm.md), make sure to set the `targetNamespace` value to the namespace that you want the operator to monitor.
@@ -1,109 +0,0 @@
# Vulnerability Scanning Configuration

## Standalone

The default configuration settings enable the Trivy `vulnerabilityReports.scanner` in [`Standalone`][trivy-standalone] `trivy.mode`. Even though it doesn't require any additional setup, it's the least efficient method. Each Pod created by a scan Job has an init container that downloads the Trivy vulnerabilities database from the GitHub releases page and stores it in the local file system of an [emptyDir volume]. This volume is then shared with the containers that perform the actual scanning. Finally, the Pod is deleted along with the emptyDir volume.



The number of containers defined by a scan Job equals the number of containers defined by the scanned Kubernetes workload, so the cache in this mode is useful only if the workload defines multiple containers.

Beyond that, frequent downloads from GitHub might lead to a [rate limiting] problem. The limits are imposed by GitHub on all anonymous requests originating from a given IP. To mitigate such problems, you can add the `trivy.githubToken` key to the `trivy-operator` secret:

```bash
kubectl patch secret trivy-operator-trivy-config -n trivy-operator \
  --type merge \
  -p "$(cat <<EOF
{
  "data": {
    "trivy.githubToken": "$(echo -n <GITHUB_TOKEN> | base64)"
  }
}
EOF
)"
```
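The patch body above simply base64-encodes the token, since Kubernetes Secret `data` values must be base64-encoded. A quick sketch of how that payload is built (the token value is hypothetical):

```python
import base64
import json


def github_token_patch(token: str) -> str:
    """Build the JSON merge-patch body for the trivy.githubToken secret key."""
    # Secret data values must be base64-encoded strings.
    encoded = base64.b64encode(token.encode()).decode()
    return json.dumps({"data": {"trivy.githubToken": encoded}})


# Hypothetical token, for illustration only.
payload = github_token_patch("ghp_example123")
print(payload)
```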

## ClientServer

You can connect Trivy to an external Trivy server by changing the default `trivy.mode` from [`Standalone`][trivy-standalone] to [`ClientServer`][trivy-clientserver] and specifying `trivy.serverURL`:

```bash
kubectl patch cm trivy-operator-trivy-config -n trivy-operator \
  --type merge \
  -p "$(cat <<EOF
{
  "data": {
    "trivy.mode": "ClientServer",
    "trivy.serverURL": "<TRIVY_SERVER_URL>"
  }
}
EOF
)"
```

The Trivy server could be your own deployment, or it could be an external service. See the [Trivy server][trivy-clientserver] documentation for more information.

If the server requires an access token and/or custom HTTP authentication headers, you may add the `trivy.serverToken` and `trivy.serverCustomHeaders` properties to the Trivy Operator secret:

```bash
kubectl patch secret trivy-operator-trivy-config -n trivy-operator \
  --type merge \
  -p "$(cat <<EOF
{
  "data": {
    "trivy.serverToken": "$(echo -n <SERVER_TOKEN> | base64)",
    "trivy.serverCustomHeaders": "$(echo -n x-api-token:<X_API_TOKEN> | base64)"
  }
}
EOF
)"
```



## Settings

| CONFIGMAP KEY | DEFAULT | DESCRIPTION |
|---|---|---|
| `trivy.imageRef` | `docker.io/aquasec/trivy:0.25.2` | Trivy image reference |
| `trivy.dbRepository` | `ghcr.io/aquasecurity/trivy-db` | External OCI registry to download the vulnerability database from |
| `trivy.mode` | `Standalone` | Trivy client mode. Either `Standalone` or `ClientServer`. Depending on the active mode, other settings might be applicable or required. |
| `trivy.severity` | `UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL` | A comma-separated list of severity levels reported by Trivy |
| `trivy.ignoreUnfixed` | N/A | Whether to show only fixed vulnerabilities in vulnerabilities reported by Trivy. Set to `"true"` to enable it. |
| `trivy.skipFiles` | N/A | A comma-separated list of file paths for Trivy to skip traversal of. |
| `trivy.skipDirs` | N/A | A comma-separated list of directories for Trivy to skip traversal of. |
| `trivy.ignoreFile` | N/A | Specifies the `.trivyignore` file, which contains a list of vulnerability IDs to be ignored from vulnerabilities reported by Trivy. |
| `trivy.timeout` | `5m0s` | The duration to wait for scan completion |
| `trivy.serverURL` | N/A | The endpoint URL of the Trivy server. Required in `ClientServer` mode. |
| `trivy.serverTokenHeader` | `Trivy-Token` | The name of the HTTP header used to send the authentication token to the Trivy server. Only applicable in `ClientServer` mode when `trivy.serverToken` is specified. |
| `trivy.serverInsecure` | N/A | A flag to enable an insecure connection to the Trivy server. |
| `trivy.insecureRegistry.<id>` | N/A | A registry to which insecure connections are allowed. There can be multiple registries with different registry `<id>`s. |
| `trivy.nonSslRegistry.<id>` | N/A | A registry without SSL. There can be multiple registries with different registry `<id>`s. |
| `trivy.registry.mirror.<registry>` | N/A | Mirror for the registry `<registry>`, e.g. `trivy.registry.mirror.index.docker.io: mirror.io` would use `mirror.io` to pull images originating from `index.docker.io` |
| `trivy.httpProxy` | N/A | The HTTP proxy used by Trivy to download the vulnerabilities database from GitHub. |
| `trivy.httpsProxy` | N/A | The HTTPS proxy used by Trivy to download the vulnerabilities database from GitHub. |
| `trivy.noProxy` | N/A | A comma-separated list of IPs and domain names that are not subject to proxy settings. |
| `trivy.resources.requests.cpu` | `100m` | The minimum amount of CPU required to run the Trivy scanner pod. |
| `trivy.resources.requests.memory` | `100M` | The minimum amount of memory required to run the Trivy scanner pod. |
| `trivy.resources.limits.cpu` | `500m` | The maximum amount of CPU allowed for the Trivy scanner pod. |
| `trivy.resources.limits.memory` | `500M` | The maximum amount of memory allowed for the Trivy scanner pod. |

| SECRET KEY | DESCRIPTION |
|---|---|
| `trivy.githubToken` | The GitHub access token used by Trivy to download the vulnerabilities database from GitHub. Only applicable in `Standalone` mode. |
| `trivy.serverToken` | The token to authenticate the Trivy client with the Trivy server. Only applicable in `ClientServer` mode. |
| `trivy.serverCustomHeaders` | A comma-separated list of custom HTTP headers sent by the Trivy client to the Trivy server. Only applicable in `ClientServer` mode. |

[trivy-standalone]: https://aquasecurity.github.io/trivy/latest/modes/standalone/
[emptyDir volume]: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
[rate limiting]: https://docs.github.com/en/free-pro-team@latest/rest/overview/resources-in-the-rest-api#rate-limiting
[trivy-clientserver]: https://aquasecurity.github.io/trivy/latest/advanced/modes/client-server/
@@ -1,29 +0,0 @@
# Frequently Asked Questions

## Why do you duplicate instances of VulnerabilityReports for the same image digest?

A Docker image reference is not a first-class citizen in Kubernetes. It's a property of the container definition. The Trivy Operator relies on label selectors to associate VulnerabilityReports with the corresponding Kubernetes workloads, not with particular image references. For example, we can get all reports for the `wordpress` Deployment with the following command:

```text
kubectl get vulnerabilityreports \
  -l trivy-operator.resource.kind=Deployment \
  -l trivy-operator.resource.name=wordpress
```

Beyond that, for each instance of the VulnerabilityReports we set the owner reference pointing to the corresponding pod's controller. By doing that we can manage orphaned VulnerabilityReports and leverage Kubernetes garbage collection. For example, if the `wordpress` Deployment is deleted, all related VulnerabilityReports are automatically garbage collected.

## Why do you create an instance of the VulnerabilityReport for each container?

Partitioning the VulnerabilityReports generated for a particular Kubernetes workload by container mitigates the risk of exceeding the etcd request payload limit. By default, the payload of each Kubernetes object stored in etcd is limited to 1.5 MiB.
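Because reports are partitioned per container, a single container's report can be selected with an additional label selector. The `trivy-operator.container.name` label key below is an assumption; verify the actual label keys on your reports with `kubectl get vulnerabilityreports --show-labels`:

```text
kubectl get vulnerabilityreports \
  -l trivy-operator.resource.kind=Deployment \
  -l trivy-operator.resource.name=wordpress \
  -l trivy-operator.container.name=wordpress
```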
@@ -1,20 +0,0 @@
# Vulnerability Scanners

Vulnerability scanning is an important way to identify and remediate security gaps in Kubernetes workloads. The process involves scanning container images to check all software on them and report any vulnerabilities found.

The Trivy Operator automatically discovers and scans all images that are being used in a Kubernetes cluster, including images of application pods and system pods. Scan reports are saved as [VulnerabilityReport] resources, which are owned by a Kubernetes controller.

For example, when Trivy scans a Deployment, the corresponding VulnerabilityReport instance is attached to its current revision. In other words, the VulnerabilityReport inherits the life cycle of the Kubernetes controller. This also implies that when a Deployment is updated via a rolling update, it will get scanned automatically, and a new instance of the VulnerabilityReport will be created and attached to the new revision. On the other hand, if the previous revision is deleted, the corresponding VulnerabilityReport will be deleted automatically by the Kubernetes garbage collector.

Trivy may scan Kubernetes workloads that run images from [Private Registries] and certain [Managed Registries].

[Trivy]: ./trivy.md
[Private Registries]: ./managed-registries.md
[Managed Registries]: ./managed-registries.md
@@ -1,77 +0,0 @@
## Amazon Elastic Container Registry (ECR)

You must create an IAM OIDC identity provider for your cluster:

```sh
eksctl utils associate-iam-oidc-provider \
  --cluster <cluster_name> \
  --approve
```

Override the existing `trivy-operator` service account and attach the IAM policy to grant it permission to pull images from ECR:

```sh
eksctl create iamserviceaccount \
  --name trivy-operator \
  --namespace trivy-operator \
  --cluster <cluster_name> \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
  --approve \
  --override-existing-serviceaccounts
```

## Azure Container Registry (ACR)

Before you can start, you need to install `aad-pod-identity` inside your cluster; see the installation instructions at https://azure.github.io/aad-pod-identity/docs/getting-started/installation/.

Create a managed identity and assign it the `AcrPull` permission on the ACR:

```sh
export IDENTITY_NAME=trivy-operator-identity
export AZURE_RESOURCE_GROUP=<my_resource_group>
export AZURE_LOCATION=westeurope
export ACR_NAME=<my_azure_container_registry>

az identity create --name ${IDENTITY_NAME} --resource-group ${AZURE_RESOURCE_GROUP} --location ${AZURE_LOCATION}

export IDENTITY_ID=$(az identity show --name ${IDENTITY_NAME} --resource-group ${AZURE_RESOURCE_GROUP} --query id -o tsv)
export IDENTITY_CLIENT_ID=$(az identity show --name ${IDENTITY_NAME} --resource-group ${AZURE_RESOURCE_GROUP} --query clientId -o tsv)
export ACR_ID=$(az acr show --name ${ACR_NAME} --query id -o tsv)

az role assignment create --assignee ${IDENTITY_CLIENT_ID} --role 'AcrPull' --scope ${ACR_ID}
```

Create an `AzureIdentity` and an `AzureIdentityBinding` resource inside your Kubernetes cluster. Note that `clientID` takes the client ID and `resourceID` takes the full Azure resource ID of the identity:

```yaml
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: trivy-identity
  namespace: trivy-operator
spec:
  clientID: ${IDENTITY_CLIENT_ID}
  resourceID: ${IDENTITY_ID}
  type: 0
```

```yaml
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: trivy-id-binding
  namespace: trivy-operator
spec:
  azureIdentity: trivy-identity
  selector: trivy-operator-label
```

Add `scanJob.podTemplateLabels` to the Trivy Operator ConfigMap; the value must match the `AzureIdentityBinding` selector:

```sh
kubectl -n trivy-operator edit cm trivy-operator
# Insert scanJob.podTemplateLabels: aadpodidbinding=trivy-operator-label in the data block

# validate
trivy-operator config --get scanJob.podTemplateLabels
```
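Instead of editing the ConfigMap interactively, the same key can be set non-interactively with a merge patch, mirroring the patch pattern used elsewhere for the scanner settings (ConfigMap name and namespace assumed as above):

```sh
kubectl patch cm trivy-operator -n trivy-operator \
  --type merge \
  -p '{"data": {"scanJob.podTemplateLabels": "aadpodidbinding=trivy-operator-label"}}'
```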
mkdocs.yml
@@ -66,21 +66,6 @@ nav:
      - Scanning: docs/kubernetes/cli/scanning.md
      - Operator:
          - Overview: docs/kubernetes/operator/index.md
          - Installation:
              - kubectl: docs/kubernetes/operator/installation/kubectl.md
              - Helm: docs/kubernetes/operator/installation/helm.md
              - Upgrade: docs/kubernetes/operator/installation/upgrade.md
          - Getting Started: docs/kubernetes/operator/getting-started.md
          - Configuration: docs/kubernetes/operator/configuration.md
          - Vulnerability Scanning:
              - Overview: docs/kubernetes/operator/vulnerability-scanning/index.md
              - Scan Configuration: docs/kubernetes/operator/vulnerability-scanning/configuration.md
              - Managed registries: docs/kubernetes/operator/vulnerability-scanning/managed-registries.md
              - FAQ: docs/kubernetes/operator/vulnerability-scanning/faq.md
          - Configuration Auditing:
              - Overview: docs/kubernetes/operator/configuration-auditing/index.md
              - Built-in Configuration Audit Policies: docs/kubernetes/operator/configuration-auditing/built-in-policies.md
          - Troubleshooting: docs/kubernetes/operator/troubleshooting.md
  - SBOM:
      - Overview: docs/sbom/index.md
      - CycloneDX: docs/sbom/cyclonedx.md
@@ -165,10 +150,8 @@ extra:
  version:
    method: mike
    provider: mike
  var:
    prev_git_tag: "v0.0.0"
    operator_version: "v0.0.7"

plugins:
  - search
  - macros