Commit c96c95b

zhill authored and k8s-ci-robot committed
[incubator/anchore-engine] Adds incubator chart for Anchore Engine (helm#3293)
* Adds incubator chart for Anchore Engine
* Fixes linter issues on anchore-engine values.yaml
* Fixes truth values for values.yaml
* Fixes for truthy values in other yaml configs for anchore-engine chart
* Fix ingress spec for anchore-engine chart for easier config with helm cli
* Addresses typos and some cleanup as requested in PR review
* Adds more startup and config info in NOTES.txt for anchore-engine
* Cleanup and make labels consistent in anchore-engine deployments
* Move anchore-engine chart from incubator/ to stable/
1 parent c2f9a3a commit c96c95b

File tree

13 files changed: +844 additions, 0 deletions


stable/anchore-engine/Chart.yaml

Lines changed: 19 additions & 0 deletions
```yaml
name: anchore-engine
version: 0.1.0
appVersion: 0.1.6
description: Anchore container analysis and policy evaluation engine service
keywords:
- analysis
- docker
- anchore
- "anchore-engine"
- image
- security
home: https://anchore.io
sources:
- https://github.com/anchore/anchore-engine
maintainers:
- name: zhill
  email: zach@anchore.com
engine: gotpl
icon: https://anchore.com/wp-content/uploads/2016/08/anchore.png
```

stable/anchore-engine/README.md

Lines changed: 83 additions & 0 deletions
Anchore Engine Helm Chart
=========================

This chart deploys the Anchore Engine docker container image analysis system. Anchore Engine
requires a PostgreSQL database (>= 9.6), which may be managed by the chart or supplied externally,
and runs in a two-tier architecture with an api/control layer and a batch execution worker pool layer.

See [Anchore Engine](https://github.com/anchore/anchore-engine) for more project details.

Chart Details
-------------

The chart is split into three primary sections: GlobalConfig, CoreConfig, and WorkerConfig. As the names imply,
GlobalConfig holds configuration values that all components require, while the Core and Worker sections are
tier-specific and allow customization for each role.
### Core Role

The core services provide the APIs and state management for the system. Core services must be available within the cluster
for use by the workers. The core component also provides webhook calls to external services for notification of events:

* New images added
* CVE changes in images
* Policy evaluation state change for an image
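Webhook delivery is driven by `coreConfig.webhooks.enabled` and the free-form `coreConfig.webhooks.config` block in values.yaml, which the chart injects verbatim into the engine configuration. A minimal sketch, assuming the upstream anchore-engine webhook config shape; the receiver URL and the inner keys are illustrative, not chart defaults:

```yaml
coreConfig:
  webhooks:
    enabled: True
    config:
      webhook_user: null      # optional basic-auth credentials for the receiver
      webhook_pass: null
      ssl_verify: False
      general:
        url: "http://my-receiver.example.com/<notification_type>/<userId>"
```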
### Worker Role

The workers download and analyze images and upload results to the core services. The workers poll the queue service and
do not have their own external API.
Installing the Chart
--------------------

Deploying PostgreSQL as a dependency managed by the chart:

`helm install .`

Using an existing/external PostgreSQL service:

`helm install --name <name> --set postgresql.enabled=False .`
Configuration
-------------

While the configuration options of Anchore Engine are extensive, the options provided by the chart are:

### Database

* External PostgreSQL (not managed by helm)
  * postgresql.enabled=False
  * postgresql.externalEndpoint=myserver.mypostgres.com:5432
  * postgresql.postgresUser=username
  * postgresql.postgresPassword=password
  * postgresql.postgresDatabase=db name
  * globalConfig.dbConfig.ssl=True
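Rather than repeating `--set` flags, the same keys can be grouped in a values file; a minimal sketch (the filename, endpoint, credentials, and database name are placeholders, not chart defaults):

```yaml
# values-external-db.yaml -- hypothetical filename
postgresql:
  enabled: False
  externalEndpoint: myserver.mypostgres.com:5432   # host:port of your server
  postgresUser: username
  postgresPassword: password
  postgresDatabase: anchore        # placeholder database name
globalConfig:
  dbConfig:
    ssl: True
```

Install with `helm install --name <name> -f values-external-db.yaml .`.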
### Policy Sync from anchore.io

anchore.io is a hosted version of Anchore Engine that includes a UI and policy editor. You can configure a local anchore-engine
to download and keep its policy bundles (the policies defining how to evaluate images) in sync.
Simply provide the credentials for your anchore.io account in values.yaml, or with `--set` on the CLI, to enable:

* coreConfig.policyBundleSyncEnabled=True
* globalConfig.users.admin.anchoreIOCredentials.useAnonymous=False
* globalConfig.users.admin.anchoreIOCredentials.user=username
* globalConfig.users.admin.anchoreIOCredentials.password=password
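Expressed as a values.yaml fragment, the same dotted keys nest as follows (the credentials are placeholders):

```yaml
coreConfig:
  policyBundleSyncEnabled: True
globalConfig:
  users:
    admin:
      anchoreIOCredentials:
        useAnonymous: False
        user: username
        password: password
```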
Adding Workers
--------------

To set a specific number of workers once the service is running:

`helm upgrade <release name> <chart location> --set workerConfig.replicaCount=2`

To launch with more than one worker, you can either modify values.yaml or run with:

`helm install --set workerConfig.replicaCount=2 <chart location>`
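For a setting that persists across upgrades, the same value can live in values.yaml; a minimal sketch:

```yaml
workerConfig:
  replicaCount: 2
```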
Lines changed: 5 additions & 0 deletions
```yaml
dependencies:
  - name: postgresql
    version: "*"
    repository: "alias:stable"
    condition: postgresql.enabled
```
Lines changed: 63 additions & 0 deletions
```
To use Anchore Engine you need the URL, username, and password to access the API.

Anchore Engine can be accessed via port {{ .Values.service.ports.api }} on the following DNS name from within the cluster:
{{ template "fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local

Here are the steps to configure the anchore-cli (`pip install anchorecli`). Use these same values for direct API access as well.

To configure your anchore-cli, run:

    ANCHORE_CLI_USER=admin
    ANCHORE_CLI_PASS=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "fullname" . }} -o jsonpath="{.data.adminPassword}" | base64 --decode; echo)
{{ if .Values.ingress.enabled }}
    ANCHORE_CLI_URL=http://$(kubectl get ingress --namespace {{ .Release.Namespace }} {{ template "fullname" . }} -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
{{ else }}
Using the service endpoint from within the cluster:

    ANCHORE_CLI_URL=http://{{ template "fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.ports.api }}/v1/
{{ end }}

To verify the service is up and running, you can run a container for the Anchore Engine CLI:

    kubectl run -i --tty anchore-cli --restart=Never --image anchore/engine-cli --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=${ANCHORE_CLI_PASS} --env ANCHORE_CLI_URL=http://{{ template "fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.ports.api }}/v1/

From within the container you can use 'anchore-cli' commands.

* NOTE: On first startup of anchore-engine, it performs a CVE data sync which may take several minutes to complete. During this time the system status will report 'partially_down' and any images added for analysis will stay in the 'not_analyzed' state.
Once the sync is complete, any queued images will be analyzed and the system status will change to 'all_up'.

Initial setup time can be >60sec for postgresql setup and readiness checks to pass for the services, as indicated by pod state. You can check with:

    kubectl get pods -l app={{ template "fullname" . }},component=core

A quick primer on using the Anchore Engine CLI follows. For more info see: https://github.com/anchore/anchore-engine/wiki/Getting-Started

View system status:

    anchore-cli system status

Add an image to be analyzed:

    anchore-cli image add <imageref>

List images and see the analysis status (not_analyzed initially):

    anchore-cli image list

The image first transitions to the 'analyzing' state, then to 'analyzed' once analysis completes. This may take some time on first execution with a new database, because the system must first complete a CVE data sync, which can take several minutes.

When the image reaches the 'analyzed' state, you can view policy evaluation output with:

    anchore-cli evaluate check <imageref>

List CVEs found in the image with:

    anchore-cli image vuln <imageref> os

List OS packages found in the image with:

    anchore-cli image content <imageref> os

List files found in the image with:

    anchore-cli image content <imageref> files
```
Lines changed: 43 additions & 0 deletions
```
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "worker.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s-%s" .Release.Name $name "worker" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "core.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s-%s" .Release.Name $name "core" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified dependency name for the db.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "postgres.fullname" -}}
{{- printf "%s-%s" .Release.Name "postgresql" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```
Lines changed: 131 additions & 0 deletions
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: "{{ template "core.fullname" . }}"
  labels:
    app: "{{ template "fullname" . }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
    component: core
data:
  config.yaml: |
    # Anchore Service Configuration File from ConfigMap
    service_dir: {{ default "/config" .Values.globalConfig.configDir }}
    tmp_dir: "/tmp"

    allow_awsecr_iam_auto: {{ .Values.globalConfig.allowECRUseIAMRole }}
    cleanup_images: {{ .Values.globalConfig.cleanupImages }}

    # docker_conn: 'unix://var/run/docker.sock'
    # docker_conn_timeout: 600

    log_level: {{ .Values.coreConfig.logLevel }}
    host_id: ${ANCHORE_HOST_ID}
    internal_ssl_verify: {{ .Values.globalConfig.internalServicesSslVerifyCerts }}

    # Uncomment if you have a local endpoint that can accept
    # notifications from the anchore-engine, as configured below
    #
    {{ if .Values.coreConfig.webhooks.enabled }}
    webhooks:
{{ toYaml .Values.coreConfig.webhooks.config | indent 6 }}
    {{ end }}

    # A feeds section is available for override, but shouldn't be
    # needed. By default, the 'admin' credentials are used if present,
    # otherwise anonymous access for feed sync is used

    #feeds:
    #  selective_sync:
    #    # If enabled only sync specific feeds instead of all.
    #    enabled: True
    #    feeds:
    #      vulnerabilities: True
    #      # Warning: enabling the package sync causes the service to require much
    #      # more memory to process the significant data volume. We recommend at least 4GB available for the container
    #      packages: False
    #  anonymous_user_username: anon@ancho.re
    #  anonymous_user_password: pbiU2RYZ2XrmYQ
    #  url: 'https://ancho.re/v1/service/feeds'
    #  client_url: 'https://ancho.re/v1/account/users'
    #  token_url: 'https://ancho.re/oauth/token'
    #  connection_timeout_seconds: 3
    #  read_timeout_seconds: 60

    credentials:
      users:
        admin:
          password: ${ANCHORE_ADMIN_PASSWORD}
          email: {{ .Values.globalConfig.users.admin.email }}
          external_service_auths:
    {{ if not .Values.globalConfig.users.admin.anchoreIOCredentials.useAnonymous }}
            anchoreio:
              anchorecli:
                auth: "${ANCHORE_IO_USER}:${ANCHORE_IO_PASSWORD}"
    {{ end }}
          auto_policy_sync: {{ .Values.coreConfig.policyBundleSyncEnabled }}

    database:
    {{ if .Values.postgresql.enabled }}
      db_connect: 'postgresql+pg8000://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@{{ template "postgres.fullname" . }}:5432/{{ .Values.postgresql.postgresDatabase }}'
    {{ else }}
      db_connect: 'postgresql+pg8000://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@{{ .Values.postgresql.externalEndpoint }}/{{ .Values.postgresql.postgresDatabase }}'
    {{ end }}
      db_connect_args:
        timeout: 120
        ssl: {{ .Values.postgresql.sslEnabled }}
      db_pool_size: {{ .Values.globalConfig.dbConfig.connectionPoolSize }}
      db_pool_max_overflow: {{ .Values.globalConfig.dbConfig.connectionPoolSize }}
    services:
      apiext:
        enabled: True
        require_auth: True
        endpoint_hostname: {{ template "fullname" . }}
        listen: '0.0.0.0'
        port: {{ .Values.service.ports.api }}
        ssl_enable: {{ .Values.globalConfig.internalServicesSslEnabled }}
        ssl_cert: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretCertName }}
        ssl_key: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretKeyName }}
      kubernetes_webhook:
        enabled: True
        require_auth: False
        endpoint_hostname: {{ template "fullname" . }}
        listen: '0.0.0.0'
        port: {{ .Values.service.ports.k8sImagePolicyWebhook }}
        ssl_enable: {{ .Values.globalConfig.internalServicesSslEnabled }}
        ssl_cert: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretCertName }}
        ssl_key: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretKeyName }}
      catalog:
        enabled: True
        require_auth: True
        endpoint_hostname: {{ template "fullname" . }}
        listen: '0.0.0.0'
        port: {{ .Values.service.ports.catalog }}
        use_db: True
        cycle_timer_seconds: '1'
        cycle_timers:
{{ toYaml .Values.globalConfig.cycleTimers | indent 10 }}
        ssl_enable: {{ .Values.globalConfig.internalServicesSslEnabled }}
        ssl_cert: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretCertName }}
        ssl_key: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretKeyName }}
      simplequeue:
        enabled: True
        require_auth: True
        endpoint_hostname: {{ template "fullname" . }}
        listen: '0.0.0.0'
        port: {{ .Values.service.ports.queue }}
        ssl_enable: {{ .Values.globalConfig.internalServicesSslEnabled }}
        ssl_cert: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretCertName }}
        ssl_key: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretKeyName }}
      analyzer:
        enabled: False
      policy_engine:
        enabled: True
        require_auth: True
        endpoint_hostname: {{ template "fullname" . }}
        listen: '0.0.0.0'
        port: {{ .Values.service.ports.policy }}
        ssl_cert: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretCertName }}
        ssl_key: {{ .Values.coreConfig.ssl.certDir -}}/{{- .Values.coreConfig.ssl.certSecretKeyName }}
        ssl_enable: {{ .Values.globalConfig.internalServicesSslEnabled }}
```
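As a sketch of what the `database` block renders to with the chart-managed PostgreSQL enabled: the release name `myrelease`, database name, and pool size below are hypothetical values, and the `${...}` variables are substituted at container startup:

```yaml
# Hypothetical rendered output for a release named "myrelease"
database:
  db_connect: 'postgresql+pg8000://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@myrelease-postgresql:5432/anchore'
  db_connect_args:
    timeout: 120
    ssl: False
  db_pool_size: 30
  db_pool_max_overflow: 30
```

The hostname comes from the `postgres.fullname` helper, which joins the release name with `postgresql`.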
