Deploy Bufstream to AWS
This page walks you through installing Bufstream into your AWS deployment by setting your Helm values and installing the provided Helm chart. See the AWS configuration page for defaults and recommendations about resources, replicas, storage, and scaling.
Data from your Bufstream cluster will never leave your network or report back to Buf.
Prerequisites
To deploy Bufstream on AWS, you need the following before you start:
- A Kubernetes cluster (v1.27 or newer)
- An S3 bucket
- Helm (v3.12.0 or newer)
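To verify the prerequisites before you start, a quick check might look like the following; the bucket name is a placeholder:
$ kubectl version                  # expect a v1.27+ server version
$ helm version                     # expect v3.12.0+
$ aws s3 ls s3://my-bucket-name    # confirms the bucket exists and is reachable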
Deploy Bufstream
1. Authenticate Helm
To get started, authenticate Helm with the Bufstream OCI registry using the keyfile that was sent alongside this documentation. The keyfile should contain a base64-encoded string.
$ cat keyfile | helm registry login -u _json_key_base64 --password-stdin \
https://us-docker.pkg.dev/buf-images-1/bufstream
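To confirm the login succeeded, you can read the chart metadata straight from the registry; helm show chart only fetches metadata and doesn't touch your cluster:
$ helm show chart oci://us-docker.pkg.dev/buf-images-1/bufstream/charts/bufstream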
2. Create a namespace
Create a Kubernetes namespace in the cluster for the bufstream Helm chart to use:
$ kubectl create namespace bufstream
3. Configure Bufstream's Helm values
Bufstream is configured using Helm values that are passed to the bufstream Helm chart. To configure the values:
- Create a Helm values file named bufstream-values.yaml, which is required by the helm install command in step 4. This file can be in any location, but we recommend creating it in the same directory where you run the helm commands.
- Add the values from the steps below to the bufstream-values.yaml file. Skip to Install the Helm chart for a full example.
Configure object storage
Bufstream requires S3-compatible object storage. It attempts to acquire credentials from the environment using EKS Pod Identity. To configure storage, set the following Helm values, filling in your bucket name and region:
storage:
use: s3
s3:
bucket: "my-bucket-name"
region: "us-east-1"
# forcePathStyle: false # Optional, use path-style bucket URLs (http://s3.amazonaws.com/BUCKET/KEY)
# endpoint: "https://s3.us-east-1.amazonaws.com" # Optional
The k8s service account to create the Pod Identity association for is named bufstream-service-account.
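If the Pod Identity association doesn't exist yet, you can create one with the AWS CLI. This is a sketch: the cluster name and IAM role ARN are placeholders, and the role is assumed to already grant read/write access to the bucket.
$ aws eks create-pod-identity-association \
    --cluster-name my-eks-cluster \
    --namespace bufstream \
    --service-account bufstream-service-account \
    --role-arn arn:aws:iam::123456789012:role/bufstream-s3-access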
Alternatively, you can use an access key pair.
- Add the accessKeyId to the configuration:
storage:
use: s3
s3:
accessKeyId: "AKIAIOSFODNN7EXAMPLE"
secretName: bufstream-storage
bucket: "my-bucket-name"
region: "us-east-1"
# forcePathStyle: false # Optional, use path-style bucket URLs (http://s3.amazonaws.com/BUCKET/KEY)
# endpoint: "https://s3.us-east-1.amazonaws.com" # Optional
- Create a k8s secret containing the S3 secret access key:
$ kubectl create secret --namespace bufstream generic bufstream-storage \
--from-literal=secret_access_key=<s3 secret access key>
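To double-check the secret, describe it; kubectl shows only key names and sizes, not the value:
$ kubectl describe secret --namespace bufstream bufstream-storage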
Configure etcd
Bufstream requires an etcd cluster. To set up an example deployment of etcd on Kubernetes, use the Bitnami etcd Helm chart with the following values:
$ helm install \
--namespace bufstream \
bufstream-etcd \
oci://registry-1.docker.io/bitnamicharts/etcd \
--version 10.2.4 \
-f - <<EOF
replicaCount: 3
persistence:
enabled: true
size: 10Gi
storageClass: ""
autoCompactionMode: periodic
autoCompactionRetention: 30s
removeMemberOnContainerTermination: false
resourcesPreset: none
# By default, no resource requests/limits are present on the etcd pods.
# Optionally, configure resources/limits by setting the values below:
# resources:
# requests:
# cpu: 1
# memory: 1024Mi
# limits:
# memory: 1024Mi
auth:
rbac:
create: false
enabled: false
token:
enabled: false
metrics:
useSeparateEndpoint: true
customLivenessProbe:
httpGet:
port: 9090
path: /livez
scheme: "HTTP"
initialDelaySeconds: 10
periodSeconds: 30
timeoutSeconds: 15
failureThreshold: 10
customReadinessProbe:
httpGet:
port: 9090
path: /readyz
scheme: "HTTP"
initialDelaySeconds: 20
timeoutSeconds: 10
extraEnvVars:
- name: ETCD_LISTEN_CLIENT_HTTP_URLS
value: "http://0.0.0.0:8080"
EOF
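Once the etcd pods are running, you can spot-check cluster health with etcdctl. This is a sketch assuming the release name bufstream-etcd above, which produces pods named bufstream-etcd-0 through bufstream-etcd-2:
$ kubectl exec --namespace bufstream bufstream-etcd-0 -- etcdctl endpoint health --cluster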
etcd is sensitive to disk performance, so we recommend using the AWS EBS CSI Driver with gp3 or io1/io2 disks instead of the default gp2 disks EKS uses. The storage class in the example above can be changed by setting the persistence.storageClass value to a custom storage class that uses those disks.
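For reference, a minimal gp3 StorageClass backed by the EBS CSI driver might look like the sketch below; the name etcd-gp3 is an example. Set persistence.storageClass: etcd-gp3 in the etcd values above to use it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcd-gp3 # example name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true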
Then, configure Bufstream to connect to the etcd cluster:
metadata:
use: etcd
etcd:
# etcd addresses to connect to
addresses:
- host: "bufstream-etcd.bufstream.svc.cluster.local"
port: 2379
Configure observability
The observability block is used to configure the collection and exporting of metrics and traces from your application, using Prometheus or OTLP:
observability:
# Optional, set the log level
# logLevel: INFO
# otlpEndpoint: "" # Optional, OTLP endpoint to send traces and metrics to
metrics:
# Optional, can be either "NONE", "STDOUT", "HTTP", "HTTPS" or "PROMETHEUS"
# When set to HTTP or HTTPS, will send OTLP metrics
# When set to PROMETHEUS, will expose prometheus metrics for scraping on port 9090 under /metrics
exporter: "NONE"
tracing:
# Optional, can be either "NONE", STDOUT", "HTTP", or "HTTPS"
# When set to HTTP or HTTPS, will send OTLP metrics
exporter: "NONE"
# Optional, trace sampling ratio, defaults to 0.1
# traceRatio: 0.1
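For example, to send both metrics and traces over OTLP/HTTP to an in-cluster OpenTelemetry Collector, the same block might look like the sketch below; the collector address is an assumption about your environment:
observability:
  # Example collector address; replace with your own (assumption, not a default)
  otlpEndpoint: "http://otel-collector.monitoring.svc.cluster.local:4318"
  metrics:
    exporter: "HTTP"
  tracing:
    exporter: "HTTP"
    traceRatio: 0.1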
4. Install the Helm chart
After following the steps above, the set of Helm values should be similar to the example below:
storage:
use: s3
s3:
bucket: "my-bucket-name"
region: "us-east-1"
metadata:
use: etcd
etcd:
# etcd addresses to connect to
addresses:
- host: "bufstream-etcd.bufstream.svc.cluster.local"
port: 2379
observability:
metrics:
exporter: "PROMETHEUS"
Using the bufstream-values.yaml Helm values file, install the Helm chart for the cluster and set the correct Bufstream version:
$ helm install bufstream oci://us-docker.pkg.dev/buf-images-1/bufstream/charts/bufstream \
--version "0.x.x" \
--namespace=bufstream \
--values bufstream-values.yaml
If you change any configuration in the bufstream-values.yaml file, re-run the command to apply the changes (use helm upgrade instead of helm install if the release is already installed).
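To confirm the deployment is healthy, check that the release installed and the Bufstream pods reach the Ready state:
$ helm status bufstream --namespace bufstream
$ kubectl get pods --namespace bufstream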