The BSR is designed to run on Kubernetes, and is distributed as a Helm Chart and accompanying Docker images through an OCI registry. The Helm Chart and Docker images are versioned, and are expected to be used together. The default values in the Chart use Docker images with the same version as the Chart itself.

Please review the list of BSR Dependencies before getting started.

1. Authenticate helm

To get started, authenticate helm with the Buf OCI registry using the keyfile that was sent alongside this documentation.

cat keyfile | base64 | helm registry login -u _json_key_base64 --password-stdin \
  https://us-docker.pkg.dev/buf-images-1/bsr
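To confirm the login succeeded, you can fetch the chart's metadata from the registry. This is an optional sanity check, and assumes $BSR_VERSION is set to the chart version you were given:

```shell
# Fetch chart metadata from the Buf OCI registry; this only works if login succeeded
helm show chart oci://us-docker.pkg.dev/buf-images-1/bsr/charts/bsr \
  --version $BSR_VERSION
```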

2. Create a namespace

Create a Kubernetes namespace in the k8s cluster for the bsr Helm Chart to use:

kubectl create namespace bsr

3. Create a pull secret

Create a pull secret using the provided keyfile. The cluster uses this secret to pull images from the Buf OCI registry:

kubectl create secret --namespace bsr docker-registry bufpullsecret \
  --docker-server=us-docker.pkg.dev/buf-images-1/bsr \
  --docker-username=_json_key_base64 \
  --docker-password="$(cat keyfile | base64)"
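To double-check the secret, you can decode its Docker config and confirm that the registry and username match what you expect (an optional sanity check):

```shell
# Print the decoded .dockerconfigjson payload of the pull secret
kubectl get secret bufpullsecret --namespace bsr \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
```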

4. Configure the BSR’s Helm values

The BSR is configured using Helm values through the bsr Helm Chart.

Create a file named bsr.yaml to store the Helm values; it's required by the helm install step below. The file can be in any location, but we recommend creating it in the same directory where the helm commands are run.

Set the desired host and configure the chart to use the image pull secret (created above):

host: example.com # Hostname that the BSR will be served from
imagePullSecrets:
  - name: bufpullsecret # The image pull secret that was created above

Put the values from the steps below in the bsr.yaml file. You can skip ahead to Install the Helm Chart to see a complete example values file.

Configure object storage

The BSR requires S3-compatible object storage.

Instance profile (recommended)

The bufd client will attempt to acquire credentials from the environment. To configure storage, set the following Helm values, filling in your S3 details:

storage:
  use: s3
  s3:
    bucketName: "my-bucket-name"
    endpoint: "s3.us-east-1.amazonaws.com"
    region: "us-east-1"
    # forcePathStyle: false # Optional, use path-style bucket URLs (http://s3.amazonaws.com/BUCKET/KEY)
    # insecure: false # Optional, disable TLS

Access key pair

Alternatively, you can use an access key pair.

Add the accessKeyId to the configuration:

storage:
  use: s3
  s3:
    accessKeyId: "AKIAIOSFODNN7EXAMPLE"
    bucketName: "my-bucket-name"
    endpoint: "s3.us-east-1.amazonaws.com"
    region: "us-east-1"
    # forcePathStyle: false # Optional, use path-style bucket URLs (http://s3.amazonaws.com/BUCKET/KEY)
    # insecure: false # Optional, disable TLS

Then create a k8s secret containing the S3 secret access key:

kubectl create secret --namespace bsr generic bufd-storage \
  --from-literal=secret_access_key=<s3 secret access key>

Create a Postgres database

The BSR requires a PostgreSQL database. The BSR postgres user requires full access to the database, and additionally must be able to create the pgcrypto and pg_trgm extensions.

To configure Postgres, set the following helm values:

postgres:
  host: "postgres.example.com"
  port: 5432
  database: postgres
  user: postgres

Then create a k8s secret containing the postgres user password:

kubectl create secret --namespace bsr generic bufd-postgres \
  --from-literal=password=<postgres password>
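If the BSR's postgres user isn't permitted to create extensions itself, a superuser can create them ahead of time instead. A sketch with psql, using the connection details from the example values above:

```shell
# Create the extensions the BSR needs; harmless if they already exist
psql "host=postgres.example.com port=5432 dbname=postgres user=postgres" \
  -c "CREATE EXTENSION IF NOT EXISTS pgcrypto;" \
  -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"
```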

Configure Redis

The BSR requires a Redis instance.

To configure Redis, create a k8s secret containing the address:

kubectl create secret --namespace bsr generic bufd-redis \
  --from-literal=address=redis.example.com:6379 # Host

Redis authentication and TLS are also supported, and can be enabled with the following Helm values:

redis:
  # Set to true to enable auth for redis.
  # The auth token will be read from the "auth" field in the "bufd-redis" secret
  auth: true
  tls:
    # Whether to use TLS for connecting to Redis
    # Set to "false" to disable TLS
    # Set to "local" to use certs from the "ca" field in the "bufd-redis" secret
    # Set to "system" to use the system trust store
    use: "false"

- If authentication is enabled, the redis auth string should be added to the bufd-redis secret in the auth field.
- If TLS is enabled and use is set to local, the CA certificate(s) to trust should be added to the bufd-redis secret in the ca field.

Example of a secret containing both an authentication token and a CA certificate:

kubectl create secret --namespace bsr generic bufd-redis \
  --from-literal=address=redis.example.com:6379 \
  --from-literal=auth=<redis auth string> \
  --from-file=ca=<redis ca.crt>

Configure SAML authentication

The BSR supports authentication using an external identity provider (IdP), through Security Assertion Markup Language (SAML).

In the SAML IdP, create a new application to represent the BSR. It should return a single sign-on URL and IdP metadata. Either a public URL or raw XML can be specified for the SAML config. If SAML is being configured in Okta, please follow our guide.

To configure SAML authentication in the BSR, set the following Helm values:

auth:
  method: saml
  saml:
    # Endpoint where the XML metadata is available
    idpMetadataURL: "https://example-provider.com/app/12345/sso/saml/metadata"
    # If the authentication provider does not have a metadata URL,
    # the raw XML metadata can be configured using the idpRawMetadata
    # value instead.
    idpRawMetadata: ""
  # Optional
  # A list of emails which will be granted server admin permissions on login
  # Note that this list is case-sensitive
  autoProvisionedAdminEmails:
    - "user@example.com"
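Before pointing the BSR at the metadata URL, it can help to confirm it's reachable and actually serves SAML metadata. A quick check, using the example URL from the config above:

```shell
# The response should be XML containing an EntityDescriptor element
curl -fsS "https://example-provider.com/app/12345/sso/saml/metadata" \
  | grep -o "EntityDescriptor" | head -n 1
```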

Additionally, a Kubernetes TLS secret named bsr-saml-cert containing a certificate pair is required in order for SAML to function. The certificate pair may be self-signed. Given the certificate pair, create the Kubernetes secret:

kubectl create secret --namespace bsr tls bsr-saml-cert \
  --cert=path/to/cert/file \
  --key=path/to/key/file
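If you don't already have a certificate pair, a self-signed one is sufficient here. For example, with openssl (the subject CN, file paths, and validity period below are arbitrary; adjust to taste):

```shell
# Generate a self-signed certificate pair for the bsr-saml-cert secret
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout saml-key.pem -out saml-cert.pem \
  -days 365 -subj "/CN=example.com"
```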

Configure Ingress

The BSR uses a Kubernetes Ingress resource to handle incoming traffic and for terminating TLS. The domain used here must match the host set in the Helm values above.

TLS is required for the BSR to function properly. HTTP/2 is preferred because it enables gRPC support.

bufd:
  ingress:
    enabled: true
    className: "" # Optional ingress class to use
    annotations: {} # Optional ingress annotations
    hosts:
      - host: example.com
        paths:
          - path: /
            portName: http
    # Optional TLS configuration for the ingress.
    # May be omitted if TLS termination is configured elsewhere, depending on the ingress.
    # Requires a kubernetes TLS secret.
    tls:
      - secretName: bsr-tls-cert
        hosts:
          - example.com

If the load balancer does not support H2C, TLS can optionally be used for communication between the load balancer and the BSR by enabling TLS on the listening ports of the bufd application. This requires a Kubernetes TLS secret named bsr-tls-cert.

bufd:
  tls:
    enabled: true
    # Optional. Secret name for the TLS cert
    # secretName: bsr-tls-cert
  # Optional. Used to add annotations to the ingress service.
  # May be needed for some ingress controllers to function correctly.
  service:
    annotations: {}

Configure Observability

The metrics block configures how metrics are collected and exported from your application, in this case using Prometheus:

observability:
  metrics:
    use: prometheus
    runtime: true
    prometheus:
      podLabels: # This is required if enabling network policies.
        app: prometheus
      port: 9090
      path: /metrics
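One way to spot-check the metrics endpoint, assuming the pods expose it on the port and path configured above, is to port-forward a bufd pod and scrape it by hand (the pod name below is a placeholder; list pods in the bsr namespace to find it):

```shell
# Forward the metrics port from a bufd pod, then scrape it locally
kubectl port-forward --namespace bsr pod/<bufd pod name> 9090:9090 &
curl -s http://localhost:9090/metrics | head
```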

Trusting Additional Certificates

If you run into issues with self-signed certificates, such as the error tls: failed to verify certificate: x509: certificate signed by unknown authority, you can add your root certificates to the BSR. To trust additional certificates, mount the files on the bufd pod and include them in the client TLS configuration:

bufd:
  deployment:
    extraVolumeMounts:
      - mountPath: /config/secrets/certificates/cert.pem
        name: certificate
        readOnly: true
        subPath: cert.pem
    extraVolumes:
      - name: certificate
        secret:
          secretName: tls-cert
          items:
            - key: cert.pem
              path: cert.pem
  clientTLS:
    extraCerts:
      - /config/secrets/certificates/cert.pem
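The snippet above references a secret named tls-cert with a cert.pem key; if it doesn't exist yet, you can create it from your CA bundle (the file path below is a placeholder):

```shell
# Create the secret referenced by extraVolumes above
kubectl create secret --namespace bsr generic tls-cert \
  --from-file=cert.pem=path/to/ca-bundle.pem
```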

5. Install the Helm Chart

After following the steps above, the set of Helm values should be similar to the example below:

host: example.com
imagePullSecrets:
  - name: bufpullsecret
storage:
  use: s3
  s3:
    bucketName: "my-bucket-name"
    endpoint: "s3.us-east-1.amazonaws.com"
    region: "us-east-1"
postgres:
  host: "postgres.example.com"
  port: 5432
  database: postgres
  user: postgres
auth:
  method: saml
  saml:
    idpMetadataURL: "https://example-provider.com/app/12345/sso/saml/metadata"
  autoProvisionedAdminEmails:
    - "user@example.com"
bufd:
  ingress:
    enabled: true
    hosts:
      - host: example.com
        paths:
          - path: /
            portName: http
    tls:
      - secretName: bsr-tls-cert
        hosts:
          - example.com
  deployment:
    extraVolumeMounts:
      - mountPath: /config/secrets/certificates/cert.pem
        name: certificate
        readOnly: true
        subPath: cert.pem
    extraVolumes:
      - name: certificate
        secret:
          secretName: tls-cert
          items:
            - key: cert.pem
              path: cert.pem
  clientTLS:
    extraCerts:
      - /config/secrets/certificates/cert.pem
observability:
  metrics:
    use: prometheus
    runtime: true
    prometheus:
      podLabels: # This is required if enabling network policies.
        app: prometheus
      port: 9090
      path: /metrics

Using the bsr.yaml Helm values file, install the Helm Chart for the cluster:

helm install bsr oci://us-docker.pkg.dev/buf-images-1/bsr/charts/bsr \
  --version $BSR_VERSION \
  --namespace=bsr \
  --values bsr.yaml
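Once the install completes, a couple of quick checks can confirm the release is healthy before visiting the host:

```shell
# The release should be listed as deployed
helm status bsr --namespace bsr

# All pods should eventually report Running and Ready
kubectl get pods --namespace bsr
```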

The BSR instance should now be up and running at https://<host>.