Installation
The BSR is designed to run on Kubernetes, and is distributed as a Helm Chart and accompanying Docker images through an OCI registry. The Helm Chart and Docker images are versioned, and are expected to be used together. The default values in the Chart use Docker images with the same version as the Chart itself.
Please review the list of BSR dependencies before getting started.
1. Authenticate helm
To get started, authenticate helm with the Buf OCI registry using the keyfile that was sent alongside this documentation. The keyfile should contain a base64-encoded string.
$ cat keyfile | helm registry login -u _json_key_base64 --password-stdin \
https://us-docker.pkg.dev/buf-images-1/bsr
2. Create a namespace
Create a Kubernetes namespace in the k8s cluster for the bsr Helm Chart to use:
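For example, using a namespace named bsr, which the kubectl commands later in this guide assume:

```shell
$ kubectl create namespace bsr
```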
3. Create a pull secret
Create a pull secret using the same keyfile provided in step 1. The cluster uses it to pull images from the Buf OCI registry:
$ kubectl create secret --namespace bsr docker-registry bufpullsecret \
--docker-server=us-docker.pkg.dev/buf-images-1/bsr \
--docker-username=_json_key_base64 \
--docker-password="$(cat keyfile)"
4. Configure the BSR’s Helm values
The BSR is configured using Helm values through the bsr Helm Chart. Create a file named bsr.yaml to store the Helm values; it's required by the helm install step below.
Note
This file can be in any location, but we recommend creating it in the same directory where the helm commands are run.
Set the desired host and configure the chart to use the image pull secret (created above):
host: example.com # Hostname that the BSR will be served from
imagePullSecrets:
- name: bufpullsecret # The image pull secret that was created above
Put the values from the steps below in the bsr.yaml file. You can skip ahead to Install the Helm Chart for a complete example values file.
Configure object storage
The BSR requires either S3-compatible object storage, Azure Blob Storage, or GCS.
S3
The bufd client and oci-registry will attempt to acquire credentials from the environment. To configure the storage, set the following Helm values, filling in your S3 variables:
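Mirroring the full example later in this guide, an S3 configuration looks like this (bucket name, endpoint, and region are placeholders for your environment):

```yaml
storage:
  use: s3
  s3:
    bucketName: "my-bucket-name"
    endpoint: "s3.us-east-1.amazonaws.com"
    region: "us-east-1"
```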
Alternatively, you may instead use an access key pair:
- Add the accessKeyId to the configuration.
- Create a k8s secret containing the S3 secret access key.
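As a sketch, the access key ID sits alongside the other storage.s3 values (the exact key names should be verified against the chart's values reference):

```yaml
storage:
  use: s3
  s3:
    bucketName: "my-bucket-name"
    endpoint: "s3.us-east-1.amazonaws.com"
    region: "us-east-1"
    accessKeyId: "<s3 access key id>"
```

The secret itself can be created with kubectl create secret generic, following the same pattern as the Postgres and Redis secrets elsewhere in this guide; check the chart's values reference for the expected secret name and field.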
Azure Blob Storage
A standard general storage account type is required in order to support block blobs.
The bufd client and oci-registry will attempt to acquire credentials from the environment. To configure the storage, set the following Helm values by filling in your Azure variables and adding the required annotations for the bufd and ociregistry service accounts and deployments:
storage:
  use: azure
  azure:
    accountName: "my-storage-account-name"
    container: "my-container"
    useAccountKey: false
bufd:
  serviceAccount:
    annotations:
      azure.workload.identity/client-id: "my-client-id"
  deployment:
    podLabels:
      azure.workload.identity/use: "true"
ociregistry:
  serviceAccount:
    annotations:
      azure.workload.identity/client-id: "my-client-id"
  deployment:
    podLabels:
      azure.workload.identity/use: "true"
The service accounts to be bound to the federated identity credentials are named bufd-service-account and oci-registry-service-account.
Alternatively, you may instead use the storage account key:
- Set the required Helm values.
- Create a k8s secret containing an Azure storage account key.
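A sketch, assuming the only Helm change needed is flipping useAccountKey from the workload-identity configuration above:

```yaml
storage:
  use: azure
  azure:
    accountName: "my-storage-account-name"
    container: "my-container"
    useAccountKey: true
```

The account key secret can be created with kubectl create secret generic, following the same pattern as the other secrets in this guide; check the chart's values reference for the expected secret name and field.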
GCS
Workload Identity Federation
To allow access to the GCS bucket, the bufd and oci-registry services require Workload Identity Federation to be configured, with a GCP service account attached to the pods. To configure the storage, set the following Helm values, filling in your GCS bucket name and GCP service account:
storage:
  use: gcs
  gcs:
    bucketName: <bucket name>
bufd:
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: <gcp-service-account-name>@<gcp project>.iam.gserviceaccount.com
ociregistry:
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: <gcp-service-account-name>@<gcp project>.iam.gserviceaccount.com
With this configuration, the Helm chart creates two k8s service accounts that need to be bound to the GCP service account: bufd-service-account and oci-registry-service-account. You also need to grant roles/storage.objectAdmin permissions on the GCS bucket to the GCP service account.
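Assuming GKE Workload Identity, these bindings can be sketched with gcloud (project, bucket, and service-account names are placeholders; bsr is the namespace created in step 2):

```shell
# Allow the two k8s service accounts to impersonate the GCP service account
$ gcloud iam service-accounts add-iam-policy-binding \
    <gcp-service-account-name>@<gcp project>.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:<gcp project>.svc.id.goog[bsr/bufd-service-account]"
$ gcloud iam service-accounts add-iam-policy-binding \
    <gcp-service-account-name>@<gcp project>.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:<gcp project>.svc.id.goog[bsr/oci-registry-service-account]"

# Grant the GCP service account objectAdmin on the bucket
$ gcloud storage buckets add-iam-policy-binding gs://<bucket name> \
    --member "serviceAccount:<gcp-service-account-name>@<gcp project>.iam.gserviceaccount.com" \
    --role roles/storage.objectAdmin
```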
Create a Postgres database
The BSR requires a PostgreSQL database.
The BSR postgres user requires full access to the database, and additionally must be able to create the pgcrypto and pg_trgm extensions. To configure Postgres, set the following Helm values:
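The shape of the block matches the full example later in this guide; host, port, database, and user are placeholders for your instance:

```yaml
postgres:
  host: "postgres.example.com"
  port: 5432
  database: postgres
  user: postgres
```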
Then create a k8s secret containing the postgres user password:
$ kubectl create secret --namespace bsr generic bufd-postgres \
--from-literal=password=<postgres password>
Note that if you're using CosmosDB, it must be configured as a single-node cluster with high availability (HA) enabled.
You can configure the BSR to use GCP Cloud SQL IAM Authentication. The GCP service account needs the following permissions to connect to Cloud SQL properly:
bufd also creates extensions as part of the migrations it runs in Postgres, so you need to run the following command as an existing GCP Cloud SQL superuser in the postgres shell:
Finally, if you were already running the BSR with another existing Cloud SQL user (e.g. postgres), you need to reassign all ownerships:
GRANT "<gcp-service-account-name>@<gcp project>.iam" TO postgres;
REASSIGN OWNED by "postgres" TO "<gcp-service-account-name>@<gcp project>.iam";
REVOKE "<gcp-service-account-name>@<gcp project>.iam" FROM postgres;
Your configuration then looks like this:
postgres:
  cloudSqlInstance: <gcp project>:<gcp region>:<gcp cloud sql instance name>
  database: postgres
  user: <gcp-service-account-name>@<gcp project>.iam
  # Optional, if you need to impersonate a service account:
  # impersonateServiceAccount: <gcp-service-account-to-impersonate-name>@<gcp project>.iam.gserviceaccount.com
Please refer to the GCP docs for more details on the setup.
Configure Redis
The BSR requires a Redis instance.
- Only the Redis Standalone deployment mode is supported.
- Redis Cluster and Sentinel modes are not supported for the BSR.
To configure Redis, create a k8s secret containing the address:
$ kubectl create secret --namespace bsr generic bufd-redis \
--from-literal=address=redis.example.com:6379
Optionally, authentication and TLS for Redis are also supported. These can be set with the following Helm values:
redis:
  # Set to true to enable auth for redis.
  # The auth token will be read from the "auth" field in the "bufd-redis" secret
  auth: true
  tls:
    # Whether to use TLS for connecting to Redis
    # Set to "false" to disable TLS
    # Set to "local" to use certs from the "ca" field in the "bufd-redis" secret
    # Set to "system" to use the system trust store
    use: "false"
- If authentication is enabled, the redis auth string should be added to the bufd-redis secret in the auth field.
- If TLS is enabled and use is set to local, the CA certificate(s) to trust should be added to the bufd-redis secret in the ca field.
Example of a secret containing both an authentication token and a CA certificate:
$ kubectl create secret --namespace bsr generic bufd-redis \
--from-literal=address=redis.example.com:6379 \
--from-literal=auth=<redis auth string> \
--from-file=ca=<redis ca.crt>
Example of a secret containing an authentication token, assuming a connection string like redis.example.com:6379,password=<password>,ssl=True,abortConnect=False:
$ kubectl create secret --namespace bsr generic bufd-redis \
--from-literal=address=redis.example.com:6379 \
--from-literal=auth=<redis password>
Configure authentication
The BSR supports authentication using an external identity provider (IdP), through Security Assertion Markup Language (SAML) or OpenID Connect (OIDC).
In the SAML IdP, create a new application to represent the BSR. It should return a single sign-on URL and IdP metadata. Either a public URL or raw XML can be specified for the SAML config. If SAML is being configured in Okta, please follow our Okta - SAML guide.
To configure SAML authentication in the BSR, set the following Helm values:
auth:
  method: saml
  saml:
    # Endpoint where the XML metadata is available
    idpMetadataURL: "https://example-provider.com/app/12345/sso/saml/metadata"
    # If the authentication provider doesn't have a metadata URL,
    # the raw XML metadata can be configured using the idpRawMetadata
    # value instead.
    idpRawMetadata: |
      <?xml version="1.0" encoding="utf-8"?>
      <EntityDescriptor etc>
    # Optionally, configure the attribute containing groups membership information,
    # to enable support for automated organization membership provisioning.
    # Note that if configured, a user will not be permitted to log in to the BSR
    # if the attribute is missing from the SAML assertion.
    # https://buf.build/docs/bsr/private/user-lifecycle#autoprovisioning
    groupsAttributeName: ""
    # Optional
    # A list of emails which will be granted server admin permissions on login
    # Note that this list is case-sensitive
    autoProvisionedAdminEmails:
      - "user@example.com"
SAML requires the application to have access to a certificate used for signing/encryption as part of the authentication process. For the BSR, this is stored as a Kubernetes TLS secret named bsr-saml-cert, and may be self-signed. For example, you can generate a certificate and create the required secret using OpenSSL.
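For instance, a self-signed certificate valid for one year can be generated and stored in the bsr-saml-cert secret like this (the subject CN is a placeholder for your host):

```shell
# Generate a self-signed certificate and private key
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=example.com" \
    -keyout tls.key -out tls.crt

# Store it as the TLS secret the BSR expects
$ kubectl create secret --namespace bsr tls bsr-saml-cert \
    --cert=tls.crt --key=tls.key
```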
In the OIDC IdP, create a new application to represent the BSR and provide the callback URL. If OIDC is being configured in Okta, please follow our Okta - OIDC guide.
To configure OIDC authentication in the BSR, set the following Helm values:
auth:
  method: oidc
  oidc:
    issuerURL: "https://example.okta.com"
    clientID: "0oa2ho2ylo0HFI61d5d7"
    # Optional
    # A list of emails which will be granted server admin permissions on login
    # Note that this list is case-sensitive
    autoProvisionedAdminEmails:
      - "user@example.com"
Additionally, a Kubernetes secret must be created for OIDC to function:
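The exact secret name and field should come from the chart's values reference; as a hypothetical sketch following the naming pattern of the other bufd-* secrets in this guide:

```shell
$ kubectl create secret --namespace bsr generic bufd-oidc \
  --from-literal=clientSecret=<oidc client secret>
```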
Configure Ingress
The BSR uses a Kubernetes Ingress resource to handle incoming traffic and for terminating TLS.
The domain used here must match the host set in the Helm values above.
Warning
TLS is required for the BSR to function properly. HTTP2 is preferred to allow for gRPC support.
bufd:
  ingress:
    enabled: true
    className: "" # Optional ingress class to use
    annotations: {} # Optional ingress annotations
    hosts:
      - host: example.com
        paths:
          - path: /
            portName: http
    # Optional TLS configuration for the ingress.
    # May be omitted to configure TLS termination, depending on the ingress.
    # Requires a kubernetes TLS secret.
    tls:
      - secretName: bsr-tls-cert
        hosts:
          - example.com
If the load balancer doesn't support H2C, TLS can optionally be used for communication between the load balancer and the BSR by enabling TLS on the listening ports of the bufd application. This requires a Kubernetes TLS secret named bsr-tls-cert.
bufd:
  tls:
    enabled: true
    # Optional. Secret name for the TLS cert
    # secretName: bsr-tls-cert
  # Optional. Used to add annotations to the ingress service.
  # May be needed for some ingress controllers to function correctly.
  service:
    annotations: {}
Configure observability
The metrics block is used to configure the collection and export of metrics from your application using Prometheus:
observability:
  metrics:
    use: prometheus
    runtime: true
    prometheus:
      podLabels: # This is required if enabling network policies.
        app: prometheus
      port: 9090
      path: /metrics
Trusting additional certificates
If you bump into issues regarding self-signed certificates, such as the error tls: failed to verify certificate: x509: certificate signed by unknown authority, you can add your root certificates to the BSR. To trust additional certificates, mount the files on the bufd pod and include them in the client TLS configuration.
bufd:
  deployment:
    extraVolumeMounts:
      - mountPath: /config/secrets/certificates/cert.pem
        name: certificate
        readOnly: true
        subPath: cert.pem
    extraVolumes:
      - name: certificate
        secret:
          secretName: tls-cert
          items:
            - key: cert.pem
              path: cert.pem
  clientTLS:
    extraCerts:
      - /config/secrets/certificates/cert.pem
5. Install the Helm Chart
After following the steps above, the set of Helm values should be similar to the example below:
host: example.com
imagePullSecrets:
  - name: bufpullsecret
storage:
  use: s3
  s3:
    bucketName: "my-bucket-name"
    endpoint: "s3.us-east-1.amazonaws.com"
    region: "us-east-1"
postgres:
  host: "postgres.example.com"
  port: 5432
  database: postgres
  user: postgres
auth:
  method: saml
  saml:
    idpMetadataURL: "https://example-provider.com/app/12345/sso/saml/metadata"
    autoProvisionedAdminEmails:
      - "user@example.com"
bufd:
  ingress:
    enabled: true
    hosts:
      - host: example.com
        paths:
          - path: /
            portName: http
    tls:
      - secretName: bsr-tls-cert
        hosts:
          - example.com
  deployment:
    extraVolumeMounts:
      - mountPath: /config/secrets/certificates/cert.pem
        name: certificate
        readOnly: true
        subPath: cert.pem
    extraVolumes:
      - name: certificate
        secret:
          secretName: tls-cert
          items:
            - key: cert.pem
              path: cert.pem
  clientTLS:
    extraCerts:
      - /config/secrets/certificates/cert.pem
observability:
  metrics:
    use: prometheus
    runtime: true
    prometheus:
      podLabels: # This is required if enabling network policies.
        app: prometheus
      port: 9090
      path: /metrics
Using the bsr.yaml Helm values file, install the Helm Chart into the cluster, setting the correct BSR version:
$ helm install bsr oci://us-docker.pkg.dev/buf-images-1/bsr/charts/bsr \
--version "1.x.x" \
--namespace=bsr \
--values bsr.yaml
The BSR instance should now be up and running on https://<host>.
To help verify that the BSR is working correctly, we expose a status page to server admins at https://<host>/-/status. It's also accessible without authentication on port 3001 of each bufd pod, at http://<bufd pod ip>:3001/-/status. More information about the status page can be found here.
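As a quick smoke test, you can port-forward to the bufd workload and hit the unauthenticated status endpoint (the deployment name bufd is an assumption; substitute the actual deployment or a pod name from kubectl get pods):

```shell
$ kubectl --namespace bsr port-forward deploy/bufd 3001:3001 &
$ curl -fsS http://localhost:3001/-/status
```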