bufstream.yaml#

The bufstream.yaml file defines configuration for a Bufstream broker. The Bufstream CLI can be instructed to use the configuration file with the -c flag.
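
An illustrative sketch of a small configuration that names the broker, selects etcd for metadata storage, and points data storage at an S3 bucket; every value shown is a placeholder, not a default:

  # bufstream.yaml (placeholder values)
  name: bufstream-0
  cluster: my-cluster
  zone: us-east-1a
  etcd:
    addresses:
      - host: etcd.example.com
        port: 2379
  storage:
    provider: S3
    region: us-east-1
    bucket: my-bufstream-bucket

The file is then passed to the CLI with the -c flag, e.g. bufstream serve -c bufstream.yaml (the serve subcommand is assumed here; only the -c flag is documented on this page).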

Fields#

name#

string

The name of this Bufstream broker.

Names should be unique for each broker in the cluster. Defaults to the hostname. Do not store sensitive information in this field. The name may be stored in logs, traces, metrics, etc.

cluster#

string

The name of the cluster.

All brokers in the same cluster should have the same value. Do not store sensitive information in this field. The cluster path may be stored in keys, logs, traces, metrics, etc.

zone#

string

The location of the broker, e.g., the datacenter/rack/availability zone where the broker is running.

If unspecified, the broker will attempt to resolve an availability zone from the host's metadata service. Do not store sensitive information in this field. The zone may be stored in keys, logs, traces, metrics, etc.

observability#

ObservabilityConfig

Configuration of observability and debugging utilities exposed by the broker.

etcd#

EtcdConfig

If specified, the broker will use etcd as the metadata storage of the cluster.

postgres#

PostgresConfig

If specified, the broker will use Postgres as the metadata storage of the cluster.

spanner#

SpannerConfig

If specified, the broker will use Google Cloud Spanner as the metadata storage of the cluster.

in_memory#

bool

If true, the broker will use an in-memory cache for metadata storage.

This option is intended for local use and testing, and only works with single-broker clusters.

auto_migrate_metadata_storage#

bool

If true, the broker will run migrations for the metadata storage on startup.

storage#

StorageConfig

The data storage configuration.

kafka#

KafkaConfig

Configuration for the Kafka interface.

data_enforcement#

DataEnforcementConfig

Configuration for data enforcement via schemas of records flowing in and out of the broker.

iceberg#

IcebergConfig

Configuration for Iceberg integration, for exposing Kafka topics as tables in Apache Iceberg v2 format.

labels#

map<string, string>

Labels associated with the Bufstream broker.

Labels may appear in logs, metrics, and traces.

connect_address#

HostPort

The address to listen on for inter-broker Connect RPCs.

By default, brokers bind to a random, available port on localhost.

admin_address#

HostPort

The address to listen on for Admin RPCs.

admin_tls#

TLSListenerConfig

If populated, enables and enforces TLS termination on the Admin RPCs server.

data_dir#

string

Root directory where data is stored when the embedded etcd server is used or the storage provider is LOCAL_DISK. In all other cases, Bufstream does not write data to disk.

The default for Darwin and Linux is $XDG_DATA_HOME/bufstream if $XDG_DATA_HOME is set, otherwise $HOME/.local/share/bufstream.

If Bufstream supports Windows in the future, the default will be %LocalAppData%\bufstream.

Sub-Messages#

ObservabilityConfig#

Configuration for observability primitives

log_level#

Level

The log level. Defaults to INFO.

metrics#

MetricsConfig

Configuration for metrics.

debug_address#

HostPort

If configured, pprof and Prometheus metrics are exposed on this address.

traces#

TracesConfig

Configuration for traces.

exporter#

ExporterDefaults

Default values for metrics and traces exporters.

sensitive_information_redaction#

Redaction

Redacts sensitive information, such as topic names, before it is added to metrics, traces, and logs.
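
As an illustration, an observability block that raises the log level, exposes a debug endpoint, and sends metrics and traces to an OTLP collector might look like the following; the addresses and sample ratio are placeholders:

  observability:
    log_level: DEBUG
    debug_address:
      host: 0.0.0.0
      port: 9090                                 # pprof and Prometheus metrics served here
    exporter:
      address: otel-collector.example.com:4317   # placeholder collector address
    metrics:
      exporter_type: OTLP_GRPC
    traces:
      exporter_type: OTLP_GRPC
      trace_ratio: 0.25                          # sample 25% of traces
    sensitive_information_redaction: OPAQUE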

EtcdConfig#

Configuration options specific to etcd metadata storage.

addresses#

list<HostPort>

The etcd node addresses.

Currently, Bufstream assumes no path-prefix when connecting to the etcd cluster.
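
A sketch of an etcd block with two placeholder node addresses:

  etcd:
    addresses:
      - host: etcd-0.example.com
        port: 2379
      - host: etcd-1.example.com
        port: 2379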

PostgresConfig#

Configuration options specific to postgres metadata storage.

dsn#

DataSource (required)

DSN is the data source name or database URL used to configure connections to the database.

cloud_sql_proxy#

CloudSQLProxy

Configuration to connect to a Cloud SQL database. If set, the database will be dialed via the proxy.

pool#

PostgresDBConnectionPool

Configuration settings for the database connection pool.
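
A sketch of a postgres block that reads the DSN from an environment variable (the variable name is illustrative) and bounds the connection pool:

  postgres:
    dsn:
      env_var: BUFSTREAM_POSTGRES_DSN   # illustrative variable name
    pool:
      max_connections: 20
      min_connections: 2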

SpannerConfig#

Configuration options specific to Spanner metadata storage.

project_id#

string (required)

The Spanner project ID.

instance_id#

string (required)

The Spanner instance ID.

database_name#

string

The Spanner database name.
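
A sketch of a spanner block with placeholder project, instance, and database names:

  spanner:
    project_id: my-gcp-project
    instance_id: bufstream-metadata
    database_name: bufstream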

StorageConfig#

Configuration options specific to data storage.

provider#

Provider

The data storage provider.

If unspecified, a provider is automatically resolved with the following heuristics:

  • If bucket is set, we attempt to resolve metadata from the host
    • If the AWS metadata service responds, we assume S3
    • Otherwise, we assume GCS
  • If in_memory is set on the root configuration, we assume INLINE
  • Otherwise, we assume LOCAL_DISK

region#

string

The region in which the bucket exists.

This field defaults to the region of the broker's host.

bucket#

string

The object storage bucket where data is stored.

This field is required for GCS and S3 providers.

directory_bucket#

bool

Whether the bucket is a directory bucket.

A directory bucket does not sort objects by path and only supports prefixes ending in /. See https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html

prefix#

string

The path prefix of objects stored in the data storage.

Defaults to bufstream/.

This field is only used by the GCS and S3 providers.

endpoint#

string

The provider's HTTPS endpoint to use instead of the default.

force_path_style#

bool

Enable path-based routing (instead of subdomains) for buckets.

access_key_id#

DataSource

Specifies the AWS access key ID for authentication to the bucket.

By default, authentication is performed using the metadata service of the broker's host. If set, secret_access_key must also be provided.

secret_access_key#

DataSource

Specifies the AWS secret access key for authentication to the bucket.

By default, authentication is performed using the metadata service of the broker's host. If set, access_key_id must also be provided.
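
Putting these fields together, here is an illustrative storage block for an S3-compatible service with static credentials sourced from environment variables; the endpoint, bucket, and variable names are placeholders:

  storage:
    provider: S3
    region: us-east-1
    bucket: my-bufstream-bucket
    prefix: bufstream/                 # the default prefix
    endpoint: https://s3.example.com   # omit to use the provider's default endpoint
    force_path_style: true
    access_key_id:
      env_var: AWS_ACCESS_KEY_ID
    secret_access_key:
      env_var: AWS_SECRET_ACCESS_KEY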

KafkaConfig#

Configuration options specific to the broker's Kafka interface

address#

HostPort

The address the Kafka server should listen on.

Defaults to a random available port on localhost.

public_address#

HostPort

The public address clients should use to connect to the Kafka server, if different from address.

tls#

TLSListenerConfig

If populated, enables and enforces TLS termination on the Kafka server.

fetch_eager#

bool

Whether a fetch should return as soon as any records are available.

When false, a fetch waits for every topic/partition to be queried. When true, a fetch returns as soon as any topic/partition has records, and the rest are fetched in the background under the assumption that the client will try to fetch them in a subsequent request.

Dynamically configurable as bufstream.kafka.fetch.eager.

fetch_sync#

bool

Whether fetches from different readers should be synchronized to improve cache hit rates.

Dynamically configurable as bufstream.kafka.fetch.sync.

produce_concurrent#

bool

Whether records from a producer to different topic/partitions may be sequenced concurrently instead of serially.

Dynamically configurable as bufstream.kafka.produce.concurrent.

zone_balance_strategy#

BalanceStrategy

How to balance clients across zones when the client does not specify a zone.

Dynamically configurable as bufstream.kafka.zone.balance.strategy.

partition_balance_strategy#

BalanceStrategy

How to balance topic/partitions across bufstream brokers.

Dynamically configurable as bufstream.kafka.partition.balance.strategy.

num_partitions#

int32

The default number of partitions to use for a new topic.

Dynamically configurable as num.partitions.

authentication#

AuthenticationConfig

If populated, enables and enforces authentication.
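
A sketch of a kafka block that listens on all interfaces, advertises a separate public address, and adjusts a few of the dynamically configurable settings; the host names and values are placeholders:

  kafka:
    address:
      host: 0.0.0.0
      port: 9092
    public_address:
      host: kafka.example.com
      port: 9092
    num_partitions: 6
    fetch_eager: true
    produce_concurrent: true
    zone_balance_strategy: BALANCE_STRATEGY_CLIENT_ID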

DataEnforcementConfig#

Configuration of data enforcement policies applied to records.

schema_registries#

list<SchemaRegistry>

The schema registries used for data enforcement.

produce#

list<DataEnforcementPolicy>

Policies to attempt to apply to produce requests. The first policy that matches the topic will be used. If none match, no data enforcement will occur.

fetch#

list<DataEnforcementPolicy>

Policies to attempt to apply to fetch responses. The first policy that matches the topic will be used. If none match, no data enforcement will occur.
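
An illustrative sketch that registers a single Confluent Schema Registry and rejects produce batches whose record values fail parsing or semantic validation; the registry name, URL, and topic names are placeholders:

  data_enforcement:
    schema_registries:
      - name: csr
        confluent:
          url: https://csr.example.com
    produce:
      - topics:
          in:
            values: [orders, payments]   # placeholder topic names
        schema_registry: csr
        values:
          on_parse_error: REJECT_BATCH
          validation:
            on_error: REJECT_BATCH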

IcebergConfig#

Configuration of Iceberg integration settings, for archiving Kafka topic data to Iceberg tables.

catalogs#

list<IcebergCatalog>

The catalogs that host Iceberg table metadata.
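
A sketch of an iceberg block with a single REST catalog; the catalog name, URL, and warehouse are placeholders:

  iceberg:
    catalogs:
      - name: rest-catalog
        rest:
          url: https://iceberg-catalog.example.com
          warehouse: s3://my-warehouse   # only needed if the catalog requires it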

HostPort#

A network host and optional port pair.

host#

string

A hostname or IP address to connect to or listen on.

port#

uint32

The associated port. If unspecified, refer to the field documentation for default behavior.

TLSListenerConfig#

TLSListenerConfig is TLS/SSL configuration options for servers. At least one certificate must be specified.

certificates#

list<Certificate>

Certificates to present to the client. The first certificate compatible with the client's requirements is selected automatically.

client_auth#

Type

Declares the policy the server follows for mutual TLS (mTLS).

client_cas#

list<DataSource>

The PEM-encoded certificate authorities used by the server to validate the client certificates. This field cannot be empty if client_auth performs verification.
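
For example, a sketch of a kafka.tls listener that terminates TLS and requires verified client certificates; the file paths are placeholders:

  kafka:
    tls:
      certificates:
        - chain:
            path: /etc/bufstream/tls/server.crt
          private_key:
            path: /etc/bufstream/tls/server.key
      client_auth: REQUIRE_AND_VERIFY_CERT
      client_cas:
        - path: /etc/bufstream/tls/client-ca.crt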

MetricsConfig#

Configuration for metrics.

exporter_type#

ExporterType

The type of exporter to use.

address#

string

The endpoint for the OTLP exporter, with a host name and an optional port number. If this is not set, it falls back to observability.exporter.address. If that is not set, it falls back to OTEL's default behavior, using the host and port of OTEL_EXPORTER_OTLP_METRICS_ENDPOINT, then OTEL_EXPORTER_OTLP_ENDPOINT, and finally localhost:4318 for OTLP_HTTP or localhost:4317 for OTLP_GRPC.

For OTLP_HTTP, metrics.path will be appended to this address.

path#

string

The URL path used by the OTLP_HTTP exporter. Defaults to "/v1/metrics". This is appended to the host and port of the endpoint that the exporter connects to.

insecure#

bool

If set to true, TLS is disabled for the OTLP exporter.

enable_labels#

map<string, LabelValueList>

A map from label name to the allowed list of values for the label.

Labels are custom key-value pairs that are added to logs, metrics, and traces.

Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.

Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed. Keys must start with a lowercase letter or international character.

Labels can be specified in Kafka client ids (e.g. "my-client-id;label.foo=bar") or in topic configuration.

Only labels in this list are added to metrics. If not set, no labels are added to metrics.

aggregation#

Aggregation

This option, typically set to reduce cardinality, aggregates some metrics over certain attributes, such as kafka.topic.name.

TracesConfig#

Configuration for traces.

exporter_type#

ExporterType

The type of exporter to use.

address#

string

The endpoint for the OTLP exporter, with a host name and an optional port number. If this is not set, it falls back to observability.exporter.address. If that is not set, it falls back to OTEL's default behavior, using the host and port of OTEL_EXPORTER_OTLP_TRACES_ENDPOINT, then OTEL_EXPORTER_OTLP_ENDPOINT, and finally localhost:4318 for OTLP_HTTP or localhost:4317 for OTLP_GRPC.

For OTLP_HTTP, traces.path will be appended to this address.

path#

string

The URL path used by the OTLP_HTTP exporter. Defaults to "/v1/traces". This is appended to the host and port of the endpoint that the exporter connects to.

insecure#

bool

If set to true, TLS is disabled for the OTLP exporter.

trace_ratio#

float64

OpenTelemetry trace sample ratio, defaults to 1.

ExporterDefaults#

Default configuration for metrics and traces exporters.

address#

string

The default base address used by OTLP_HTTP and OTLP_GRPC exporters, with a host name and an optional port number. For OTLP_HTTP, "/v1/{metrics, traces}" will be appended to this address, unless the path is overridden by metrics.path or traces.path. If port is unspecified, it defaults to 4317 for OTLP_GRPC and 4318 for OTLP_HTTP.

insecure#

bool

If set to true, TLS is disabled for the OTLP exporter. This can be overridden by metrics.insecure or traces.insecure.

DataSource#

Configuration values sourced from various locations.

path#

string

A file path to the data relative to the current working directory. Trailing newlines are stripped from the file contents.

env_var#

string

An environment variable containing the data.

string#

string

An inline string of the data.

bytes#

base64-bytes

An inline byte blob of the data.

encoding#

Encoding

The encoding of the data source value. Defaults to PLAINTEXT.
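
For example, a sketch of a basic_auth block sourcing its username from an environment variable and its password from a file; the names and paths are placeholders, and inline values can instead use string or bytes together with encoding:

  basic_auth:
    username:
      env_var: CSR_USERNAME              # read from the environment
    password:
      path: /etc/bufstream/csr-password  # read from a file; trailing newlines stripped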

CloudSQLProxy#

Configuration options specific to the Cloud SQL Proxy.

icn#

string (required)

ICN is the Cloud SQL instance's connection name, typically in the format "project-name:region:instance-name".

iam#

bool

Use IAM auth to connect to the Cloud SQL database.

private_ip#

bool

Use private IP to connect to the Cloud SQL database.

PostgresDBConnectionPool#

Configuration settings for the PostgreSQL connection pool.

max_connections#

int32

The maximum size of the connection pool. Defaults to 20.

min_connections#

int32

The minimum size of the connection pool. Defaults to 0.

AuthenticationConfig#

sasl#

SASLConfig

mtls#

MutualTLSAuthConfig

If set, will use the configured mTLS for authentication.

This acts as a fallback if SASL is also enabled.

max_receive_bytes#

int64

The maximum receive size allowed before and during initial authentication. Default receive size is 512KB. Set to -1 for no limit.
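
A sketch enabling SASL PLAIN with a single credential pair and an mTLS fallback; the username, environment variable, and limit are placeholders:

  kafka:
    authentication:
      sasl:
        plain:
          credentials:
            - username:
                string: service-account
              password:
                env_var: BUFSTREAM_SASL_PASSWORD
      mtls:
        principal_source: SUBJECT_COMMON_NAME
      max_receive_bytes: 1048576   # 1 MiB; set to -1 for no limit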

SchemaRegistry#

A single schema registry used in data enforcement.

name#

string

Name of this registry, used to disambiguate multiple registries used across policies.

confluent#

CSRConfig

Confluent Schema Registry

DataEnforcementPolicy#

A set of policies to apply data enforcement rules on records flowing into or out of Kafka.

topics#

StringMatcher

Apply these policies only if the topic of the record(s) matches. If no topics are specified, the policy will always be applied.

schema_registry#

string (required)

The schema registry to use for retrieving schemas for this policy.

keys#

Element

The policy to apply to a record's key. If unset, enforcement will not occur.

values#

Element

The policy to apply to a record's value. If unset, enforcement will not occur.

IcebergCatalog#

A single catalog server, used to maintain an Iceberg table by updating its schema and adding and removing data files from the table.

name#

string

Name of this catalog, used to disambiguate multiple catalogs used across topics and tables.

rest#

RESTCatalogConfig

REST catalog. Valid table names must be in the form "namespace.table". The namespace may contain multiple components such as "ns1.ns2.ns3.table". The underlying catalog implementation that provides the REST API may impose further constraints on table and namespace naming.

Also see https://github.com/apache/iceberg/blob/main/open-api/rest-catalog-open-api.yaml

bigquery_metastore#

BigQueryMetastoreConfig

Google Cloud BigQuery Metastore. Valid table names must be in the form "dataset.table". This catalog is still in Preview/Beta but is expected to eventually replace usages of Google Cloud BigLake Metastore.

aws_glue_data_catalog#

AWSGlueDataCatalogConfig

AWS Glue Data Catalog. Valid table names must be in the form "database.table".

Certificate#

A certificate chain and private key pair.

chain#

DataSource (required)

The PEM-encoded leaf certificate, which may contain intermediate certificates following the leaf certificate to form a certificate chain.

private_key#

DataSource (required)

The PEM-encoded (unencrypted) private key of the certificate chain.

Aggregation#

Configuration for metrics aggregation, taking precedence over sensitive information redaction.

topics#

bool

Aggregate metrics across all topics to avoid cardinality issues with clusters with a large number of topics. Metrics that support this aggregation will report the kafka.topic.name attribute as _all_topics_. NOTE: This implies partitions aggregation, which omits metrics like bufstream.kafka.topic.partition.offset.high_water_mark.

partitions#

bool

Aggregate metrics across all partitions to avoid cardinality issues with clusters with a large number of partitions. Metrics that support aggregation will report the kafka.partition.id attribute as -1, while some metrics, such as bufstream.kafka.topic.partition.offset.high_water_mark will be omitted if partition level aggregation is enabled.

consumer_groups#

bool

Aggregate metrics across all consumer groups to avoid cardinality issues with clusters with a large number of groups. Metrics that support aggregation will report the kafka.consumer.group.id as _all_groups_, while some metrics such as bufstream.kafka.consumer.group.generation will be omitted if consumer group level aggregation is enabled.

principal_ids#

bool

Aggregate metrics across all authentication principals to avoid cardinality issues with clusters with a large number of principals. Metrics that support aggregation will report the authentication.principal_id as _all_principal_ids_.
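
A sketch that trades per-topic, per-group, and per-principal detail for lower metric cardinality:

  observability:
    metrics:
      aggregation:
        topics: true            # also implies partition-level aggregation
        consumer_groups: true
        principal_ids: true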

SASLConfig#

plain#

PlainMechanism

Configuration for the PLAIN mechanism. See https://datatracker.ietf.org/doc/html/rfc4616.

anonymous#

bool

Whether to accept ANONYMOUS as a mechanism. Not recommended. See https://datatracker.ietf.org/doc/html/rfc4505.

scram#

SCRAMMechanism

Configuration for the SCRAM-* mechanisms. See https://datatracker.ietf.org/doc/html/rfc5802.

oauth_bearer#

OAuthBearerMechanism

Configuration for the OAUTHBEARER mechanism.

MutualTLSAuthConfig#

principal_source#

PrincipalSource

Where to extract the principal from the client certificate.

CSRConfig#

Configuration for the Confluent Schema Registry (CSR) API.

url#

string

Root URL (including protocol and any required path prefix) of the CSR API.

instance_name#

string

Name of the CSR instance within the BSR. This name is used to disambiguate subjects of the same name within the same schema file. Used exclusively for schema coercion.

tls#

TLSDialerConfig

TLS configuration. If unset and the url field specifies https, a default configuration is used.

basic_auth#

BasicAuth

Authenticate against the CSR API using basic auth credentials

StringMatcher#

Provides match rules to be applied to string values

invert#

bool

Inverts the matching behavior (effectively "not").

all#

bool

Matches all values; useful as a catch-all.

equal#

string

Matches case-sensitively.

in#

StringSet

Matches case-sensitively any of the values in the set.

Element#

Rules applied to either the key or value of a record.

name_strategy#

SubjectNameStrategy

The strategy used to associate this element with the subject name when looking up the schema.

coerce#

bool

If the element is not wrapped in the schema registry's expected format and a schema is associated with it, setting this field to true will attempt to resolve a schema for the element and wrap it correctly.

on_internal_error#

Action

The action to perform for internal errors (e.g., unavailability of the schema registry). If unset, the default behavior is REJECT_BATCH in produce and PASS_THROUGH in fetch.

on_no_schema#

Action

The action to perform for elements that do not have a schema associated with them. If skip_parse is true, this action will apply if the message is not in the appropriate schema wire format. If unset, the default behavior is PASS_THROUGH.

skip_parse#

bool

If true, will skip verifying that the schema applies to the element's contents. If set with coerce, coerced messages will be identified as the latest version of the element's schema and may be erroneous. Setting this field is mutually exclusive with validation and redaction.

on_parse_error#

Action

The action to perform for elements that fail to parse with their associated schema. Fetch policies should not REJECT_BATCH to avoid blocking consumers.

validation#

ValidationPolicy

If set, parsed messages will have semantic validation applied to them based on their schema.

redaction#

RedactPolicy

If set, parsed messages will have the specified fields redacted. For produce, this will result in data loss.

RESTCatalogConfig#

Configuration for the REST Iceberg catalog API.

url#

string

Root URL (including protocol and any required path prefix) of the catalog server.

uri_prefix#

string

Optional URI prefix. This is separate from any URI prefix present in url. This prefix appears after the "/v1/" API path component but before the remainder of the URI path.

warehouse#

string

Optional warehouse location. Some REST catalogs require this property in the client's initial configuration requests.

tls#

TLSDialerConfig

TLS configuration. If unset and the url field specifies https, a default configuration is used.

basic_auth#

BasicAuth

Authenticate against the Iceberg catalog using basic auth credentials.

bearer_token#

DataSource

Authenticate against the Iceberg catalog with the given static bearer token (which could be a long-lived OAuth2 token).

oauth2#

OAuth2Config

Authenticate against the Iceberg catalog with the given OAuth2 configuration.

BigQueryMetastoreConfig#

Configuration for using BigQuery Metastore as an Iceberg catalog.

project#

string

The GCP project of the BigQuery Metastore. If empty, this is assumed to be the current project in which the bufstream workload is running.

location#

string

The location for any BigQuery datasets that are created. Must be present if cloud_resource_connection is present. Otherwise, if absent, datasets cannot be auto-created, so any dataset referenced by an Iceberg table name must already exist.

cloud_resource_connection#

string

The name of a BigQuery Cloud Resource connection. This is only the simple name of the connection, not the full name. Since a BigQuery dataset can only use connections in the same project and location, the full connection name (which includes its project and location) is not necessary.

If absent, no override connection will be associated with created tables.

AWSGlueDataCatalogConfig#

Configuration for using AWS Glue Data Catalog as an Iceberg catalog.

aws_account_id#

string

The AWS account ID of the AWS Glue catalog.

This is normally not necessary as it defaults to the account ID for the IAM user of the workload. But if the workload's credentials are not those of an IAM user or if the Glue catalog is defined in a different AWS account, then this must be specified.

region#

string

The AWS region to indicate in the credential scope of the signature.

This field defaults to the region of the broker's host.

access_key_id#

DataSource

Specifies the AWS access key ID for authentication to the resource.

By default, authentication is performed using the metadata service of the broker's host. If set, secret_access_key must also be provided.

secret_access_key#

DataSource

Specifies the AWS secret access key for authentication to the resource.

By default, authentication is performed using the metadata service of the broker's host. If set, access_key_id must also be provided.

session_token#

DataSource

Specifies the AWS session token when using AWS temporary credentials to access the cloud resource. Omit when not using temporary credentials.

Temporary credentials are not recommended for production workloads, but can be useful in development and test environments to authenticate local processes with remote AWS resources.

This value should only be present when access_key_id and secret_access_key are also set.

LabelValueList#

values#

list<string>

The list of values to allow for the label.

If this is not set, all values are allowed.

PlainMechanism#

credentials#

list<BasicAuth>

SCRAMMechanism#

admin_credentials#

SCRAMCredentials (required)

The bootstrapped admin credentials.

OAuthBearerMechanism#

static#

DataSource

Static JWKS file or content.

remote#

HttpsEndpoint

An endpoint serving JWKS that is periodically refreshed.

audience#

string

If provided, will match the 'aud' claim to this value.

issuer#

string

If provided, will match the 'iss' claim to this value.

TLSDialerConfig#

TLSDialerConfig is TLS/SSL configuration options for clients. The empty value of this message is a valid configuration for most applications.

insecure_skip_verify#

bool

Controls whether a client verifies the server's certificate chain and host name. If true, the dialer accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to machine-in-the-middle attacks and should only be used for testing.

BasicAuth#

Basic Authentication username/password pair.

username#

DataSource (required)

The source of the basic auth username.

password#

DataSource (required)

The source of the basic auth password.

StringSet#

Effectively a wrapped repeated string, to accommodate usage in a oneof or to differentiate between a null and an empty list.

values#

list<string>

ValidationPolicy#

The semantic validation rules applied to parsed elements during data enforcement.

on_error#

Action

The action to perform if the element fails semantic validation defined in the schema. Fetch policies should not REJECT_BATCH to avoid blocking consumers.

RedactPolicy#

The redaction rules applied to parsed elements during data enforcement.

fields#

StringMatcher

Strip fields with matching names.

debug_redact#

bool

Strip fields from the element annotated with the debug_redact field option (proto only).

shallow#

bool

By default, fields will be redacted recursively in the message. If shallow is set to true, only the top-level fields will be evaluated.

OAuth2Config#

Configuration for a client using OAuth2 to generate access tokens for authenticating with a server.

token_endpoint_url#

string

The URL of the token endpoint, used to provision access tokens for use with requests to the catalog. If not specified, this defaults to the catalog's base URL with "v1/oauth/tokens" appended to the URI path, which matches the URI of the endpoint as specified in the Iceberg Catalog's OpenAPI spec.

scope#

string

The scope to request when provisioning an access token. If not specified, defaults to "catalog".

client_id#

DataSource (required)

The client ID used to authenticate to the token endpoint.

client_secret#

DataSource (required)

The client secret used to authenticate to the token endpoint.

tls#

TLSDialerConfig

Optional alternate TLS configuration for the token endpoint. If not specified, accessing the token endpoint will use the same TLS configuration as used for accessing other REST catalog endpoints. (See RESTCatalogConfig.tls).

SCRAMCredentials#

username#

DataSource (required)

hash#

HashFunction

plaintext#

DataSource

salted#

SaltedPassword

HttpsEndpoint#

url#

DataSource

An HTTPS URL for the JWKS file.

refresh_interval#

duration

The keys are loaded from the URL once on startup and cached. This controls the cache duration.

Defaults to an hour. Set to a negative number to never refresh.

tls#

TLSDialerConfig

TLS configuration. If unset, a default configuration is used.

SaltedPassword#

salted_password#

DataSource (required)

salt#

DataSource (required)

iterations#

uint32

Enums#

Level#

DEBUG#

INFO#

WARN#

ERROR#

Redaction#

Redacts sensitive information, such as topic names, before it is added to metrics, traces, and logs.

NONE#

This shows sensitive information as is. For example, topic names will be included as attributes in metrics.

OPAQUE#

This shows sensitive information as opaque strings. For example, topic IDs (UUIDs) will be included, instead of topic names.

Provider#

The provider options for data storage.

S3#

AWS S3 or S3-compatible service (e.g., LocalStack)

GCS#

GCP GCS service

LOCAL_DISK#

Local, on-disk storage

This option is for debugging purposes and should only be used by clusters that share the same filesystem.

INLINE#

Use metadata storage (e.g., in_memory or etcd).

This option should only be used for testing purposes.

AZURE#

Azure Blob Storage

BalanceStrategy#

Balance strategies for distributing client connections and partition assignments within the cluster.

BALANCE_STRATEGY_PARTITION#

Balance based on a hash of the partition ID

BALANCE_STRATEGY_HOST#

Balance based on a hash of the host name

BALANCE_STRATEGY_CLIENT_ID#

Balance based on a hash of the client ID

Type#

NO_CERT#

No client certificate will be requested during the handshake. If any certificates are sent, they will not be verified.

REQUEST_CERT#

Server will request a client certificate during the handshake, but does not require that the client send any certificates. Any certificates sent will not be verified.

REQUIRE_CERT#

Server requires clients to send any certificate during the handshake, but the certificate will not be verified.

VERIFY_CERT_IF_GIVEN#

Server will request a client certificate during the handshake, but does not require that the client send any certificates. If the client does send a certificate, it must be valid.

REQUIRE_AND_VERIFY_CERT#

Server will request and require clients to send a certificate during the handshake. The certificate is required to be valid.

ExporterType#

NONE#

STDOUT#

OTLP_HTTP#

OTLP_GRPC#

PROMETHEUS#

ExporterType#

NONE#

STDOUT#

OTLP_HTTP#

OTLP_GRPC#

Encoding#

PLAINTEXT#

Value is treated as-is.

BASE64#

Value is treated as standard RFC4648 (not URL) base64-encoded with '=' padding.

PrincipalSource#

ANONYMOUS#

Always set the principal to User:Anonymous, even if the client doesn't provide a certificate.

SUBJECT_COMMON_NAME#

The authenticated principal is the subject common name (CN) of the client certificate.

SAN_DNS#

The authenticated principal is the first DNS Subject Alt Name.

SAN_URI#

The authenticated principal is the first URI Subject Alt Name.

SubjectNameStrategy#

The strategy used to create the identifier (subject) used to lookup the schema of a record. Typically the strategy is derived from the topic name and which element (key or value) of the record is being deserialized.

TOPIC_NAME_STRATEGY#

The default Confluent Schema Registry strategy, of the form "<topic-name>-key" or "<topic-name>-value".

Action#

The action to perform when an error occurs.

PASS_THROUGH#

Log and emit metrics on failure, but allow the record and its batch to pass through regardless. Useful for testing a new policy before rolling out to production.

REJECT_BATCH#

Rejects the record batch containing the error, returning an error to the caller. This action should not be used with fetch responses, as rejecting batches on the fetch side will result in blocked consumers.

FILTER_RECORD#

Filters out the record from the batch, while preserving the rest of the data. Note that this will result in data loss if used on the producer side. On the consumer side, invalid records will be skipped.

HashFunction#

SHA256#

SHA512#