Kafka

What Is Kafka?

Apache Kafka is a distributed event streaming platform that enables high-throughput, fault-tolerant handling of real-time data feeds. It's designed to handle large volumes of data with low-latency message delivery, making it ideal for building real-time data pipelines and streaming applications.


Product Type: Event Streaming Platform

Integration Type: Starter Kit


Capabilities

  • Configurable batch size for optimizing throughput and latency
  • SASL/PLAIN and SASL/SCRAM authentication (SHA-256 and SHA-512)
  • Automatic connection pooling and management
  • Optional TLS encryption for secure connections

Limitations

  • Requires explicit broker list configuration; no automatic broker discovery
  • Does not handle topic creation or management; topics must be pre-configured
  • Subject to Kafka's own message size limits

Setup

Kafka Cluster Requirements

Before configuring the MetaRouter integration, ensure you have a running Kafka cluster with:

  • Accessible broker endpoints
  • Pre-configured topics
  • Appropriate authentication credentials (if your cluster requires SASL authentication)
  • Appropriate retention policies, partition count, and replication factor configured for your topics

Authentication Setup

SASL/PLAIN: Configure a username and password and set appropriate ACLs for the user.

SASL/SCRAM: Choose between SHA-256 or SHA-512, configure a username and password, and set appropriate ACLs for the user.


Adding a Kafka Integration in MetaRouter

From the integration library, add a Kafka integration and fill out the connection parameters below.

Connection Parameters

  • BROKERS (Required): Comma-separated list of Kafka broker addresses (e.g., broker1:9092,broker2:9092)
  • TOPIC (Required): The Kafka topic to write messages to
  • BATCH_SIZE (Optional): Number of messages to batch before sending (default: 100)
  • COMPRESSION (Optional): Message compression codec: NONE_UNSPECIFIED, snappy, gzip, or uncompressed
  • TLS (Optional): Enable TLS encryption for the connection
  • USERNAME (Required if using SASL): SASL username configured in your Kafka cluster
  • PASSWORD (Required if using SASL): SASL password for the specified username
  • AUTH TYPE (Required if using SASL): SASL mechanism: PLAIN or SCRAM
  • HASH FUNCTION (SCRAM only): Hash algorithm: SHA_256 or SHA_512
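To see how these parameters fit together, here is a minimal sketch that maps them onto librdkafka-style configuration keys (the convention used by the confluent-kafka Python client). The function name and the exact key mapping are illustrative assumptions, not MetaRouter internals.

```python
# Sketch: mapping the integration's parameters onto librdkafka-style keys
# (confluent-kafka Python client convention). Illustrative only.

def build_producer_config(brokers, tls=False, batch_size=100,
                          compression=None, username=None,
                          password=None, auth_type=None,
                          hash_function="SHA_512"):
    config = {
        "bootstrap.servers": brokers,       # BROKERS
        "batch.num.messages": batch_size,   # BATCH_SIZE
    }
    if compression:
        config["compression.type"] = compression  # COMPRESSION
    if username and password:
        if auth_type == "SCRAM":
            # e.g. SCRAM-SHA-512 when HASH FUNCTION is SHA_512
            config["sasl.mechanism"] = "SCRAM-" + hash_function.replace("_", "-")
        else:
            config["sasl.mechanism"] = "PLAIN"
        config["sasl.username"] = username   # USERNAME
        config["sasl.password"] = password   # PASSWORD
        config["security.protocol"] = "SASL_SSL" if tls else "SASL_PLAINTEXT"
    elif tls:
        config["security.protocol"] = "SSL"  # TLS without SASL
    return config

cfg = build_producer_config("broker1:9092,broker2:9092", tls=True,
                            username="svc", password="secret",
                            auth_type="SCRAM")
```

Note that TLS and SASL combine into a single security protocol: SASL over TLS is SASL_SSL, while SASL without TLS is SASL_PLAINTEXT.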

Brokers

Enter your Kafka broker endpoints as a comma-separated list.

Format: broker1:port,broker2:port,broker3:port
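A quick way to sanity-check the broker string before pasting it in is to parse each entry as host:port. The helper below is a hypothetical illustration; MetaRouter performs its own validation.

```python
# Sketch: validate a comma-separated broker list of host:port entries.
# Helper name is illustrative, not part of MetaRouter.

def parse_brokers(brokers: str) -> list:
    endpoints = []
    for entry in brokers.split(","):
        # rpartition splits on the last ":" so IPv4 hosts parse cleanly
        host, sep, port = entry.strip().rpartition(":")
        if not sep or not host or not port.isdigit():
            raise ValueError(f"invalid broker endpoint: {entry!r}")
        endpoints.append((host, int(port)))
    return endpoints

print(parse_brokers("broker1:9092, broker2:9092"))
# → [('broker1', 9092), ('broker2', 9092)]
```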

Topic

Enter the name of a pre-configured Kafka topic where events will be published. Topic creation and management must be handled outside of MetaRouter.

TLS (Optional)

Enables encryption in transit. TLS does not replace SASL authentication; for secure connections, enable both TLS and SASL. Disabled by default.

Batch Size (Optional)

The number of events to batch before sending to Kafka. Larger batches improve throughput but increase latency; smaller batches reduce latency at the cost of throughput. Typical values range from 10 to 1,000 events per batch, depending on event size and throughput requirements. Defaults to 100 if not specified.
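As a back-of-envelope check on the latency side of that tradeoff: if the producer waited for a full batch before sending, the worst-case added delay would be the batch size divided by the event rate. (Real clients also flush on a timer, so this is an upper bound; the numbers below are illustrative.)

```python
# Worst-case batching delay if a send waits for a full batch.
# Real producers also flush on a time interval, so this is an upper bound.

def max_batch_delay_ms(batch_size: int, events_per_sec: float) -> float:
    return batch_size / events_per_sec * 1000.0

# At 500 events/s, a batch of 100 can add up to 200 ms before a send.
print(max_batch_delay_ms(100, 500))  # → 200.0
```

This is why low-traffic topics often benefit from smaller batch sizes: at low event rates, a large batch can sit partially filled for a long time.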

Compression (Optional)

Select the message compression codec to use when publishing events to Kafka.

  • NONE_UNSPECIFIED: No compression specified; uses the Kafka broker default
  • snappy: Fast compression with a good balance of speed and size reduction
  • gzip: Higher compression ratio; best for reducing message size at the cost of some speed
  • uncompressed: Explicitly disables compression regardless of broker defaults
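The size side of the tradeoff is easy to see with Python's standard-library gzip codec on a repetitive JSON payload (snappy has no stdlib codec, so gzip stands in here; the payload is made up for illustration):

```python
import gzip
import json

# Illustration: gzip on a repetitive JSON event payload. Event fields
# and counts are invented for the example.
events = [{"event": "page_view", "url": "/home", "user_id": i}
          for i in range(500)]
raw = json.dumps(events).encode()
packed = gzip.compress(raw)

# Repetitive event data compresses substantially.
print(len(raw), len(packed))
assert len(packed) < len(raw)
```

Event streams with many repeated field names and values tend to compress well, which is why gzip pays off when bandwidth or storage is the constraint and snappy when CPU and latency are.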

Authentication (SASL)

SASL authentication is required only if your Kafka cluster uses SASL. If so, configure the following fields.

  • Plain text (PLAIN): Basic username/password authentication. Use only with TLS enabled.
  • Encrypted (SCRAM): Username/password with password hashing. More secure than PLAIN.

When using SCRAM, select a hash algorithm; SHA-512 is recommended for stronger security.
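Part of what makes SCRAM more secure than PLAIN is that the password is never stored or sent in plaintext: per RFC 5802, both sides work from a salted password derived with PBKDF2 over HMAC using the chosen hash. A minimal stdlib sketch of that derivation (the salt and iteration count here are illustrative values, not your cluster's settings):

```python
import hashlib
import os

# SCRAM's salted-password derivation (RFC 5802): PBKDF2 with HMAC using
# the selected hash function. Salt and iterations are illustrative.
def scram_salted_password(password: str, salt: bytes,
                          iterations: int, hash_name: str) -> bytes:
    return hashlib.pbkdf2_hmac(hash_name, password.encode(), salt, iterations)

salt = os.urandom(16)
key = scram_salted_password("secret", salt, 4096, "sha512")
print(len(key))  # → 64 (SHA-512 digest size; SHA-256 would give 32)
```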


Additional Kafka Documentation