Kafka
What Is Kafka?
Apache Kafka is a distributed event streaming platform that enables high-throughput, fault-tolerant handling of real-time data feeds. It's designed to handle large volumes of data with low-latency message delivery, making it ideal for building real-time data pipelines and streaming applications.
Product Type: Event Streaming Platform
Integration Type: Starter Kit
Capabilities
- Configurable batch size for optimizing throughput and latency
- SASL/PLAIN and SASL/SCRAM authentication (SHA-256 and SHA-512)
- Automatic connection pooling and management
- Optional TLS encryption for secure connections
Limitations
- Requires explicit broker list configuration; no automatic broker discovery
- Does not handle topic creation or management; topics must be pre-configured
- Subject to Kafka's own message size limits
Setup
Kafka Cluster Requirements
Before configuring the MetaRouter integration, ensure you have a running Kafka cluster with:
- Accessible broker endpoints
- Pre-configured topics
- Appropriate authentication credentials (if your cluster requires SASL authentication)
- Appropriate retention policies, partition count, and replication factor configured for your topics
Authentication Setup
SASL/PLAIN: Configure a username and password and set appropriate ACLs for the user.
SASL/SCRAM: Choose between SHA-256 or SHA-512, configure a username and password, and set appropriate ACLs for the user.
Adding a Kafka Integration in MetaRouter
From the integration library, add a Kafka integration and fill out the connection parameters below.
Connection Parameters
| Parameter | Required | Description |
|---|---|---|
| BROKERS | Yes | Comma-separated list of Kafka broker addresses (e.g., broker1:9092,broker2:9092) |
| TOPIC | Yes | The Kafka topic to write messages to |
| BATCH_SIZE | No | Number of messages to batch before sending (default: 100) |
| COMPRESSION | No | Message compression codec: NONE_UNSPECIFIED, snappy, gzip, or uncompressed |
| TLS | No | Enable TLS encryption for the connection |
| USERNAME | If using SASL | SASL username configured in your Kafka cluster |
| PASSWORD | If using SASL | SASL password for the specified username |
| AUTH TYPE | If using SASL | SASL mechanism: PLAIN or SCRAM |
| HASH FUNCTION | SCRAM only | Hash algorithm: SHA_256 or SHA_512 |
Brokers
Enter your Kafka broker endpoints as a comma-separated list.
Format: broker1:port,broker2:port,broker3:port
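As an illustration of the expected format, the sketch below validates a broker list string before use. The `parse_brokers` helper is hypothetical and not part of MetaRouter; it simply shows what a well-formed value looks like.

```python
# Illustrative helper (not part of MetaRouter): validate a
# comma-separated broker list of the form host:port,host:port.
def parse_brokers(brokers: str) -> list[tuple[str, int]]:
    """Split a comma-separated broker string into (host, port) pairs."""
    endpoints = []
    for entry in brokers.split(","):
        host, sep, port = entry.strip().partition(":")
        if not sep or not host or not port.isdigit():
            raise ValueError(f"invalid broker endpoint: {entry!r}")
        endpoints.append((host, int(port)))
    return endpoints

print(parse_brokers("broker1:9092,broker2:9092"))
# → [('broker1', 9092), ('broker2', 9092)]
```

A value missing its port (e.g., `broker1`) would be rejected, which is the kind of misconfiguration to catch before saving the integration.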
Topic
Enter the name of a pre-configured Kafka topic where events will be published. Topic creation and management must be handled outside of MetaRouter.
TLS (Optional)
Enables encryption in transit. Does not replace SASL authentication — for secure connections, enable both TLS and SASL. Disabled by default.
Batch Size (Optional)
The number of events to batch before sending to Kafka. Larger batches improve throughput but increase latency; smaller batches reduce latency but lower throughput. Typical values range from 10 to 1000 events per batch, depending on event size and throughput requirements. Defaults to 100 if not specified.
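The throughput side of this tradeoff can be made concrete: for a fixed event stream, the batch size determines how many produce requests the client must make. The numbers below are purely illustrative.

```python
import math

# Illustration only: how BATCH_SIZE affects the number of produce
# requests needed to deliver a fixed number of events (hypothetical
# volumes; real behavior also depends on flush timing and event size).
def sends_required(num_events: int, batch_size: int) -> int:
    """Number of produce requests needed to deliver num_events."""
    return math.ceil(num_events / batch_size)

for batch_size in (10, 100, 1000):
    print(batch_size, "->", sends_required(10_000, batch_size))
# 10 -> 1000 requests, 100 -> 100 requests, 1000 -> 10 requests
```

Fewer, larger requests amortize per-request overhead (hence better throughput), but each event waits longer for its batch to fill (hence higher latency).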
Compression (Optional)
Select the message compression codec to use when publishing events to Kafka.
| Option | Description |
|---|---|
| NONE_UNSPECIFIED | No compression specified; uses Kafka broker default |
| snappy | Fast compression with a good balance of speed and size reduction |
| gzip | Higher compression ratio; best for reducing message size at the cost of some speed |
| uncompressed | Explicitly disables compression regardless of broker defaults |
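To see why compression matters for event payloads, the sketch below compresses a batch of repetitive JSON events with gzip from the Python standard library (snappy behaves similarly but requires a third-party package). The event shape is made up for illustration.

```python
import gzip
import json

# Hypothetical batch of analytics events; repetitive field names and
# values are typical of event streams and compress well.
events = [{"event": "page_view", "url": "/home", "user_id": i} for i in range(100)]
raw = json.dumps(events).encode("utf-8")
compressed = gzip.compress(raw)

print(f"raw: {len(raw)} bytes, gzip: {len(compressed)} bytes")
assert len(compressed) < len(raw)
```

The more repetitive your event schema, the larger the savings; for very small or already-compressed payloads, the benefit shrinks.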
Authentication (SASL)
Configure SASL only if your Kafka cluster requires it. If so, fill in the following fields.
| Type | Description |
|---|---|
| Plain text (PLAIN) | Basic username/password. Use only with TLS enabled. |
| Encrypted (SCRAM) | Username/password with password hashing. More secure. |
When using SCRAM, select a hash algorithm — SHA-512 is recommended for stronger security.
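To sanity-check credentials outside MetaRouter, the parameters above can be mapped onto a standard Kafka client configuration. The sketch below uses librdkafka-style keys (as accepted by the confluent-kafka Python client); the username, password, and broker addresses are placeholders, and how MetaRouter applies these settings internally is not specified here.

```python
# Illustrative mapping of the integration's connection parameters onto
# librdkafka-style client configuration keys. All values are examples.
config = {
    "bootstrap.servers": "broker1:9092,broker2:9092",  # BROKERS
    "security.protocol": "SASL_SSL",                   # TLS enabled + SASL
    "sasl.mechanism": "SCRAM-SHA-512",                 # AUTH TYPE + HASH FUNCTION
    "sasl.username": "metarouter-writer",              # USERNAME (placeholder)
    "sasl.password": "change-me",                      # PASSWORD (placeholder)
    "compression.type": "gzip",                        # COMPRESSION
    "batch.num.messages": 100,                         # BATCH_SIZE
}
print(config["security.protocol"])
```

With TLS disabled but SASL enabled, `security.protocol` would be `SASL_PLAINTEXT`; with PLAIN authentication, `sasl.mechanism` would be `PLAIN` and no hash function applies.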
Additional Kafka Documentation