Firehose MetaRouter Setup

After completing the Firehose and Redshift setup, you can configure the Amazon Data Firehose integration in MetaRouter. Follow the steps below to complete the setup process.


Requirements

Before proceeding, ensure you have the following:

  1. AWS Access Key and Secret Key
    • These should have been recorded during the "Get Access Key" stage.
    • If you did not save them or have lost them, generate a new key by following the steps here (link to section).
  2. Firehose Stream Name
    • This should have been recorded during the "Create Firehose Stream" stage.
    • If not, you can find it in the Firehose Streams list on the Amazon Kinesis Data Firehose page.
  3. Region
    • Located in the top-right corner of the AWS Management Console.
    • This is the region where your Firehose stream was created, e.g. eu-north-1. You can confirm all three values with the sketch below.
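
If you want to confirm these three values before configuring MetaRouter, a quick call to the Firehose API can catch typos early. Below is a minimal sketch assuming Python with boto3 installed; the stream name, region, and credentials are placeholders for the values you recorded:

```python
# Confirm that the access key, secret key, region, and stream name all line up.
# Assumes boto3 is installed (pip install boto3); all values are placeholders.
import boto3

client = boto3.client(
    "firehose",
    region_name="eu-north-1",               # region noted in the AWS console
    aws_access_key_id="YOUR_ACCESS_KEY",    # from the "Get Access Key" stage
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Raises an error if the credentials are invalid or the stream does not exist.
response = client.describe_delivery_stream(DeliveryStreamName="my-firehose-stream")
print(response["DeliveryStreamDescription"]["DeliveryStreamStatus"])  # expect "ACTIVE"
```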

Adding an Amazon Data Firehose Integration

To integrate Firehose with MetaRouter:

  1. Open the Integration Library in MetaRouter.
  2. Add an Amazon Data Firehose integration.
  3. Fill out the Connection Parameters as follows:
| Connection Parameter | Description |
| --- | --- |
| ACCESS_KEY | AWS access key used for authentication. |
| REGION | AWS region where the Firehose stream is deployed. |
| SECRET_KEY | AWS secret key used for secure API access. |
| STREAM_NAME | Name of the Firehose stream receiving event data. |
  4. Customize the Playbook (if needed) so that its output is compatible with your Redshift table schema.
  5. Deploy the pipeline with the integration.
  6. Use the generated AJS file from the pipeline.
  7. Test the integration to verify that events are properly inserted into Redshift tables. To rule out stream-side issues first, you can also send a synthetic record directly to Firehose, as shown below.
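
Before generating traffic through a live site, it can help to confirm that the stream itself accepts records. A minimal sketch, again assuming boto3 and the placeholder region and stream name from above; the event payload is hypothetical and should mirror the shape your playbook emits:

```python
# Send one synthetic record to the Firehose stream to confirm delivery works
# independently of MetaRouter. The payload below is a hypothetical event shape;
# match it to the columns your Redshift staging table expects.
import json
import boto3

client = boto3.client("firehose", region_name="eu-north-1")  # placeholder region

test_event = {"event": "Test Event", "userId": "test-user", "timestamp": "2024-01-01T00:00:00Z"}

response = client.put_record(
    DeliveryStreamName="my-firehose-stream",  # placeholder stream name
    Record={"Data": (json.dumps(test_event) + "\n").encode("utf-8")},  # newline keeps records line-delimited
)
print(response["RecordId"])  # a record ID confirms Firehose accepted the put
```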

How to Test Your Integration

Note: Data ingestion may take time depending on your Firehose buffer settings; you can inspect the configured buffering hints with the sketch below.
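
To see how long you might wait, you can read the buffering hints off the stream description. A minimal sketch, assuming a Redshift destination and the same placeholder region and stream name as above; the response structure follows the Firehose DescribeDeliveryStream API:

```python
# Check how long Firehose may buffer before delivering. For a Redshift
# destination, records are staged in S3 first, so the S3 buffering hints
# bound the worst-case ingestion delay.
import boto3

client = boto3.client("firehose", region_name="eu-north-1")  # placeholder region
desc = client.describe_delivery_stream(DeliveryStreamName="my-firehose-stream")
dest = desc["DeliveryStreamDescription"]["Destinations"][0]

hints = dest["RedshiftDestinationDescription"]["S3DestinationDescription"]["BufferingHints"]
print(hints)  # e.g. {'SizeInMBs': 5, 'IntervalInSeconds': 300}
```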

  1. Open the Redshift Query Editor.

  2. Run the following query to check if data has been received in the staging table:

    ```sql
    SELECT * FROM firehose_staging; -- Check staging table
    ```

  3. If data is missing from the staging table, check your destination tables, since the Scheduled Query may have already moved the data out of staging. You can verify this with:

    ```sql
    SELECT * FROM events; -- Example of a table populated by the Scheduled Query
    ```

  4. If the query returns rows, you should see event data whose columns match your playbook output and Redshift table schema.
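
If you want to automate this check, the same queries can be run from Python. A minimal sketch, assuming Amazon's redshift_connector driver (pip install redshift-connector); the host, database, and credentials are placeholders for your cluster's values:

```python
# Poll the staging and destination tables to confirm events are arriving.
# All connection details are placeholders; use your cluster's endpoint.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.eu-north-1.redshift.amazonaws.com",  # placeholder
    database="dev",
    user="awsuser",
    password="YOUR_PASSWORD",
)

cursor = conn.cursor()
for table in ("firehose_staging", "events"):
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    row_count = cursor.fetchone()[0]
    print(f"{table}: {row_count} rows")

cursor.close()
conn.close()
```

Re-running this while events flow in should show counts moving from the staging table into the destination table as the Scheduled Query runs.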