Building & Managing Pipelines


Pipelines contain data produced primarily from a single source and route that data to various destinations according to your configurations. They segregate data so you can easily determine how data should be manipulated and which destinations should receive it.

Getting Started with Pipelines

From the MetaRouter Dashboard, in the left-hand navigation panel, select the Pipelines tab.

Build a new Pipeline

In the upper left of the page, click the New Pipeline button. On the popup, fill out the following:

Pipeline Name. This is the friendly name of your pipeline, which should contain helpful information for you to quickly identify the data it contains. This is especially important as your organization adds more pipelines. The name may designate which domain/app the pipeline is collecting data from, or whether it’s a development, staging or production environment.


Ensure that your organization follows a standard naming convention for pipelines to quickly identify pipelines in your workflows.

writeKey. The pipeline’s writeKey is its unique identifier and is included in the code of any event that should be sent to the pipeline. As with pipeline friendly names, your organization should use a standard naming convention for writeKeys to limit confusion.

Assign a Cluster. This assigns the Pipeline to a cluster, which will process the data the Pipeline receives. A Pipeline may only be assigned to one cluster.
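To illustrate how a writeKey routes events, here is a minimal sketch. The payload field names are illustrative only and are not the exact MetaRouter event schema:

```javascript
// Hypothetical sketch: the writeKey attached to an event identifies
// which pipeline should receive it. Field names are assumptions,
// not the exact MetaRouter payload schema.
function buildEvent(writeKey, eventName, properties) {
  return {
    writeKey,                          // routes the event to the matching pipeline
    event: eventName,
    properties,
    timestamp: new Date().toISOString(),
  };
}

const evt = buildEvent("prod-web-main", "Page Viewed", { path: "/home" });
console.log(evt.writeKey); // "prod-web-main"
```

Under a naming convention like `<env>-<domain>-<app>`, a writeKey such as `prod-web-main` is immediately identifiable in your workflows.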

Add Integrations

The next step towards building your pipeline is adding integrations to it. From the Pipelines tab, add your integration(s), ensure that the correct integration revision is selected, and click the Add Integration button. This will connect your source data to your integrations, which will contain various configurations according to your playbook settings. A pipeline must have at least one integration in order to successfully make use of any data that is sent to MetaRouter.

Pipeline Variables

If an integration you add to a Pipeline contains a Pipeline Variable, it will be available to be edited within the Pipeline card. Click the pencil icon next to the field name, add the value you’d like to be added for the selected pipeline, and click Save.

Pipeline variables make integration management easier by designating individual Parameters as variables at the pipeline stage. When a Parameter is marked as a Pipeline Variable, the Parameter can be changed on a per-pipeline basis within the pipelines tab. This means that you don’t need to create a new integration if only one or a few Parameters need to be changed within an integration to make it compatible with a set of other pipelines.
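Conceptually, a Pipeline Variable works like a per-pipeline override on top of a shared integration configuration. The sketch below is illustrative only (the parameter names and merge behavior are assumptions, not MetaRouter internals):

```javascript
// Illustrative sketch: a shared integration's parameters merged with
// per-pipeline variable overrides. "accountId" stands in for any
// Parameter marked as a Pipeline Variable; names are hypothetical.
const integrationDefaults = {
  apiVersion: "v2",
  accountId: null, // marked as a Pipeline Variable, set per pipeline
};

const pipelineVariables = {
  accountId: "acct-staging-42", // value entered on this pipeline's card
};

// The pipeline's effective configuration: defaults plus its own overrides.
const effectiveConfig = { ...integrationDefaults, ...pipelineVariables };
console.log(effectiveConfig.accountId); // "acct-staging-42"
```

Because only the variable differs, the same integration revision can serve many pipelines without duplication.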

Deploying Your Pipeline

Once you have entered all required Pipeline configurations or have made a change to an existing pipeline, you will see the blue Deploy Pipeline button available on the pipeline card. Click this button, which will take you to the Change Verification panel.

Within the Change Verification panel, click the Show Details button next to an integration. Here, you can view the differences (diffs) between the integration versions you’ve added. If you’re building a pipeline for the first time, you won’t see any diffs. If you’ve changed integration revision versions, you will see a diff of the old version vs. the new version. Any fields removed or edited will appear in red, and updates will appear in green.

Once you have verified that the changes to your integrations are correct, in the bottom right of the panel, click the green Deploy button.

Verifying Your Pipeline Deployment

If your pipeline successfully deployed, the blue Deploy Pipeline button will be grayed out and read “Deployed.” If the pipeline was not deployed, you will receive an error message. If you require help while deploying a pipeline, reach out to your Customer Success Manager.

Pipeline Health

The Pipeline Health indicator lets you know whether your Pipeline is in a deployable state or if there is an issue that needs to be corrected prior to the Pipeline being deployed. Health states include:

  • Healthy: The Pipeline is deployable.
  • Missing Variables: The Pipeline is missing one or more Pipeline Variables, which could result in missing parameters when events are sent to an integration.

Other errors you may encounter as you are deploying your pipeline include:

  • Unable to connect to cluster. This means that MetaRouter cannot access your cluster with the provided cluster URL or secret. Please reach out to your Customer Success Manager for assistance.

Bulk Deployment Actions

Bulk deployment actions help your organization make changes to many pipelines at once. In order to access bulk deployment actions, in the top right corner of the Pipelines page, click the Bulk Stage & Deploy button. On the next page, you will see two options.

Bulk Stage & Deploy

This action will allow you to change many pipelines staged with a revision version to a new revision version. This is helpful if you’ve made a change to an integration and need to implement that change across many pipelines.

To begin, click the Bulk Stage & Deploy button, and in the upper right-hand corner, click the Next Step button.

On the next screen you will see the Revisions tab, where you can designate which Integration you want to bulk stage and deploy. You will then designate the revision version you’re moving from, and which version you’d like to deploy. A pipeline will only be altered if it currently has the “from” revision version applied. You cannot move from revision versions that contain Pipeline Variables to versions that do not contain Pipeline Variables, or vice-versa.
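The rule that only pipelines on the “from” revision are changed can be sketched as follows. This is an illustrative model, not MetaRouter’s implementation, and the integration name `ga4` is hypothetical:

```javascript
// Illustrative sketch of the bulk-stage rule: only pipelines currently
// on the "from" revision of an integration are staged to the "to" revision.
function bulkStage(pipelines, integration, fromRev, toRev) {
  return pipelines.map((p) =>
    p.revisions[integration] === fromRev
      ? { ...p, revisions: { ...p.revisions, [integration]: toRev } }
      : p // pipelines on any other revision are left untouched
  );
}

const staged = bulkStage(
  [
    { name: "web", revisions: { ga4: 3 } },
    { name: "app", revisions: { ga4: 2 } },
  ],
  "ga4", 3, 4
);
console.log(staged[0].revisions.ga4); // 4 (was on revision 3, so it moved)
console.log(staged[1].revisions.ga4); // 2 (not on revision 3, so unchanged)
```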

In the second step, Pipelines, you will designate which pipelines you would like to apply the bulk action to. This will stage the integration changes to each pipeline selected.

After selecting your Pipelines, you will be taken to the final Confirm & Deploy step. Here, you can click the Deploy Diff button to see how each pipeline will change. Once you have verified your changes, click the Confirm & Deploy button to apply your pipeline changes. On the following screen, you will see which pipelines deployed successfully or encountered issues with the deployment.

Deploy Only

This action is helpful if you have already added (“staged”) integrations to many pipelines but have not yet deployed the pipelines. To Deploy Only, select this option within the Bulk Stage & Deploy menu, and click Next Step. On the next screen, select the Pipelines you’d like to deploy all staged integrations for. Again, click Next Step. Here, you can view the diffs for the integration changes. If all looks correct, click the Confirm & Deploy button. After deploying, you will see which pipelines deployed successfully or encountered issues with the deployment.

Pipeline Filters

Pipeline Filters let you allow or drop events from a pipeline when certain field conditions are met within the event. Filtering at the Pipeline level (as opposed to the integration level) streamlines the implementation of filters that should apply to an entire pipeline. It can also help reduce egress resources and cost for your organization (if you have deployed within your own cloud environment), since the data is filtered within each forwarder and never leaves the MetaRouter platform.

Supported Filter Use Cases

  • Filter on User Agent or IP Range parameters to limit bots
  • Filter on PII fields to prevent PII from reaching a destination
  • Filter on certain parameters to reduce event volume or avoid unwanted data reaching destinations.
  • Filter on custom fields that provide a key to filter on, e.g. a dedicated bot filtering service like PerimeterX.
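As a concrete example of the bot-limiting use case, a pipeline-level filter behaves like a predicate over event fields. The sketch below is purely illustrative; the actual filter syntax is configured by MetaRouter support, not written as code by you:

```javascript
// Hypothetical sketch of a pipeline-level filter: drop any event whose
// user agent matches a bot pattern. Field paths and the pattern are
// illustrative assumptions, not MetaRouter's filter configuration format.
const BOT_PATTERN = /bot|crawler|spider/i;

function shouldDrop(event) {
  const ua = (event.context && event.context.userAgent) || "";
  return BOT_PATTERN.test(ua);
}

console.log(shouldDrop({ context: { userAgent: "Googlebot/2.1" } })); // true
console.log(shouldDrop({ context: { userAgent: "Mozilla/5.0" } }));   // false
```

Because dropped events never leave the forwarder, no egress is incurred for them.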

Configuring Pipeline Filters

Pipeline Filters can only be applied by MetaRouter support on your behalf. Please reach out to your Customer Success Manager to discuss your Pipeline Filter use cases and have filters implemented. Alternatively, you can filter at the Integration level.

Using One AJS File for Many Pipelines


This feature is in Beta testing. Please reach out to the MetaRouter team prior to implementation.

It is possible to configure one analytics.js file and deploy it to multiple pipelines. This reduces the amount of file management required in scenarios where an organization has many pipelines configured, but the file configuration is exactly the same except for the writeKey and ingestion URL.

In order to implement a one-to-many relationship between files and pipelines, you must perform the following steps:

  1. Send a support ticket or reach out to your Customer Success Manager detailing your desire to use this setup. The MetaRouter team will enable this feature for your organization.
  2. Build your first Analytics.js file. The file will contain just one ingestion URL and one writeKey.
  3. During implementation, designate the writeKey and ingestion URL of the pipeline you’d like to use in the analytics.load function contained within the AJS snippet. The ingestion URL and writeKey included in the file itself will be overridden by the details included in the snippet.
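The steps above can be sketched as follows. This is a simplified model of the override behavior: the stub stands in for the shared AJS file, and the option name `host` is an assumption, not a confirmed MetaRouter API:

```javascript
// Illustrative sketch: one shared AJS file, with each page's snippet
// passing its own pipeline details to analytics.load. The stub below
// stands in for the analytics object the shared file normally defines,
// and the "host" option name is hypothetical.
const loadedPipelines = [];

const analytics = {
  load(writeKey, options) {
    // The snippet's writeKey and ingestion URL override whatever
    // was baked into the shared file.
    loadedPipelines.push({ writeKey, host: options && options.host });
  },
};

// Snippet on pipeline A's domain:
analytics.load("pipeline-a-writekey", { host: "https://ingest-a.example.com" });
// Snippet on pipeline B's domain, same shared file:
analytics.load("pipeline-b-writekey", { host: "https://ingest-b.example.com" });

console.log(loadedPipelines[0].writeKey); // "pipeline-a-writekey"
console.log(loadedPipelines[1].host);     // "https://ingest-b.example.com"
```

Each domain ships the same file; only the two arguments in its snippet differ.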

Keep in mind that this feature requires that domains use the same configuration options. If different configuration options are required per domain within an organization (e.g. if the owners of one specific domain would like to test a sync), you cannot make those changes without propagating the changes to the rest of the pipelines within your org. In this situation, you would need to maintain separate files for each pipeline.