Azure Synapse Analytics

Learn how to connect Azure Synapse Analytics as a source

GrowthLoop connects directly to Azure Synapse Analytics so you can leverage the customer data in your dedicated or serverless SQL pool to build audiences, journeys, and signal exports in minutes.

To establish the connection, you will walk through the following steps:

  1. Provision an Azure AD service principal that GrowthLoop can use to read from Synapse and write unload files to your Azure storage account.
  2. Create the storage account and container that will receive native CREATE EXTERNAL TABLE AS SELECT (CETAS) unload files.
  3. Set up the Synapse-side objects required for CETAS: a master key, a database scoped credential, an external data source, and an external file format.
  4. Allowlist GrowthLoop's egress IPs on the Synapse server firewall.
  5. Connect the warehouse via the GrowthLoop UI.

1. Create an Azure AD service principal

In the Azure portal, navigate to Microsoft Entra ID → App registrations, click New registration, and create an application named something like growthloop-synapse. Once created, capture three values from the Overview tab:

  • Tenant ID
  • Application (client) ID
  • A new client secret (created under Certificates & secrets). Save the secret value immediately; Azure shows it only once.

These three values authenticate GrowthLoop to Azure for both the Synapse connection and (optionally) the Azure storage account.
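These three values plug into the standard OAuth 2.0 client-credentials flow against the Microsoft Entra token endpoint. The sketch below shows how a client would compose that token request (the IDs and secret are placeholders; it builds the request but does not send it):

```python
from urllib.parse import urlencode

# Hypothetical values captured from the app registration's Overview tab.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "your-client-secret-value"


def entra_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build a client-credentials token request for the Entra token endpoint."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # Scope used for Azure SQL / Synapse SQL endpoints.
        "scope": "https://database.windows.net/.default",
    })
    return url, body


url, body = entra_token_request(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
```

A successful POST of this body returns a bearer token that can authenticate to the Synapse SQL endpoint.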

2. Provision the unload storage container

By default GrowthLoop unloads to a GrowthLoop-managed storage account. If you prefer to keep unload files inside your own subscription (bring your own bucket, or BYOB), create the following:

  • A storage account (Standard, LRS or ZRS is fine) with hierarchical namespace enabled (ADLS Gen2).
  • A container dedicated to GrowthLoop exports, e.g. growthloop-exports.

Grant the service principal the Storage Blob Data Contributor role on the container so it can read/write blob data.
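The role assignment above is scoped to the container's ARM resource ID. As a sketch, the scope string is composed like this (subscription, resource group, and account names are placeholders for your environment):

```python
def container_scope(subscription_id: str, resource_group: str,
                    storage_account: str, container: str) -> str:
    """ARM resource ID used as the scope of a container-level role assignment."""
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Storage/storageAccounts/{storage_account}"
        f"/blobServices/default/containers/{container}"
    )


# Hypothetical names for illustration.
scope = container_scope(
    "00000000-0000-0000-0000-000000000000",
    "growthloop-rg",
    "mystorageacct",
    "growthloop-exports",
)
```

Assigning Storage Blob Data Contributor at this scope, rather than on the whole account, limits the service principal to the export container.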

3. Synapse external objects

Connect to your Synapse SQL pool as an admin and run the following DDL once per database. Replace bracketed values to match your environment.

-- One-time per database: master key for database scoped credentials
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '[strong-random-password]';

-- Credential that Synapse uses to talk to the storage account
CREATE DATABASE SCOPED CREDENTIAL GrowthLoopAzureCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '[sas-token-without-leading-question-mark]';

-- External data source pointing at the unload container
CREATE EXTERNAL DATA SOURCE OutGoingProdSource
WITH (
    LOCATION    = 'abfss://growthloop-exports@[storage-account].dfs.core.windows.net',
    CREDENTIAL  = GrowthLoopAzureCredential,
    TYPE        = HADOOP
);

-- File format for the parquet payloads GrowthLoop unloads
CREATE EXTERNAL FILE FORMAT ParquetFF
WITH (
    FORMAT_TYPE      = PARQUET,
    DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
);

GrowthLoop's Synapse driver writes CETAS statements that reference DATA_SOURCE = [OutGoingProdSource] and FILE_FORMAT = [ParquetFF] by default. If you prefer different names, configure them on the source connection settings page in the GrowthLoop app once the connection has been created.
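To show how the external objects fit together, here is an illustrative builder for the kind of CETAS statement the driver issues. This is a sketch, not GrowthLoop's actual implementation; the table name, query, and location are invented:

```python
def build_cetas(table: str, select_sql: str, location: str,
                data_source: str = "OutGoingProdSource",
                file_format: str = "ParquetFF") -> str:
    """Compose a CETAS statement referencing the external objects (illustrative)."""
    return (
        f"CREATE EXTERNAL TABLE [{table}]\n"
        "WITH (\n"
        f"    LOCATION    = '{location}',\n"
        f"    DATA_SOURCE = [{data_source}],\n"
        f"    FILE_FORMAT = [{file_format}]\n"
        ")\n"
        "AS\n"
        f"{select_sql};"
    )


stmt = build_cetas(
    "export_audience_123",
    "SELECT customer_id, email FROM dbo.customers",
    "exports/audience_123/",
)
```

The LOCATION is relative to the external data source's root, so this example would land Parquet files under exports/audience_123/ in the growthloop-exports container.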

4. Allowlist GrowthLoop's egress IPs

In the Azure portal, open your Synapse workspace, go to Networking → Firewall rules, and add an inbound rule for each GrowthLoop egress IP that your Solutions Architect provided. Save the changes. Synapse enforces firewall rules only on the SQL endpoint, not on the storage account, so storage access is unaffected.

5. Connect Synapse in the GrowthLoop app

In the GrowthLoop app, choose Connect a data source → Azure Synapse. Fill in the form with:

  • Host: e.g. [workspace].sql.azuresynapse.net (dedicated) or [workspace]-ondemand.sql.azuresynapse.net (serverless)
  • Port: 1433
  • Database: the SQL pool name
  • Username / Password: either SQL auth or, for service principal auth, use the application (client) ID + client secret

Click Verify to confirm GrowthLoop can connect, then save.
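Under the hood these fields describe an ordinary SQL Server (TDS) connection. The sketch below composes an equivalent ODBC connection string, assuming the Microsoft ODBC Driver 18 for SQL Server and placeholder credentials:

```python
def synapse_odbc_conn_str(host: str, database: str,
                          username: str, password: str, port: int = 1433) -> str:
    """Compose an ODBC connection string for a Synapse SQL endpoint (sketch)."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{host},{port};"
        f"Database={database};"
        f"Uid={username};Pwd={password};"
        "Encrypt=yes;TrustServerCertificate=no;"
    )


# Hypothetical workspace, pool, and SQL-auth credentials.
conn_str = synapse_odbc_conn_str(
    "myworkspace.sql.azuresynapse.net",
    "sqlpool01",
    "growthloop_user",
    "example-password",
)
```

If Verify fails with a timeout rather than an authentication error, check the firewall rules from step 4 before re-checking credentials.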

If you want GrowthLoop to write unload files to your own Azure storage account, open the source connection settings page after creation and switch the storage platform to Azure Blob Storage. Provide the storage account, container, and either the service principal credentials, a SAS token, or a full connection string.

Troubleshooting

  • Login failed for user '<token-identified principal>': confirm your service principal was added as a SQL user in the database with at minimum db_datareader and (for snapshot/journey state tables) write permission on the GrowthLoop schema.
  • External file access failed: the database scoped credential or service principal does not have Storage Blob Data Contributor on the container, or the firewall on the storage account is blocking GrowthLoop's egress IPs.
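If the service principal is missing its database user, DDL along these lines creates it and grants the permissions described above. This is a sketch: the user name matches the app registration, and the schema name [growthloop] is an assumption to replace with the schema GrowthLoop manages in your database.

```sql
-- Run in the target database as an admin.
-- Contained user for the Entra service principal (name must match the app registration).
CREATE USER [growthloop-synapse] FROM EXTERNAL PROVIDER;

-- Read access across the database.
EXEC sp_addrolemember 'db_datareader', 'growthloop-synapse';

-- Write access on the schema GrowthLoop manages (schema name is an assumption).
GRANT INSERT, UPDATE, DELETE ON SCHEMA::[growthloop] TO [growthloop-synapse];
GRANT ALTER ON SCHEMA::[growthloop] TO [growthloop-synapse];
GRANT CREATE TABLE TO [growthloop-synapse];
```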

What's Next