# Snowflake
SUMMARY
- To connect to Snowflake on Workato, you can use OAuth 2.0 or key-pair authentication.
- The Snowflake connector supports operations such as SELECT, INSERT, UPDATE, and DELETE, among others.
- Create custom roles to restrict Workato's access to only the Snowflake objects you want to build recipes with.
- You can use parameters in conjunction with `WHERE` conditions for security against SQL injection.
Snowflake is a relational ANSI SQL data warehouse built for the cloud. Its unique architecture enables faster performance, ease of use, and greater flexibility than traditional data warehouses.
# Use cases
Integrate Snowflake with your existing apps to build powerful data workflows. You can automate data validation, improve lead tracking, and enhance decision-making with the Snowflake connector. Explore our use cases to discover how you can streamline your Snowflake processes:
- Validate Salesforce contact information with JavaScript and upsert the data to Snowflake.
- Create Salesforce tasks and Snowflake rows from Marketo lead activity.
# Supported authentication methods
Workato supports the following authentication methods for Snowflake connections:

- OAuth 2.0
- Key pair (private key)
- Username/password (deprecated)
# Configure OAuth 2.0 authentication
You must create a custom integration in Snowflake to generate a client ID and secret to use OAuth 2.0 for a Snowflake connection.
Create a custom integration
Run the following SQL command in Snowflake to register a custom OAuth integration:
```sql
CREATE SECURITY INTEGRATION WORKATO_OAUTH
  TYPE = OAUTH
  ENABLED = TRUE
  OAUTH_CLIENT = CUSTOM
  OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
  OAUTH_REDIRECT_URI = 'https://www.workato.com/oauth/callback'
  OAUTH_ISSUE_REFRESH_TOKENS = TRUE;
```
REQUIRED PRIVILEGE
Use the `ACCOUNTADMIN` role or a role with the global `CREATE INTEGRATION` privilege to run this command.
Retrieve the client ID and secret
Run the following command to retrieve your client credentials:
```sql
SELECT SYSTEM$SHOW_OAUTH_CLIENT_SECRETS('WORKATO_OAUTH');
```
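Per Snowflake's documentation, this function returns a JSON string containing the OAUTH_CLIENT_ID, OAUTH_CLIENT_SECRET, and OAUTH_CLIENT_SECRET_2 fields; use the client ID and either secret in the Workato connection setup.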
Refer to the Snowflake documentation for more information.
Complete the connection setup in Workato. Use the Client ID and Client secret values you retrieved from Snowflake.
# Configure private key authentication
Use private key authentication to connect to Snowflake without a password. This method uses a client-generated RSA key pair and provides strong security. You must use a command-line tool to generate your key pair.
Complete the following steps to configure your Snowflake user and connect it to Workato:
Open Terminal, iTerm2, or another command-line interface (CLI) tool of your choice.
Generate a private key
Run one of the following commands to generate an unencrypted private key:
```bash
openssl genrsa -out rsa_key.pem 2048
```
or
```bash
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt
```
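Depending on your OpenSSL version, the first command may produce a PKCS#1 key (a file beginning with -----BEGIN RSA PRIVATE KEY-----) rather than the PKCS#8 format the connector expects. If so, you can convert it; a sketch that reuses the rsa_key.pem file name from above:

```bash
# Convert an unencrypted PKCS#1 private key to PKCS#8 format
openssl pkcs8 -topk8 -inform PEM -in rsa_key.pem -out rsa_key.p8 -nocrypt
```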
Run the following command to generate an encrypted private key using PKCS#12 encryption:
```bash
openssl genrsa 2048 | openssl pkcs8 -topk8 -v1 <ALGORITHM> -inform PEM -out rsa_key.p8
```
You can use the following algorithms with the `-v1` option:

- PBE-SHA1-RC2-40
- PBE-SHA1-RC4-40
- PBE-SHA1-RC2-128
- PBE-SHA1-RC4-128
- PBE-SHA1-3DES
- PBE-SHA1-2DES
Run the following command to use stronger encryption algorithms:
```bash
openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 <ALGORITHM> -inform PEM -out rsa_key.p8
```
You can use the following supported `-v2` algorithms:

- AES128
- AES256
- DES3
Generate a public key
Run the following command to create a public key from your private key:
```bash
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
```
Assign the public key to your Snowflake user
Run the following SQL command in a Snowflake worksheet:
```sql
ALTER USER <USERNAME> SET RSA_PUBLIC_KEY='<PUBLIC_KEY>';
```
This assigns the public key to your Snowflake user. Replace `<USERNAME>` with your Snowflake username and `<PUBLIC_KEY>` with the key content, excluding the `-----BEGIN PUBLIC KEY-----` and `-----END PUBLIC KEY-----` delimiters.
ROLE REQUIREMENT
You must have the `SYSADMIN` or `SECURITYADMIN` role to execute this command.
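To confirm the key was assigned correctly, you can compute your public key's fingerprint locally and compare it with the value Snowflake stores, as described in Snowflake's key-pair documentation. A sketch that reuses the rsa_key.pub file generated above:

```bash
# Compare this base64 output with the RSA_PUBLIC_KEY_FP property returned by
# DESC USER <USERNAME>; in Snowflake (the stored value is prefixed with SHA256:)
openssl rsa -pubin -in rsa_key.pub -outform DER | openssl dgst -sha256 -binary | openssl enc -base64
```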
Complete the connection setup in Workato. Use the Snowflake User name, the Private key in PKCS#8 format, and the Private key passphrase if the key is encrypted.
# Configure Username/Password authentication
Workato doesn't recommend this method. Snowflake plans to remove support for single-factor password authentication in November 2025. Use this method only to maintain existing connections.
# Connect to Snowflake on Workato
SNOWFLAKE USERNAME/PASSWORD DEPRECATION
Snowflake plans to deprecate single-factor password authentication for users by November 2025. Refer to Snowflake's official deprecation announcement for more information.
We strongly encourage you to migrate all existing Username/Password connections to OAuth 2.0 or Key-pair authentication before this date. Existing Username/Password connections will remain operational until the deprecation date.
The Snowflake connector supports OAuth 2.0, private key, and username/password authentication. If your Snowflake instance has network policies that restrict access based on IP address, you must add Workato IP addresses to the allowlist to successfully create a connection.
Complete the following steps to connect to Snowflake:
Select Create > Connection.
Search for and select Snowflake on the New connection page.
Provide a name for your connection in the Connection name field.
Use the Location drop-down menu to select the project where you plan to store the connection.
Enter the Account identifier of your Snowflake instance in one of the supported formats:

- Account name: https://{orgname}-{account_name}
- Connection name: https://{orgname}-{connectionname}
- Account locator: https://{accountlocator}.{region}.{cloud}
Refer to the Snowflake Connecting to your accounts guide for more information.
ACCOUNT LOCATOR FORMAT
Certain locations require you to include the {region} and {cloud} in your account locator URL. For example:

- AWS US West (Oregon): your-account-locator
- AWS US East (Ohio): your-account-locator.us-east-2
- Azure West Europe: your-account-locator.west-europe.azure

Refer to the Using an account locator as an identifier guide for more information.
Enter the Warehouse name to define the compute resources for this connection. Refer to the Warehouse considerations section for more information.
Enter the Database name for the target Snowflake database.
Select an Authentication type for your Snowflake connection. Refer to the Snowflake connector authentication options section for configuration steps.
Optional. Specify a Role for authentication. This role must be an existing role already granted to the user. If left blank, Snowflake uses the user's default role. To restrict Workato's access to specific objects, grant the user a custom role, as shown in the example after these steps.
Optional. Enter the Schema. If left blank, the default schema (PUBLIC) is used.
Optional. Set Use improved datetime handling (Recommended) to Yes to ensure correct timezone handling for timestamps.
Optional. Define the Database timezone to apply to timestamps without an assigned timezone.
Click Connect to verify the connection.
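If you create a custom role to restrict Workato's access (as recommended in the summary), the role must have usage on the warehouse, database, and schema, plus privileges on the objects your recipes use. A minimal sketch with hypothetical object names (WORKATO_ROLE, WORKATO_WH, ANALYTICS_DB, and a CUSTOMERS table); adjust the grants to the operations your recipes actually perform:

```sql
-- Create a dedicated role for Workato connections
CREATE ROLE WORKATO_ROLE;

-- Allow the role to use the warehouse and reach the database and schema
GRANT USAGE ON WAREHOUSE WORKATO_WH TO ROLE WORKATO_ROLE;
GRANT USAGE ON DATABASE ANALYTICS_DB TO ROLE WORKATO_ROLE;
GRANT USAGE ON SCHEMA ANALYTICS_DB.PUBLIC TO ROLE WORKATO_ROLE;

-- Grant only the DML privileges your recipes need, on selected tables
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE ANALYTICS_DB.PUBLIC.CUSTOMERS TO ROLE WORKATO_ROLE;

-- Assign the role to the user Workato connects as
GRANT ROLE WORKATO_ROLE TO USER <USERNAME>;
```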
# Working with the Snowflake connector
# Warehouse considerations
Snowflake uses per-second billing for all compute (loading, transforming, and querying). Consider the following when setting up a warehouse for a Snowflake connection. There is a 60-second minimum charge each time a warehouse starts, so there is no advantage to suspending a warehouse within the first 60 seconds after it resumes.
When choosing the following warehouse properties, consider the frequency of and time between queries, the number of concurrent active recipes, and the complexity of each query.
# Warehouse size
For most use cases, an X-Small warehouse is sufficient. A larger warehouse has more servers and performs proportionally more work for the same per-second cost. This means a larger warehouse completes a query faster while consuming the same number of credits.
If your use case involves long, complex queries, use a larger warehouse to prevent timeouts.
# Multi-cluster warehouses
A multi-cluster warehouse with auto-scale enabled can start additional clusters (of the same size) to meet temporary load fluctuations.
Use a multi-cluster warehouse if you expect concurrent jobs or jobs with large queries. Learn more about multi-cluster warehouses.
# Auto-suspend and auto-resume
Warehouses used in the connection must have auto-resume enabled. Otherwise, recipe jobs will fail when trying to run on a suspended warehouse.
Warehouses can be configured to auto-suspend after a period of inactivity. The appropriate period depends on your business process.
A running warehouse maintains a cache of table data, which reduces the time taken for subsequent queries when the cached data can be used instead of reading from the table again. A larger warehouse has a larger cache capacity. The cache is dropped when the warehouse is suspended, so initial queries on a warehouse that has just auto-resumed are slower.
Use cases with frequent queries and little downtime between them benefit from a longer auto-suspend period.
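You can configure both properties in SQL. A minimal sketch, assuming a hypothetical warehouse named WORKATO_WH and a five-minute idle period:

```sql
-- AUTO_SUSPEND is specified in seconds. AUTO_RESUME must remain TRUE,
-- otherwise recipe jobs fail when the warehouse is suspended.
ALTER WAREHOUSE WORKATO_WH SET AUTO_SUSPEND = 300 AUTO_RESUME = TRUE;
```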
# Database timezone
Snowflake supports the TIMESTAMP_NTZ data type ("wallclock" time information without a timezone). This creates a challenge when integrating with external systems: when sending data to or reading data from these columns, the values must be converted using a default timezone so that the APIs of other applications can process them accurately.
Select the timezone that your database operates in. This timezone is applied only to TIMESTAMP_NTZ columns. Other timestamp columns with explicit timezone values are unaffected.
If left blank, your Workato account timezone will be used instead.
When writing timestamp values to these columns in a table, they are first converted to the specified timezone (DST offset is applied when applicable). Then, only the "wallclock" value is stored.
When reading timestamp without timezone values in a table, the timestamp is assigned the selected timezone and processed as a timestamp with timezone value. DST offset will be applied whenever applicable.
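For example, suppose the database timezone is set to America/Los_Angeles (a hypothetical choice for illustration). Writing the value 2025-06-01T10:00:00-04:00 to a TIMESTAMP_NTZ column first converts it to 2025-06-01T07:00:00-07:00 (the DST offset applies in June), then stores only the wallclock value 2025-06-01 07:00:00. Reading that value back assigns the selected timezone, producing 2025-06-01T07:00:00-07:00.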
# Table and view
The Snowflake connector works with all tables and views available to the username used to establish the connection. These are available in pick lists in each trigger/action, or you can provide the exact name.
# Single row vs batch of rows
The Snowflake connector can read from or write to your database either as a single row or in batches. When using batch triggers/actions, you must provide the batch size you wish to work with. The batch size can be any number between 1 and 100, with 100 being the maximum.
Besides the difference in input fields, there is also a difference between the outputs of these two types of operations. A batch trigger returns a list datapill (that is, an array of rows).
The Rows datapill indicates that the output is a list containing data for each row in that batch.
However, a trigger that processes rows one at a time will have an output datatree that allows you to map data from that single row.
As a result, the output of batch triggers/actions must be handled differently. You can use the trigger output in actions with batch operations (like the Salesforce Create objects in bulk action), which require mapping the Rows datapill into the source list. Learn how to work with lists in Workato.
# WHERE condition
This input field is used to filter and identify rows to perform an action on. It is used in multiple triggers and actions in the following ways:
- filter rows to be picked up in triggers
- filter rows in Select rows action
- filter rows to be deleted in Delete rows action
TIP
Examples below showcase how to use WHERE conditions directly with user input. For added security, use WHERE conditions with parameters to prevent SQL injection. Learn more in the Using parameters section below.
This clause will be used as a `WHERE` statement in each request and should follow basic SQL syntax.
# Simple statements
String values must be enclosed in single quotes (`''`), and the columns used must exist in the table/view.
A simple `WHERE` condition to filter rows based on values in a single column looks like this:

```sql
email = '[email protected]'
```
If used in a Select rows action, this `WHERE` condition returns all rows that have the value '[email protected]' in the `email` column. Remember to wrap datapills with single quotes in your inputs.
If your column names contain spaces, enclose them in double quotes (`""`) to use them as identifiers. For example, a column named email address must be enclosed in double quotes:

```sql
"email address" = '[email protected]'
```
# Complex statements
Your `WHERE` condition can also contain subqueries. The example below selects active users from the USERS table:

```sql
ID IN (SELECT USER_ID FROM USERS WHERE ACTIVE = TRUE)
```
When used in a Select rows action, this selects all rows from your main query that are related to active rows (`ACTIVE = TRUE`) in the USERS table.
# Using Parameters
Parameters are used in conjunction with `WHERE` conditions to add an additional layer of security against SQL injection. To use parameters in your `WHERE` conditions, you first need to declare bind variables in your input. Bind variables must be declared in the form `:bind_variable`, where the variable name is preceded by a colon (`:`). After this is done, declare each parameter in the section directly below using the exact name you gave it.
TIP
Bind variables should only be used in place of column values and not column names.
You can provide as many bind variables as you'd like, and each one must have a unique name. Workato distinguishes bind variables from column names and static values by ignoring any `:` inside single quotes (`'`), double quotes (`"`), and square brackets (`[]`).
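For example, the following `WHERE` condition uses two bind variables (the email and created_at column names are hypothetical; substitute columns from your own table):

```sql
email = :email AND created_at > :signup_date
```

In the parameters section below the condition, declare email and signup_date and map each to a datapill or static value. Because the values are bound rather than concatenated into the SQL string, malicious input cannot change the structure of the query.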
# Additional Notes
# Handling special characters
When column names are defined with special characters, datapills for such columns are not mapped correctly to the original column and return `null`. The most common such characters are whitespace and hyphens (`-`), as in "column name" and "column-name".
There are several workarounds available for such situations.
# Workaround 1
Use an appropriate alias in the SQL statement. Datapills are then mapped correctly to the alias.

```sql
SELECT ID, NAME,
  "whitespace column" AS whitespace_column,
  "dashed-column" AS dashed_column
FROM CUSTOMERS
```
# Workaround 2
Access the value directly using the row index and column name.
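For example, in formula mode you can index into the list output and reference the column by its exact name, such as `Rows[0]['column-name']` (the `Rows` datapill and column name here are hypothetical). Workato formulas follow Ruby syntax, so list indexing and hash access by key both work.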