Snowflake

Snowflake is a relational ANSI SQL data warehouse in the cloud. Thanks to its architecture designed specifically for the cloud, Snowflake offers a data warehouse that is faster, easier to use, and far more flexible than traditional data warehouses.

How to connect to Snowflake on Workato

The Snowflake connector uses a username and password to authenticate with Snowflake. If your Snowflake instance has network policies that restrict access based on IP address, you will need to whitelist Workato IP addresses to successfully create a connection.
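For illustration, a Snowflake account administrator could allow additional addresses by updating the account's network policy. The policy name and IP addresses below are placeholders, not actual Workato addresses:

-- Placeholder policy name and example IPs; substitute your policy name and the published Workato IP addresses
ALTER NETWORK POLICY my_network_policy SET ALLOWED_IP_LIST = ('203.0.113.10', '203.0.113.11');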

Snowflake connection

Field Description
Connection name Give this Snowflake connection a unique name that identifies which Snowflake instance it is connected to.
Account name

Account name of your Snowflake instance. Additional segments may be needed depending on the cloud platform (AWS or Azure) and the region where your Snowflake instance is hosted.

Account hosted in Full account name
AWS US west workatodemo
AWS US east workatodemo.us-east-1
AWS EU (Frankfurt) workatodemo.eu-central-1
AWS AP (Sydney) workatodemo.ap-southeast-2
Azure West Europe workatodemo.west-europe.azure
Find out more about Snowflake account naming.
Warehouse Name of the warehouse to use for performing all compute for this connection. See Warehouse considerations for more information.
Database Name of the Snowflake database you wish to connect to.
Username Username to connect to Snowflake.
Password Password to connect to Snowflake.
Schema Optional. Name of the schema within the Snowflake database you wish to connect to. Defaults to public.

Working with the Snowflake connector

Warehouse considerations

Snowflake utilizes per-second billing for all compute (loading, transforming, and querying). Here are some things to consider when setting up a warehouse for a Snowflake connection. There is a 60-second minimum charge each time a warehouse starts, so there is no advantage to suspending a warehouse within the first 60 seconds of resuming it.

When choosing the following warehouse properties, consider the frequency of and time between queries, the number of concurrent active recipes, and the complexity of each query.

Warehouse size

For most use cases, an X-Small warehouse is sufficient. A larger warehouse has more servers and does proportionally more work for its per-second cost. This means that a larger warehouse completes a query faster while consuming the same total number of credits.

If your use case involves long and complex queries, it is recommended to use a larger warehouse to prevent timeouts.
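As a minimal sketch, resizing a warehouse in Snowflake is a single statement. The warehouse name below is hypothetical:

-- WORKATO_WH is a placeholder warehouse name
ALTER WAREHOUSE WORKATO_WH SET WAREHOUSE_SIZE = 'MEDIUM';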

Multi-cluster warehouses

A multi-cluster warehouse with auto-scale enabled can start additional clusters (of the same size) to meet temporary load fluctuations.

Use a multi-cluster warehouse if you expect concurrent jobs or jobs with large queries. Learn more about multi-cluster warehouses.
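As an illustrative sketch, a multi-cluster warehouse with auto-scale can be created by giving it different minimum and maximum cluster counts. The warehouse name and counts below are assumptions:

-- WORKATO_WH is a placeholder name; auto-scale applies because MIN and MAX cluster counts differ
CREATE WAREHOUSE WORKATO_WH WITH
  WAREHOUSE_SIZE = 'XSMALL'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'STANDARD';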

Auto-suspend and auto-resume

Warehouses used in the connection must have auto-resume enabled. Otherwise, recipe jobs will fail when trying to run on a suspended warehouse.

Warehouses can be configured to auto-suspend after a period of inactivity. The appropriate period depends on your business process.

A running warehouse maintains a cache of table data. This reduces the time taken for subsequent queries if the cached data can be used instead of reading from the table again. A larger warehouse has a larger cache capacity. This cache is dropped when the warehouse is suspended. As a result, performance for initial queries on a warehouse that was auto-resumed will be slower.

Use cases with high query frequency and little downtime between queries benefit from a longer period of time before auto-suspend.
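As a sketch, both behaviors can be set on the warehouse used by the connection. The warehouse name and the 5-minute idle window are assumptions:

-- WORKATO_WH is a placeholder name; AUTO_SUSPEND is specified in seconds
ALTER WAREHOUSE WORKATO_WH SET
  AUTO_SUSPEND = 300
  AUTO_RESUME = TRUE;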

Tables and views

The Snowflake connector works with all tables and views available to the username used to establish the connection. These are available in pick lists in each trigger/action, or you can provide the exact name.

Select a table/view from the pick list

Provide the exact table/view name in a text field

WHERE condition

This input field is used to filter and identify rows to perform an action on. It is used in multiple triggers and actions in the following ways:

  • filter rows to be picked up in triggers
  • filter rows in Select rows action
  • filter rows to be deleted in Delete rows action

This clause will be used as a WHERE clause in each request and should follow basic SQL syntax.

Simple statements

String values must be enclosed in single quotes ('') and columns used must exist in the table/view.

A simple WHERE condition to filter rows based on values in a single column looks like this.

currency = 'USD'

If used in a Select rows action, this WHERE condition will return all rows that have the value 'USD' in the currency column. Just remember to wrap datapills with single quotes in your inputs.

Using datapills wrapped with single quotes in a WHERE condition
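Conditions can also be combined with AND and OR following standard SQL syntax. The amount column below is assumed for illustration:

currency = 'USD' AND amount > 1000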

Column identifiers with spaces must be enclosed in double quotes (""). For example, currency code must be enclosed in double quotes to be used as an identifier.

"currency code" = 'USD'

WHERE condition with column identifier enclosed in double quotes

Complex statements

Your WHERE condition can also contain subqueries. The example below selects inactive users from the DISTRIBUTORS table.

ID IN (SELECT DISTRIBUTOR_ID FROM USERS WHERE ACTIVE = FALSE)

When used in a Select rows action, this will select all rows in the DISTRIBUTORS table related to rows in the USERS table that are not active (ACTIVE = FALSE).

Using a subquery in a WHERE condition
