Amazon Redshift is a fast and fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools.
How to connect to Redshift on Workato
The Redshift connector uses basic authentication to authenticate with Redshift.
|Field|Description|
|---|---|
|Connection name|Give this Redshift connection a unique name that identifies which Redshift instance it is connected to.|
|JDBC URL|Full JDBC URL of your Redshift instance.|
|Username|Username to connect to Redshift.|
|Password|Password to connect to Redshift.|
|Schema|Name of the schema within the Redshift database you wish to connect to. Defaults to public.|
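For reference, a Redshift JDBC URL follows the format `jdbc:redshift://<endpoint>:<port>/<database>`. The cluster endpoint and database name below are illustrative placeholders, not real values (5439 is Redshift's default port):

```
jdbc:redshift://example-cluster.abc123xyz789.us-east-1.redshift.amazonaws.com:5439/dev
```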
Working with the Redshift connector
Table and view
The Redshift connector works with all tables and views. These are available in pick lists in each trigger/action, or you can provide the exact name.

- Select a table/view from the pick list
- Provide the exact table/view name in a text field
In Redshift, unquoted identifiers are case-insensitive. Thus,

```sql
SELECT ID FROM USERS
```

is equivalent to

```sql
SELECT ID FROM users
```

However, quoted identifiers are case-sensitive. Hence,

```sql
SELECT ID FROM "USERS"
```

is NOT equivalent to

```sql
SELECT ID FROM "users"
```
Single row vs batch of rows
The Redshift connector can read from or write to your database either one row at a time or in batches. When using batch triggers/actions, you must specify the batch size you wish to work with. The batch size can be any number between 1 and 100, with 100 being the maximum.
Batch trigger inputs
Besides the difference in input fields, the outputs of these two types of operations also differ. A trigger that processes rows one at a time has an output datatree that allows you to map data from that single row.
Single row output
However, a trigger that processes rows in batches will output them as an array of rows. The Rows datapill indicates that the output is a list containing data for each row in that batch.
Batch trigger output
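As an illustrative sketch, the datatree of a batch trigger's output is a list of row objects under the Rows datapill; the column names here are hypothetical, and the actual columns will match your table:

```json
{
  "rows": [
    { "id": 1, "currency": "USD" },
    { "id": 2, "currency": "EUR" }
  ]
}
```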
As a result, the output of batch triggers/actions needs to be handled differently. This recipe uses a batch trigger for new rows in the `users` table. The output of the trigger is used in a Salesforce bulk upsert action, which requires mapping the Rows datapill into the source list.
Using batch trigger output
WHERE condition

This input field is used to filter and identify rows to perform an action on. It is used in multiple triggers and actions in the following ways:
- filter rows to be picked up in triggers
- filter rows in Select rows action
- filter rows to be deleted in Delete rows action
This clause will be used as a `WHERE` statement in each request and should follow basic SQL syntax. Refer to the Redshift documentation for a full list of rules for writing Redshift SQL statements.
|Operator|Description|
|---|---|
|>=|Greater than or equal to|
|<=|Less than or equal to|
|IN(...)|List of values|
|LIKE|Pattern matching with wildcard characters (`%` and `_`)|
|IS NULL|NULL values check|
|IS NOT NULL|Non-NULL values check|
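For illustration, several of the operators above can be combined in one `WHERE` condition; the column names here are hypothetical:

```sql
created_at >= '2023-01-01'
AND currency IN ('USD', 'EUR')
AND email LIKE '%@example.com'
AND deleted_at IS NOT NULL
```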
String values must be enclosed in single quotes (`''`), and columns used must exist in the table. A simple `WHERE` condition to filter rows based on values in a single column looks like this:

```sql
currency = 'USD'
```
If used in a Select rows action, this `WHERE` condition will return all rows that have the value 'USD' in the `currency` column. Just remember to wrap datapills with single quotes in your inputs.
Using datapills in `WHERE` condition
Column names with spaces must be enclosed in double quotes (`""`). For example, currency code must be enclosed in double quotes to be used as an identifier. Note that all quoted identifiers are case-sensitive.

```sql
"currency code" = 'USD'
```
In a recipe, remember to use the appropriate quotes for each value/identifier.
WHERE condition with enclosed identifier
The `WHERE` condition can also contain subqueries. The following query can be used on the `users` table:

```sql
id in (select user_id from tickets where priority = 2)
```
When used in a Delete rows action, this will delete all rows in the `users` table where at least one associated row in the `tickets` table has a value of 2 in the `priority` column.
Using datapills in `WHERE` condition with subquery