# Amazon S3 trigger - New CSV file
Triggers when a CSV file is added to a selected bucket/folder in Amazon S3.
The trigger checks the selected folder for new or updated CSV files once every poll interval. The output includes the file's metadata and its contents, which are CSV rows delivered in batches.
Note that in Amazon S3, renaming a file creates a new file. Uploading a file that overwrites an existing file with the same name produces an updated file, not a new file.
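The new-vs-updated distinction follows from how S3 listings work: a rename changes the object key, while an overwrite keeps the key but changes the ETag. The sketch below is an illustrative model of this behavior, not Workato's implementation; the keys and ETags are hypothetical examples.

```python
def classify_changes(previous, current):
    """Compare two {key: etag} snapshots of a bucket listing.

    A key absent from the previous snapshot is "new" (this is why a
    renamed file counts as new: its key changes). A key whose ETag
    changed since the last poll is "updated" (an overwrite that kept
    the same name).
    """
    new_files = [k for k in current if k not in previous]
    updated_files = [k for k in current
                     if k in previous and current[k] != previous[k]]
    return new_files, updated_files

# Example: reports.csv was overwritten, archive/2023.csv was added.
prev = {"reports.csv": "etag-a1", "data.csv": "etag-b2"}
curr = {"reports.csv": "etag-a9", "data.csv": "etag-b2",
        "archive/2023.csv": "etag-c3"}
new, updated = classify_changes(prev, curr)
# new == ["archive/2023.csv"], updated == ["reports.csv"]
```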
*Amazon S3 - New CSV file trigger*
# Input fields
| Field | Description |
| --- | --- |
| When first started, this recipe should pick up events from | When the recipe starts for the first time, it picks up CSV files created from this specified time. After the recipe has been run or tested, this value cannot be changed. Learn more about this field here. |
| Region | The region of the bucket to monitor for new/updated files, e.g. us-west-2. In Amazon S3, go to Bucket > Properties > Static website hosting to find your region in the Endpoint URL. |
| Bucket | The bucket to monitor for new CSV files. Select a bucket from the picklist or enter the bucket name directly. |
| Column separator | The delimiter separating the columns in the CSV file. |
| Folder path | The folder to monitor for new CSV files. Define the full path (e.g. folder 1/subfolder 1). Sub-folders will not be monitored. The root folder or restricted folder is used by default. |
| Include files not ending with .csv? | Handles cases where CSV files exported from other systems may not have a .csv extension. |
| Column names | The column names of the CSV file. You can define the column names manually, with one column header per line. |
| Batch size | Workato divides the CSV file into smaller batches to process it more efficiently. This field defines the number of CSV rows to process in each batch (maximum of 1000 rows/batch). Use a larger batch size to increase data throughput. In some cases, Workato automatically reduces the batch size to avoid exceeding API limits. Learn more about Batch Processing. |
| Skip header row? | Select Yes if the CSV file contains a header row. Workato will not process that row as data. |
This trigger supports Trigger Condition, which allows you to filter trigger events.
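To illustrate how the Column separator, Skip header row?, and Batch size fields interact, here is a minimal sketch in standard-library Python. It is not Workato's internal code; the sample data and default values are illustrative.

```python
import csv
import io

def csv_batches(text, separator=",", skip_header=True, batch_size=1000):
    """Split CSV text into batches of rows.

    separator   -- the column separator (Column separator field)
    skip_header -- drop the first row so it is not processed as data
                   (Skip header row? field)
    batch_size  -- rows per batch, capped at 1000 in the trigger
                   (Batch size field)
    """
    rows = list(csv.reader(io.StringIO(text), delimiter=separator))
    if skip_header and rows:
        rows = rows[1:]  # the header row is not treated as data
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

# Semicolon-separated file with a header row, processed in batches of 2.
sample = "id;name\n1;alice\n2;bob\n3;carol\n"
batches = list(csv_batches(sample, separator=";", batch_size=2))
# Two batches: [["1", "alice"], ["2", "bob"]] and [["3", "carol"]]
```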
# Output fields
| Field | Description |
| --- | --- |
| File: Object name | Full name of the file. |
| File: Last modified | Last modified timestamp of the file. |
| File: ETag | The hash of the file object, generated by Amazon S3. |
| File: Size | The file size in bytes. |
| File: Storage class | Storage class of the file object, usually S3 Standard. |
| Rows: Line | The line number of this CSV row. |
| Rows: Columns | Contains all column values in this CSV row. You can use the nested datapills to map each column value. |
| List size | The number of rows in this list of CSV rows. |
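To make the output structure concrete, here is a hypothetical example of the datatree this trigger produces, matching the fields above. All key names and values are illustrative, not real S3 data.

```python
# Hypothetical shape of one trigger event: file metadata plus the
# batch of CSV rows, with list_size counting the rows in the batch.
output = {
    "file": {
        "object_name": "folder 1/orders.csv",
        "last_modified": "2023-05-01T12:00:00Z",
        "etag": "9b2cf535f27731c974343645a3985328",
        "size": 2048,
        "storage_class": "STANDARD",
    },
    "rows": [
        {"line": 1, "columns": {"order_id": "1001", "amount": "25.00"}},
        {"line": 2, "columns": {"order_id": "1002", "amount": "17.50"}},
    ],
    "list_size": 2,
}
```

Each entry in `rows` carries its line number and the nested column values that the datapills expose for mapping.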