# Snowflake - Bulk Load from external stage action
Load a file from an external stage (an Amazon S3 bucket) into a target table. This action uses Snowflake's COPY command to load data directly from the external source into the target table.

This action executes the load and waits for completion before moving on to the next step. Load time depends on the size of the source file, the number of columns, any additional validation on the target table, and network speed (loading is faster from S3 into a Snowflake instance deployed on AWS). As a rough guide, a 1 GB CSV file with 30 columns and 3 million rows takes about 60 seconds.
The source file can contain data in CSV, JSON, Parquet, or other supported structured and semi-structured formats.
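Under the hood, the action issues a single COPY INTO statement against the target table. The sketch below is illustrative only — the table name `staging_orders`, stage name `my_s3_stage`, and file name `orders.csv` are placeholder assumptions, not names from this action:

```sql
-- Load a CSV file from an external S3 stage into a staging table.
-- @my_s3_stage is an external stage already configured with the bucket
-- URL, AWS credentials, and file format (all names here are examples).
COPY INTO staging_orders
  FROM @my_s3_stage/orders.csv
  ON_ERROR = 'ABORT_STATEMENT';  -- abort the load on the first error
```

The action waits for this statement to complete and surfaces the statement's result (rows parsed, rows loaded, error details, and so on) as its output fields.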
# Load data from an Amazon S3 bucket to a table
Bulk Load from Amazon S3 action
# Input fields
|Input field|Description|
|---|---|
|Table|Select a target table to load data into. Typically, this is a staging table; rows in this table are subsequently merged into your production table.|
|Stage|Select an existing external stage that points to an Amazon S3 bucket. If no file is specified in this stage, all new files are loaded. The external stage contains the file location, AWS credentials, encryption settings, and file format details. Learn how to create an S3 external stage.|
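For reference, an S3 external stage of the kind this action expects can be created with Snowflake's CREATE STAGE command. This is a minimal sketch — the stage name, bucket URL, credentials, and file format settings are all placeholders you would replace with your own values:

```sql
-- Example external stage pointing at an S3 bucket (all values are placeholders).
CREATE OR REPLACE STAGE my_s3_stage
  URL = 's3://my-bucket/data/'
  CREDENTIALS = (AWS_KEY_ID = '<aws_key_id>' AWS_SECRET_KEY = '<aws_secret_key>')
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```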
# Output fields
|Output field|Description|
|---|---|
|Bucket URL|Relative path and name of the source file.|
|Rows parsed|Number of rows read from the source file.|
|Rows loaded|Number of rows successfully loaded from the source file into the target table.|
|Error limit|If the number of errors reaches this limit, the load is aborted. This is typically 0, meaning the load aborts on the first error.|
|Errors seen|Number of rows in the source file that contained errors.|
|First error|Details of the first error encountered in the source file.|
|First error line|Line number of the first row that caused an error.|
|First error character|Position of the first character that caused an error.|
|First error column name|Name of the column where the first error occurred.|
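If you need these same details after the action has run, Snowflake also records them in load history. A hedged example using the `INFORMATION_SCHEMA.COPY_HISTORY` table function (the table name `STAGING_ORDERS` is a placeholder):

```sql
-- Review recent bulk loads into a table, including rows parsed/loaded
-- and first-error details (table name is a placeholder).
SELECT file_name, row_parsed, row_count, error_count, error_limit,
       first_error_message, first_error_line_number,
       first_error_character_pos, first_error_column_name, status
FROM TABLE(information_schema.copy_history(
       TABLE_NAME => 'STAGING_ORDERS',
       START_TIME => DATEADD(hour, -1, CURRENT_TIMESTAMP())));
```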