AWS Step Functions is a fully managed visual workflow service that lets you build complex data processing pipelines involving a diverse set of extract, transform, and load (ETL) technologies such as AWS Glue, Amazon EMR, and Amazon Redshift. You can visually build the workflow by wiring together individual data pipeline tasks and configuring payloads, retries, and error handling with minimal code.
While Step Functions supports automatic retries and error handling when data pipeline tasks fail due to momentary or transient errors, there can be permanent failures such as incorrect permissions, invalid data, and business logic failures during the pipeline run. These require you to identify the issue in the step, fix it, and restart the workflow. Previously, to rerun the failed step, you needed to restart the entire workflow from the very beginning. This leads to delays in completing the workflow, especially if it's a complex, long-running ETL pipeline. If the pipeline has many steps using map and parallel states, this also leads to increased cost due to the additional state transitions incurred by running the pipeline from the beginning.
Step Functions now supports the ability for you to redrive your workflow from a failed, aborted, or timed-out state so you can complete workflows faster and at a lower cost, and spend more time delivering business value. Now you can recover from unhandled failures faster by redriving failed workflow runs, after downstream issues are resolved, using the same input provided to the failed state.
In this post, we show you an ETL pipeline job that exports data from Amazon Relational Database Service (Amazon RDS) tables using the Step Functions distributed map state. Then we simulate a failure and demonstrate how to use the new redrive feature to restart the failed step from the point of failure.
Solution overview
One of the common functionalities involved in data pipelines is extracting data from multiple data sources and exporting it to a data lake or synchronizing the data to another database. You can use the Step Functions distributed map state to run hundreds of such export or synchronization jobs in parallel. Distributed map can read millions of objects from Amazon Simple Storage Service (Amazon S3) or millions of records from a single S3 object, and distribute the records to downstream steps. Step Functions runs the steps within the distributed map as child workflows at a maximum parallelism of 10,000. A concurrency of 10,000 is well above the concurrency supported by many other AWS services such as AWS Glue, which has a soft limit of 1,000 job runs per job.
The sample data pipeline sources product catalog data from Amazon DynamoDB and customer order data from an Amazon RDS for PostgreSQL database. The data is then cleansed, transformed, and uploaded to Amazon S3 for further processing. The data pipeline starts with an AWS Glue crawler to create the Data Catalog for the RDS database. Because starting an AWS Glue crawler is asynchronous, the pipeline has a wait loop to check if the crawler is complete. After the AWS Glue crawler is complete, the pipeline extracts data from the DynamoDB table and the RDS tables. Because these two steps are independent, they run as parallel steps: one using an AWS Lambda function to export, transform, and load the data from DynamoDB to an S3 bucket, and the other using a distributed map with the AWS Glue job sync integration to do the same from the RDS tables to an S3 bucket. Note that AWS Identity and Access Management (IAM) permissions are required for invoking an AWS Glue job from Step Functions. For more information, refer to IAM Policies for invoking an AWS Glue job from Step Functions.
The following diagram illustrates the Step Functions workflow.
There are multiple tables related to customer and order data in the RDS database. Amazon S3 hosts the metadata of all the tables as a .csv file. The pipeline uses the Step Functions distributed map to read the table metadata from Amazon S3, iterate on every single item, and call the downstream AWS Glue job in parallel to export the data.
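The distributed map portion of the state machine looks roughly like the following sketch. The state names, bucket, object key, and the `--table_name` CSV column are illustrative assumptions rather than the exact values from the sample stack; the Glue job name matches the job the CloudFormation template creates.

```json
{
  "ExportRDSTables": {
    "Type": "Map",
    "ItemReader": {
      "Resource": "arn:aws:states:::s3:getObject",
      "ReaderConfig": { "InputType": "CSV", "CSVHeaderLocation": "FIRST_ROW" },
      "Parameters": { "Bucket": "example-metadata-bucket", "Key": "tables.csv" }
    },
    "ItemProcessor": {
      "ProcessorConfig": { "Mode": "DISTRIBUTED", "ExecutionType": "STANDARD" },
      "StartAt": "ExportTableToS3",
      "States": {
        "ExportTableToS3": {
          "Type": "Task",
          "Resource": "arn:aws:states:::glue:startJobRun.sync",
          "Parameters": {
            "JobName": "ExportsTableData",
            "Arguments": { "--table_name.$": "$.table_name" }
          },
          "End": true
        }
      }
    },
    "End": true
  }
}
```

The `ItemReader` reads each row of the .csv file as one item, and the `.sync` integration pattern makes each child workflow wait for its Glue job run to finish.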
Prerequisites
To deploy the solution, you need the following prerequisites:
Launch the CloudFormation template
Complete the following steps to deploy the solution resources using AWS CloudFormation:
- Choose Launch Stack to launch the CloudFormation stack:
- Enter a stack name.
- Select all the check boxes under Capabilities and transforms.
- Choose Create stack.
The CloudFormation template creates many resources, including the following:
- The data pipeline described earlier as a Step Functions workflow
- An S3 bucket to store the exported data and the metadata of the tables in Amazon RDS
- A product catalog table in DynamoDB
- An RDS for PostgreSQL database instance with pre-loaded tables
- An AWS Glue crawler that crawls the RDS tables and creates an AWS Glue Data Catalog
- A parameterized AWS Glue job to export data from the RDS tables to an S3 bucket
- A Lambda function to export data from DynamoDB to an S3 bucket
Simulate the failure
Complete the following steps to test the solution:
- On the Step Functions console, choose State machines in the navigation pane.
- Choose the workflow named ETL_Process.
- Run the workflow with the default input.
Within a few seconds, the workflow fails at the distributed map state.
You can examine the map run errors by accessing the Step Functions workflow execution events for map runs and child workflows. In this example, you can identify that the exception is due to Glue.ConcurrentRunsExceededException from AWS Glue. The error indicates there are more concurrent requests to run an AWS Glue job than are configured. The distributed map reads the table metadata from Amazon S3 and invokes as many AWS Glue jobs as there are rows in the .csv file, but the AWS Glue job is set with a concurrency of 3 when it is created. This resulted in child workflow failures, cascading the failure to the distributed map state and then to the parallel state. The other step in the parallel state, which fetches the DynamoDB table, ran successfully. If any step in the parallel state fails, the whole state fails, as seen with the cascading failure.
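You can also inspect a failed map run programmatically with the DescribeMapRun API, which returns aggregate counts for the child workflows. A small sketch, assuming the helper names below (they are illustrative, not part of the sample solution):

```python
def summarize_execution_counts(execution_counts: dict) -> str:
    """Render a map run's aggregate child-workflow counts as a one-line summary."""
    failed = execution_counts.get("failed", 0) + execution_counts.get("timedOut", 0)
    total = execution_counts.get("total", 0)
    return f"{failed} of {total} child workflows did not succeed"


def describe_map_run_failures(map_run_arn: str) -> str:
    """Call DescribeMapRun and summarize the result (requires AWS credentials)."""
    import boto3  # imported lazily so the pure helper above works without the SDK

    sfn = boto3.client("stepfunctions")
    resp = sfn.describe_map_run(mapRunArn=map_run_arn)
    return summarize_execution_counts(resp["executionCounts"])
```

The map run ARN is shown on the execution's Map Run tab on the Step Functions console, or returned by the ListMapRuns API.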
Handle failures with distributed map
By default, when a state reports an error, Step Functions causes the workflow to fail. There are multiple ways you can handle this failure with the distributed map state:
- Step Functions allows you to catch errors, retry errors, and fall back to another state to handle errors gracefully.
- Sometimes, businesses can tolerate failures. This is especially true when you are processing millions of items and you expect data quality issues in the dataset. By default, when an iteration of the map state fails, all other iterations are aborted. With distributed map, you can specify the maximum number, or percentage, of failed items as a failure threshold. If the failures are within the tolerated level, the distributed map doesn't fail.
- The distributed map state allows you to control the concurrency of the child workflows. You can set the concurrency to match the AWS Glue job concurrency. Remember, this concurrency is applicable only at the workflow execution level, not across workflow executions.
- You can redrive the failed state from the point of failure after fixing the root cause of the error.
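The first three mechanisms can be sketched in a single distributed map definition. This is a minimal illustration under assumptions, not the exact definition from the sample stack; the retry values and the RecordFailure fallback state are placeholders:

```json
{
  "Type": "Map",
  "MaxConcurrency": 5,
  "ToleratedFailurePercentage": 10,
  "ItemProcessor": {
    "ProcessorConfig": { "Mode": "DISTRIBUTED", "ExecutionType": "STANDARD" },
    "StartAt": "ExportTableToS3",
    "States": {
      "ExportTableToS3": {
        "Type": "Task",
        "Resource": "arn:aws:states:::glue:startJobRun.sync",
        "Parameters": { "JobName": "ExportsTableData" },
        "Retry": [
          {
            "ErrorEquals": ["Glue.ConcurrentRunsExceededException"],
            "IntervalSeconds": 10,
            "MaxAttempts": 3,
            "BackoffRate": 2.0
          }
        ],
        "Catch": [
          { "ErrorEquals": ["States.ALL"], "Next": "RecordFailure" }
        ],
        "End": true
      },
      "RecordFailure": { "Type": "Pass", "End": true }
    }
  },
  "End": true
}
```

Here `MaxConcurrency` caps the child workflows, `ToleratedFailurePercentage` lets up to 10% of items fail without failing the map, and the `Retry`/`Catch` blocks handle errors inside each child workflow.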
Redrive the failed state
The root cause of the issue in the sample solution is the AWS Glue job concurrency. To address this and then redrive the failed state, complete the following steps:
- On the AWS Glue console, navigate to the job named ExportsTableData.
- On the Job details tab, under Advanced properties, update Maximum concurrency to 5.
With the launch of the redrive feature, you can restart executions of Standard Workflows that didn't complete successfully within the last 14 days. These include failed, aborted, or timed-out runs. You can only redrive a failed workflow from the step where it failed, using the same input as the last non-successful state. You can't redrive a failed workflow using a state machine definition that is different from the initial workflow execution. After the failed state is redriven successfully, Step Functions runs all the downstream tasks automatically. To learn more about how distributed map redrive works, refer to Redriving Map Runs.
Because the distributed map runs the steps inside the map as child workflows, the workflow IAM execution role needs permission to redrive the map run in order to restart the distributed map state.
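A minimal IAM statement granting that permission might look like the following; the Region, account ID, and state machine name are placeholders for your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["states:RedriveExecution"],
      "Resource": "arn:aws:states:us-east-1:123456789012:execution:ETL_Process/*"
    }
  ]
}
```

The trailing `/*` covers the child workflow executions that the map run starts under the state machine.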
You can redrive a workflow from its failed step programmatically, via the AWS Command Line Interface (AWS CLI) or AWS SDK, or by using the Step Functions console, which provides a visual operator experience.
- On the Step Functions console, navigate to the failed workflow you want to redrive.
- On the Details tab, choose Redrive from failure.
The pipeline now runs successfully because there is enough concurrency to run the AWS Glue jobs.
To redrive a workflow programmatically from its point of failure, call the new RedriveExecution API action. The same workflow starts from the last non-successful state and uses the same input as the last non-successful state from the initial failed workflow. The state to redrive from, the workflow definition, and the previous input are immutable.
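With the AWS SDK for Python, the call is a one-liner against the Step Functions client. A small sketch, assuming the helper names below are illustrative:

```python
# Statuses that are eligible for redrive (within the 14-day window).
REDRIVABLE_STATUSES = {"FAILED", "ABORTED", "TIMED_OUT"}


def is_redrivable(status: str) -> bool:
    """Only non-successful terminal statuses can be redriven."""
    return status in REDRIVABLE_STATUSES


def redrive(execution_arn: str):
    """Redrive a failed Standard workflow execution from its failed state."""
    import boto3  # imported lazily; requires AWS credentials at call time

    sfn = boto3.client("stepfunctions")
    return sfn.redrive_execution(executionArn=execution_arn)
```

The equivalent AWS CLI command is `aws stepfunctions redrive-execution --execution-arn <arn>`.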
Note the following about different types of child workflows:
- Redrive for express child workflows – For failed child workflows that are express workflows within a distributed map, the redrive capability ensures a seamless restart from the beginning of the child workflow. This allows you to resolve issues that are specific to individual iterations without restarting the entire map.
- Redrive for standard child workflows – For failed child workflows within a distributed map that are standard workflows, the redrive feature functions the same way as with standalone standard workflows. You can restart the failed state within each map iteration from its point of failure, skipping unnecessary steps that have already run successfully.
You can use Step Functions status change notifications with Amazon EventBridge for failure notifications, such as sending an email on failure.
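An EventBridge rule that matches non-successful executions of this pipeline could use an event pattern like the following; the state machine ARN is a placeholder:

```json
{
  "source": ["aws.states"],
  "detail-type": ["Step Functions Execution Status Change"],
  "detail": {
    "status": ["FAILED", "TIMED_OUT", "ABORTED"],
    "stateMachineArn": ["arn:aws:states:us-east-1:123456789012:stateMachine:ETL_Process"]
  }
}
```

You can attach an Amazon SNS topic as the rule target to send the email.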
Clean up
To clean up your resources, delete the CloudFormation stack via the AWS CloudFormation console.
Conclusion
In this post, we showed you how to use the Step Functions redrive feature to rerun a failed step within a distributed map by restarting it from the point of failure. The distributed map state allows you to write workflows that coordinate large-scale parallel workloads within your serverless applications. Step Functions runs the steps within the distributed map as child workflows at a maximum parallelism of 10,000, which is well above the concurrency supported by many AWS services.
To learn more about distributed map, refer to Step Functions – Distributed Map. To learn more about redriving workflows, refer to Redriving executions.
About the Authors
Sriharsh Adari is a Senior Solutions Architect at Amazon Web Services (AWS), where he helps customers work backwards from business outcomes to develop innovative solutions on AWS. Over the years, he has helped multiple customers with data platform transformations across industry verticals. His core areas of expertise include Technology Strategy, Data Analytics, and Data Science. In his spare time, he enjoys playing tennis.
Joe Morotti is a Senior Solutions Architect at Amazon Web Services (AWS), working with Enterprise customers across the Midwest US to develop innovative solutions on AWS. He has held a wide range of technical roles and enjoys showing customers the art of the possible. He has attained seven AWS certifications and has a passion for AI/ML and the contact center space. In his free time, he enjoys spending quality time with his family exploring new places and overanalyzing his sports team's performance.
Uma Ramadoss is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. She is responsible for helping customers design and operate event-driven cloud-native applications and modern business workflows using services like Lambda, EventBridge, Step Functions, and Amazon MWAA.