dbt is an open source, SQL-first templating engine that allows you to write repeatable and extensible data transforms in Python and SQL. dbt focuses on the transform layer of extract, load, transform (ELT) or extract, transform, load (ETL) processes across data warehouses and databases, using specific engine adapters to achieve extract and load functionality. It allows data engineers, data scientists, and analytics engineers to define business logic with SQL select statements and eliminates the need to write boilerplate data manipulation language (DML) and data definition language (DDL) expressions. dbt lets data engineers quickly and collaboratively deploy analytics code following software engineering best practices like modularity, portability, continuous integration and continuous delivery (CI/CD), and documentation.
dbt is predominantly used by data warehouse customers (for example, on Amazon Redshift) who want to keep their data transform logic separate from storage and engine. We have seen strong customer demand to expand its scope to cloud-based data lakes, because data lakes are increasingly the enterprise solution for large-scale data initiatives due to their power and capabilities.
In 2022, AWS published a dbt adapter called dbt-glue—the open source, battle-tested dbt AWS Glue adapter that allows data engineers to use dbt for cloud-based data lakes along with data warehouses and databases, paying for just the compute they need. The dbt-glue adapter democratized access to data lakes for dbt users, and enabled many users to effortlessly run their transformation workloads on the cloud with the serverless data integration capability of AWS Glue. Since the launch of the adapter, AWS has continued investing in dbt-glue to cover more requirements.
Today, we're pleased to announce that the dbt-glue adapter is now a trusted adapter, based on our strategic collaboration with dbt Labs. Trusted adapters are adapters not maintained by dbt Labs, but that dbt Labs is comfortable recommending to users for use in production.
The key capabilities of the dbt-glue adapter are as follows:
- Runs SQL as Spark SQL on AWS Glue interactive sessions
- Manages table definitions in the AWS Glue Data Catalog
- Supports open table formats such as Apache Hudi, Delta Lake, and Apache Iceberg
- Supports AWS Lake Formation permissions for fine-grained access control
In addition to these capabilities, the dbt-glue adapter is designed to optimize resource utilization with several strategies on top of AWS Glue interactive sessions.
This post demonstrates how the dbt-glue adapter helps your workload, and how you can build a modern data stack using dbt and AWS Glue with the dbt-glue adapter.
Common use cases
One common use case for dbt-glue is when a central analytics team at a large corporation is responsible for monitoring operational efficiency. They ingest application logs into raw Parquet tables in an Amazon Simple Storage Service (Amazon S3) data lake. Additionally, they extract structured data from operational systems capturing the company's organizational structure and the costs of different operational components, which they store in the raw zone using Iceberg tables to maintain the original schema, facilitating easy access to the data. The team uses dbt-glue to build a transformed gold model optimized for business intelligence (BI). The gold model joins the technical logs with billing data and organizes the metrics per business unit. The gold model uses Iceberg's ability to support the data warehouse-style modeling needed for performant BI analytics in a data lake. The combination of Iceberg and dbt-glue allows the team to efficiently build a data model that's ready to be consumed.
Another common use case is when an analytics team in a company that has an S3 data lake creates a new data product in order to enrich its existing data lake with medical data. Let's say that this company is located in Europe and the data product must comply with the GDPR. For this, the company uses Iceberg to meet requirements such as the right to be forgotten and the deletion of data. The company uses dbt to model its data product on its existing data lake, due to dbt's compatibility with AWS Glue and Iceberg and the simplicity that the dbt-glue adapter brings to the use of this storage format.
How dbt and dbt-glue work
The following are key dbt features:
- Project – A dbt project enforces a top-level structure on the staging, models, permissions, and adapters. A project can be checked into a GitHub repo for version control.
- SQL – dbt relies on SQL select statements for defining data transformation logic. Instead of raw SQL, dbt offers templatized SQL (using Jinja) that allows code modularity. Instead of having to copy and paste SQL in multiple places, data engineers can define modular transforms and call those from other places within the project. Having a modular pipeline helps data engineers collaborate on the same project.
- Models – dbt models are primarily written as a SELECT statement and saved as a .sql file. Data engineers define dbt models for their data representations. To learn more, refer to About dbt models.
- Materializations – Materializations are strategies for persisting dbt models in a warehouse. There are five types of materializations built into dbt: table, view, incremental, ephemeral, and materialized view. To learn more, refer to Materializations and Incremental models.
- Data lineage – dbt tracks data lineage, allowing you to understand the origin of data and how it flows through different transformations. dbt also supports impact analysis, which helps identify the downstream effects of changes.
The high-level data flow is as follows:
- Data engineers ingest data from data sources into raw tables and define table definitions for the raw tables.
- Data engineers write dbt models with templatized SQL.
- The dbt adapter converts the dbt models to SQL statements compatible with the data warehouse.
- The data warehouse runs the SQL statements to create intermediate tables or final tables, views, or materialized views.
The following diagram illustrates the architecture.
dbt-glue works with the following steps:
- The dbt-glue adapter converts dbt models to SQL statements compatible with Spark SQL.
- AWS Glue interactive sessions run the SQL statements to create intermediate tables or final tables, views, or materialized views.
- dbt-glue supports `csv`, `parquet`, `hudi`, `delta`, and `iceberg` as `file_format` options.
- With the dbt-glue adapter, `table` or `incremental` are the materializations commonly used at the destination. There are three strategies for incremental materialization. The `merge` strategy requires `hudi`, `delta`, or `iceberg`. With the other two strategies, `append` and `insert_overwrite`, you can use `csv`, `parquet`, `hudi`, `delta`, or `iceberg`.
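As an illustration of how these options fit together, an incremental model using the `merge` strategy on Iceberg might be configured like this (the model, source, and column names are hypothetical, not from this post's project):

```sql
-- Sketch of an incremental dbt model for dbt-glue (illustrative names)
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    file_format='iceberg',
    unique_key='id'
) }}

select id, value, updated_at
from {{ source('raw', 'events') }}
{% if is_incremental() %}
  -- on incremental runs, only pick up rows newer than what is already materialized
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```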
The following diagram illustrates this architecture.
Example use case
In this post, we use data from the New York City Taxi Records dataset. This dataset is available in the Registry of Open Data on AWS (RODA), which is a repository containing public datasets from AWS resources. The raw Parquet table in this dataset stores trip records.
The objective is to create the following three tables, which contain metrics based on the raw table:
- silver_avg_metrics – Basic metrics based on NYC Taxi Open Data for the year 2016
- gold_passengers_metrics – Metrics per passenger based on the silver metrics table
- gold_cost_metrics – Metrics per cost based on the silver metrics table
The final goal is to create two well-designed gold tables that store already aggregated results in Iceberg format for ad hoc queries through Amazon Athena.
Prerequisites
The instructions require the following prerequisites:
- An AWS Identity and Access Management (IAM) role with all the necessary permissions to run an AWS Glue interactive session and the dbt-glue adapter
- An AWS Glue database and table to store the metadata related to the NYC taxi records dataset
- An S3 bucket to use as output and to store the processed data
- An Athena configuration (a workgroup and an S3 bucket to store the output) to explore the dataset
- An AWS Lambda function (created as an AWS CloudFormation custom resource) that updates all the partitions in the AWS Glue table
With these prerequisites, we simulate the situation in which data engineers have already ingested data from data sources into raw tables and defined table definitions for the raw tables.
For ease of use, we prepared a CloudFormation template that deploys all the required infrastructure. To create these resources, choose Launch Stack in the us-east-1 Region, and follow the instructions:
Install dbt, the dbt CLI, and the dbt adapter
The dbt CLI is a command line interface for running dbt projects. It's free to use and available as an open source project. Install dbt and the dbt CLI with the following code:
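A typical pip-based installation (assuming a Python environment with pip available; no version pin shown) is:

```shell
pip3 install --upgrade dbt-core
```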
For more information, refer to How to install dbt, What is dbt?, and Viewpoint.
Install the dbt adapter with the following code:
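Again assuming pip, the adapter install would look like:

```shell
pip3 install --upgrade dbt-glue
```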
Create a dbt project
Complete the following steps to create a dbt project:
- Run the `dbt init` command to create and initialize a new empty dbt project:
- For the project name, enter `dbt_glue_demo`.
- For the database, choose `glue`.
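The initialization step above is a single command; dbt then prompts interactively for the project name and database:

```shell
dbt init
```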
Now the empty project has been created. The directory structure is shown as follows:
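The exact layout depends on your dbt version, but a freshly initialized project generally looks like this:

```
dbt_glue_demo/
├── analyses/
├── macros/
├── models/
│   └── example/
├── seeds/
├── snapshots/
├── tests/
└── dbt_project.yml
```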
Create a source
The next step is to create a source table definition. We add `models/source_tables.yml` with the following contents:
This source definition corresponds to the AWS Glue table `nyctaxi.records`, which we created in the CloudFormation stack.
Create models
In this step, we create a dbt model that represents the average values for trip duration, passenger count, trip distance, and total amount of charges. Complete the following steps:
- Create the `models/silver/` directory.
- Create the file `models/silver/silver_avg_metrics.sql` with the following contents:
- Create the file `models/silver/schema.yml` with the following contents:
- Create the `models/gold/` directory.
- Create the file `models/gold/gold_cost_metrics.sql` with the following contents:
- Create the file `models/gold/gold_passengers_metrics.sql` with the following contents:
- Create the file `models/gold/schema.yml` with the following contents:
- Remove the `models/example/` folder, because it's just an example created by the `dbt init` command.
Configure the dbt project
`dbt_project.yml` is a key configuration file for dbt projects. It contains the following code:
We configure `dbt_project.yml` to replace the preceding code with the following:
This is because we want to materialize the models under `silver` as Parquet tables.
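The relevant models block could look like the following sketch (the Parquet setting for `silver` comes from the text; the Iceberg setting for `gold` is an assumption based on the stated goal of Iceberg gold tables):

```yaml
models:
  dbt_glue_demo:
    silver:
      +materialized: table
      +file_format: parquet
    gold:
      +materialized: table
      +file_format: iceberg
```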
Configure a dbt profile
A dbt profile is a configuration that specifies how to connect to a particular database. The profiles are defined in the `profiles.yml` file within a dbt project.
Complete the following steps to configure a dbt profile:
- Create the `profiles` directory.
- Create the file `profiles/profiles.yml` with the following contents:
- Create the `profiles/iceberg/` directory.
- Create the file `profiles/iceberg/profiles.yml` with the following contents:
The last two lines are added to set Iceberg configurations on AWS Glue interactive sessions.
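A Glue profile with those Iceberg settings could be sketched as follows; the Region, worker settings, schema name, and Spark conf string are placeholders, and the role ARN and S3 location are read from environment variables:

```yaml
dbt_glue_demo:
  target: dev
  outputs:
    dev:
      type: glue
      role_arn: "{{ env_var('DBT_ROLE_ARN') }}"
      region: us-east-1
      workers: 2
      worker_type: G.1X
      schema: dbt_glue_demo_nyc_metrics
      location: "{{ env_var('DBT_S3_LOCATION') }}"
      datalake_formats: iceberg
      conf: "spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions"
```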
Run the dbt project
Now it's time to run the dbt project. Complete the following steps:
- To run the dbt project, you must be in the project folder:
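Assuming the project name chosen earlier, that means changing into the project directory:

```shell
cd dbt_glue_demo
```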
- The project requires you to set environment variables in order to run on your AWS account:
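For example, variables for the IAM role and output location might be exported like this (the variable names and values are placeholders for your own role ARN and bucket):

```shell
export DBT_ROLE_ARN="arn:aws:iam::123456789012:role/GlueInteractiveSessionRole"
export DBT_S3_LOCATION="s3://your-dbt-output-bucket/"
```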
- Make sure the profile is set up correctly from the command line:
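dbt's built-in connection check can verify this; the `--profiles-dir` value is an assumption based on the directories created earlier:

```shell
dbt debug --profiles-dir profiles/iceberg
```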
If you see any failures, check whether you provided the correct IAM role ARN and S3 location in Step 2.
- Run the models with the following code:
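Sketched as a command, again assuming the Iceberg profile directory (adjust `--profiles-dir` if you use a different one):

```shell
dbt run --profiles-dir profiles/iceberg
```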
Now the tables are successfully created in the AWS Glue Data Catalog, and the data is materialized in the Amazon S3 location.
You can verify these tables by opening the AWS Glue console, choosing Databases in the navigation pane, and opening `dbt_glue_demo_nyc_metrics`.
Query materialized tables through Athena
Let's query the target tables using Athena to verify the materialized tables. Complete the following steps:
- On the Athena console, switch the workgroup to `athena-dbt-glue-aws-blog`.
- If the workgroup `athena-dbt-glue-aws-blog` settings dialog box appears, choose Acknowledge.
- Use the following query to explore the metrics created by the dbt project:
The following screenshot shows the results of this query.
Review dbt documentation
Complete the following steps to review your documentation:
- Generate the documentation for the project:
- Run the following command to open the documentation in your browser:
- In the navigation pane, choose `gold_cost_metrics` under `dbt_glue_demo/models/gold`.
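The two documentation commands above can be sketched as follows (the `--profiles-dir` value is an assumption):

```shell
dbt docs generate --profiles-dir profiles/iceberg
dbt docs serve
```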
You can see the detailed view of the model `gold_cost_metrics`, as shown in the following screenshot.
- To see the lineage graph, choose the circle icon at the bottom right.
Clean up
To clean up your environment, complete the following steps:
- Delete the database created by dbt:
- Delete all generated data:
- Delete the CloudFormation stack:
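Sketched with the AWS CLI, with placeholders for your bucket and stack name:

```shell
# Delete the database created by dbt
aws glue delete-database --name dbt_glue_demo_nyc_metrics

# Delete all generated data (replace with your DBT_S3_LOCATION value)
aws s3 rm s3://your-dbt-output-bucket/ --recursive

# Delete the CloudFormation stack (replace with the stack name you chose)
aws cloudformation delete-stack --stack-name your-stack-name
```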
Conclusion
This post demonstrated how the dbt-glue adapter helps your workload, and how you can build a modern data stack using dbt and AWS Glue with the dbt-glue adapter. You learned the end-to-end operations and data flow for data engineers to build and manage a data stack using dbt and the dbt-glue adapter. To report issues or request a feature enhancement, feel free to open an issue on GitHub.
About the authors
Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team at Amazon Web Services. He is based in Tokyo, Japan, and is responsible for building software artifacts to help customers. In his spare time, he enjoys riding his road bike.
Benjamin Menuet is a Senior Data Architect on the AWS Professional Services team at Amazon Web Services. He helps customers develop data and analytics solutions to accelerate their business outcomes. Outside of work, Benjamin is a trail runner and has finished some iconic races like the UTMB.
Akira Ajisaka is a Senior Software Development Engineer on the AWS Glue team. He likes open source software and distributed systems. In his spare time, he enjoys playing arcade games.
Kinshuk Pahare is a Principal Product Manager on the AWS Glue team at Amazon Web Services.
Jason Ganz is the manager of the Developer Experience (DX) team at dbt Labs.