Thursday, November 7, 2024

Build and manage your modern data stack using dbt and AWS Glue through dbt-glue, the new “trusted” dbt adapter

dbt is an open source, SQL-first templating engine that allows you to write repeatable and extensible data transforms in Python and SQL. dbt focuses on the transform layer of extract, load, transform (ELT) or extract, transform, load (ETL) processes across data warehouses and databases through specific engine adapters to achieve extract and load functionality. It enables data engineers, data scientists, and analytics engineers to define the business logic with SQL select statements and eliminates the need to write boilerplate data manipulation language (DML) and data definition language (DDL) expressions. dbt lets data engineers quickly and collaboratively deploy analytics code following software engineering best practices like modularity, portability, continuous integration and continuous delivery (CI/CD), and documentation.

dbt is predominantly used by data warehouse (such as Amazon Redshift) customers who want to keep their data transform logic separate from storage and engine. We have seen strong customer demand to expand its scope to cloud-based data lakes, because data lakes are increasingly the enterprise solution for large-scale data initiatives due to their power and capabilities.

In 2022, AWS published a dbt adapter called dbt-glue, an open source, battle-tested dbt AWS Glue adapter that allows data engineers to use dbt for cloud-based data lakes along with data warehouses and databases, paying for just the compute they need. The dbt-glue adapter democratized access to data lakes for dbt users, and enabled many users to effortlessly run their transformation workloads on the cloud with the serverless data integration capability of AWS Glue. Since the launch of the adapter, AWS has continued investing in dbt-glue to cover more requirements.

Today, we’re pleased to announce that the dbt-glue adapter is now a trusted adapter based on our strategic collaboration with dbt Labs. Trusted adapters are adapters not maintained by dbt Labs, but which dbt Labs is comfortable recommending to users for use in production.

The key capabilities of the dbt-glue adapter are as follows:

  • Runs SQL as Spark SQL on AWS Glue interactive sessions
  • Manages table definitions in the AWS Glue Data Catalog
  • Supports open table formats such as Apache Hudi, Delta Lake, and Apache Iceberg
  • Supports AWS Lake Formation permissions for fine-grained access control

In addition to these capabilities, the dbt-glue adapter is designed to optimize resource utilization with several techniques on top of AWS Glue interactive sessions.

This post demonstrates how the dbt-glue adapter helps your workload, and how you can build a modern data stack using dbt and AWS Glue with the dbt-glue adapter.

Common use cases

One common use case for dbt-glue is when a central analytics team at a large corporation is responsible for monitoring operational efficiency. They ingest application logs into raw Parquet tables in an Amazon Simple Storage Service (Amazon S3) data lake. Additionally, they extract organized data from operational systems capturing the company’s organizational structure and the costs of diverse operational components, which they store in the raw zone using Iceberg tables to maintain the original schema, facilitating easy access to the data. The team uses dbt-glue to build a transformed gold model optimized for business intelligence (BI). The gold model joins the technical logs with billing data and organizes the metrics per business unit. The gold model uses Iceberg’s ability to support the data warehouse-style modeling needed for performant BI analytics in a data lake. The combination of Iceberg and dbt-glue allows the team to efficiently build a data model that’s ready to be consumed.

Another common use case is when an analytics team in a company that has an S3 data lake creates a new data product in order to enrich its existing data lake with medical data. Let’s say that this company is located in Europe and the data product must comply with the GDPR. For this, the company uses Iceberg to meet requirements such as the right to be forgotten and the deletion of data. The company uses dbt to model its data product on its existing data lake due to its compatibility with AWS Glue and Iceberg and the simplicity that the dbt-glue adapter brings to the use of this storage format.

How dbt and dbt-glue work

The following are key dbt features:

  • Project – A dbt project enforces a top-level structure on the staging, models, permissions, and adapters. A project can be checked into a GitHub repo for version control.
  • SQL – dbt relies on SQL select statements for defining data transformation logic. Instead of raw SQL, dbt offers templatized SQL (using Jinja) that allows code modularity. Instead of having to copy/paste SQL in multiple places, data engineers can define modular transforms and call those from other places within the project. Having a modular pipeline helps data engineers collaborate on the same project.
  • Models – dbt models are primarily written as a SELECT statement and saved as a .sql file. Data engineers define dbt models for their data representations. To learn more, refer to About dbt models.
  • Materializations – Materializations are strategies for persisting dbt models in a warehouse. There are five types of materializations built into dbt: table, view, incremental, ephemeral, and materialized view. To learn more, refer to Materializations and Incremental models.
  • Data lineage – dbt tracks data lineage, allowing you to understand the origin of data and how it flows through different transformations. dbt also supports impact analysis, which helps identify the downstream effects of changes.
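As a minimal sketch of the SQL and model concepts above (the model and column names here are hypothetical, not part of the project built later in this post), a downstream model reuses another model through Jinja’s ref() function instead of duplicating its SQL:

```sql
-- daily_revenue.sql: a hypothetical dbt model.
-- ref() resolves to the table or view materialized by the stg_orders model,
-- so the logic in stg_orders is defined once and reused here.
SELECT order_date
    , SUM(amount) AS revenue
FROM {{ ref('stg_orders') }}
GROUP BY order_date
```

dbt also uses these ref() calls to infer the dependency graph between models and run them in the correct order.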

The high-level data flow is as follows:

  1. Data engineers ingest data from data sources into raw tables and define table definitions for the raw tables.
  2. Data engineers write dbt models with templatized SQL.
  3. The dbt adapter converts the dbt models to SQL statements compatible with the data warehouse.
  4. The data warehouse runs the SQL statements to create intermediate tables or final tables, views, or materialized views.

The following diagram illustrates the architecture.

dbt-glue works with the following steps:

  1. The dbt-glue adapter converts dbt models to SQL statements compatible with Spark SQL.
  2. AWS Glue interactive sessions run the SQL statements to create intermediate tables or final tables, views, or materialized views.
  3. dbt-glue supports csv, parquet, hudi, delta, and iceberg as fileformat.
  4. With the dbt-glue adapter, table or incremental are commonly used as materializations at the destination. There are three strategies for incremental materialization. The merge strategy requires hudi, delta, or iceberg. With the other two strategies, append and insert_overwrite, you can use csv, parquet, hudi, delta, or iceberg.
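As a sketch of how a materialization, incremental strategy, and file format are chosen together in a model’s config block (the append-with-Parquet combination and the upstream model name are illustrative assumptions, not part of this post’s project):

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='append',
    file_format='parquet'
) }}
-- On incremental runs, select only rows newer than what is already
-- in the destination table ({{ this }}), then append them.
SELECT * FROM {{ ref('upstream_events') }}
{% if is_incremental() %}
WHERE event_date > (SELECT max(event_date) FROM {{ this }})
{% endif %}
```

Because the strategy here is append rather than merge, plain Parquet suffices; the merge strategy would require hudi, delta, or iceberg.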

The following diagram illustrates this architecture.

Example use case

In this post, we use data from the New York City Taxi Records dataset. This dataset is available in the Registry of Open Data on AWS (RODA), which is a repository containing public datasets from AWS resources. The raw Parquet table records in this dataset store trip records.

The objective is to create the following three tables, which contain metrics based on the raw table:

  • silver_avg_metrics – Basic metrics based on NYC Taxi Open Data for the year 2016
  • gold_passengers_metrics – Metrics per passenger based on the silver metrics table
  • gold_cost_metrics – Metrics per cost based on the silver metrics table

The final goal is to create two well-designed gold tables that store already aggregated results in Iceberg format for ad hoc queries through Amazon Athena.

Prerequisites

This walkthrough requires the following prerequisites:

  • An AWS Identity and Access Management (IAM) role with all the necessary permissions to run an AWS Glue interactive session and the dbt-glue adapter
  • An AWS Glue database and table to store the metadata related to the NYC taxi records dataset
  • An S3 bucket to use as output and store the processed data
  • An Athena configuration (a workgroup and an S3 bucket to store the output) to explore the dataset
  • An AWS Lambda function (created as an AWS CloudFormation custom resource) that updates all the partitions in the AWS Glue table

With these prerequisites, we simulate the situation where data engineers have already ingested data from data sources into raw tables and defined table definitions for the raw tables.

For ease of use, we prepared a CloudFormation template. This template deploys all the required infrastructure. To create these resources, choose Launch Stack in the us-east-1 Region, and follow the instructions:

Install dbt, the dbt CLI, and the dbt adapter

The dbt CLI is a command line interface for running dbt projects. It’s free to use and available as an open source project. Install dbt and the dbt CLI with the following code:

$ pip3 install --no-cache-dir dbt-core

For more information, refer to How to install dbt, What is dbt?, and Viewpoint.

Install the dbt adapter with the following code:

$ pip3 install --no-cache-dir dbt-glue

Create a dbt project

Complete the following steps to create a dbt project:

  1. Run the dbt init command to create and initialize a new empty dbt project.
  2. For the project name, enter dbt_glue_demo.
  3. For the database, choose glue.

Now the empty project has been created. The directory structure is shown as follows:

$ cd dbt_glue_demo 
$ tree .
.
├── README.md
├── analyses
├── dbt_project.yml
├── macros
├── models
│   └── example
│       ├── my_first_dbt_model.sql
│       ├── my_second_dbt_model.sql
│       └── schema.yml
├── seeds
├── snapshots
└── tests

Create a source

The next step is to create a source table definition. We add models/source_tables.yml with the following contents:

version: 2

sources:
  - name: data_source
    schema: nyctaxi

    tables:
      - name: records

This source definition corresponds to the AWS Glue table nyctaxi.records, which we created with the CloudFormation stack.
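With this source defined, any model can select from the table through dbt’s source() function rather than hard-coding the database and table names. For example (an illustrative query, not part of the project built in this post):

```sql
-- Count the raw trip records via the source defined above
SELECT count(*) AS record_count
FROM {{ source('data_source', 'records') }}
```

Centralizing the physical names in the source definition means a rename only has to be made in one place.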

Create models

In this step, we create a dbt model that represents the average values for trip duration, passenger count, trip distance, and total amount charged. Complete the following steps:

  1. Create the models/silver/ directory.
  2. Create the file models/silver/silver_avg_metrics.sql with the following contents:
    WITH source_avg as ( 
        SELECT avg((CAST(dropoff_datetime as LONG) - CAST(pickup_datetime as LONG))/60) as avg_duration 
        , avg(passenger_count) as avg_passenger_count 
        , avg(trip_distance) as avg_trip_distance 
        , avg(total_amount) as avg_total_amount
        , year
        , month 
        , type
        FROM {{ source('data_source', 'records') }} 
        WHERE year = "2016"
        AND dropoff_datetime is not null 
        GROUP BY year, month, type
    ) 
    SELECT *
    FROM source_avg

  3. Create the file models/silver/schema.yml with the following contents:
    version: 2
    
    models:
      - name: silver_avg_metrics
        description: This table has basic metrics based on NYC Taxi Open Data for the year 2016
    
        columns:
          - name: avg_duration
            description: The average duration of a NYC Taxi trip
    
          - name: avg_passenger_count
            description: The average number of passengers per NYC Taxi trip
    
          - name: avg_trip_distance
            description: The average NYC Taxi trip distance
    
          - name: avg_total_amount
            description: The average amount of a NYC Taxi trip
    
          - name: year
            description: The year of the NYC Taxi trip
    
          - name: month
            description: The month of the NYC Taxi trip 
    
          - name: type
            description: The type of the NYC Taxi 

  4. Create the models/gold/ directory.
  5. Create the file models/gold/gold_cost_metrics.sql with the following contents:
    {{ config(
        materialized='incremental',
        incremental_strategy='merge',
        unique_key=["year", "month", "type"],
        file_format='iceberg',
        iceberg_expire_snapshots='False',
        table_properties={'format-version': '2'}
    ) }}
    SELECT (avg_total_amount/avg_trip_distance) as avg_cost_per_distance
    , (avg_total_amount/avg_duration) as avg_cost_per_minute
    , year
    , month 
    , type 
    FROM {{ ref('silver_avg_metrics') }}

  6. Create the file models/gold/gold_passengers_metrics.sql with the following contents:
    {{ config(
        materialized='incremental',
        incremental_strategy='merge',
        unique_key=["year", "month", "type"],
        file_format='iceberg',
        iceberg_expire_snapshots='False',
        table_properties={'format-version': '2'}
    ) }}
    SELECT (avg_total_amount/avg_passenger_count) as avg_cost_per_passenger
    , (avg_duration/avg_passenger_count) as avg_duration_per_passenger
    , (avg_trip_distance/avg_passenger_count) as avg_trip_distance_per_passenger
    , year
    , month 
    , type 
    FROM {{ ref('silver_avg_metrics') }}

  7. Create the file models/gold/schema.yml with the following contents:
    version: 2
    
    models:
      - name: gold_cost_metrics
        description: This table has metrics per cost based on NYC Taxi Open Data
    
        columns:
          - name: avg_cost_per_distance
            description: The average cost per distance of a NYC Taxi trip
    
          - name: avg_cost_per_minute
            description: The average cost per minute of a NYC Taxi trip
    
          - name: year
            description: The year of the NYC Taxi trip
    
          - name: month
            description: The month of the NYC Taxi trip
    
          - name: type
            description: The type of the NYC Taxi
    
      - name: gold_passengers_metrics
        description: This table has metrics per passenger based on NYC Taxi Open Data
    
        columns:
          - name: avg_cost_per_passenger
            description: The average cost per passenger for a NYC Taxi trip
    
          - name: avg_duration_per_passenger
            description: The average trip duration per passenger for a NYC Taxi trip
    
          - name: avg_trip_distance_per_passenger
            description: The average NYC Taxi trip distance per passenger
    
          - name: year
            description: The year of the NYC Taxi trip
    
          - name: month
            description: The month of the NYC Taxi trip 
    
          - name: type
            description: The type of the NYC Taxi

  8. Remove the models/example/ folder, because it’s just an example created by the dbt init command.

Configure the dbt project

dbt_project.yml is a key configuration file for dbt projects. It contains the following code:

models:
  dbt_glue_demo:
    # Config indicated by + and applies to all files under models/example/
    example:
      +materialized: view

We configure dbt_project.yml to replace the preceding code with the following:

models:
  dbt_glue_demo:
    silver:
      +materialized: table

This is because we want to materialize the models under silver as Parquet tables.

Configure a dbt profile

A dbt profile is a configuration that specifies how to connect to a particular database. The profiles are defined in the profiles.yml file within a dbt project.

Complete the following steps to configure a dbt profile:

  1. Create the profiles directory.
  2. Create the file profiles/profiles.yml with the following contents:
    dbt_glue_demo:
      target: dev
      outputs:
        dev:
          type: glue
          query-comment: demo-nyctaxi
          role_arn: "{{ env_var('DBT_ROLE_ARN') }}"
          region: us-east-1
          workers: 5
          worker_type: G.1X
          schema: "dbt_glue_demo_nyc_metrics"
          database: "dbt_glue_demo_nyc_metrics"
          session_provisioning_timeout_in_seconds: 120
          location: "{{ env_var('DBT_S3_LOCATION') }}"

  3. Create the profiles/iceberg/ directory.
  4. Create the file profiles/iceberg/profiles.yml with the following contents:
    dbt_glue_demo:
      target: dev
      outputs:
        dev:
          type: glue
          query-comment: demo-nyctaxi
          role_arn: "{{ env_var('DBT_ROLE_ARN') }}"
          region: us-east-1
          workers: 5
          worker_type: G.1X
          schema: "dbt_glue_demo_nyc_metrics"
          database: "dbt_glue_demo_nyc_metrics"
          session_provisioning_timeout_in_seconds: 120
          location: "{{ env_var('DBT_S3_LOCATION') }}"
          datalake_formats: "iceberg"
          conf: "--conf spark.sql.catalog.glue_catalog=org.apache.iceberg.spark.SparkCatalog --conf spark.sql.catalog.glue_catalog.warehouse={{ env_var('DBT_S3_LOCATION') }}warehouse/ --conf spark.sql.catalog.glue_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog --conf spark.sql.catalog.glue_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions"

The last two lines are added to set Iceberg configurations on AWS Glue interactive sessions.

Run the dbt project

Now it’s time to run the dbt project. Complete the following steps:

  1. To run the dbt project, you need to be in the project folder:
    $ cd dbt_glue_demo

  2. The project requires you to set environment variables in order to run on the AWS account:
    $ export DBT_ROLE_ARN="arn:aws:iam::$(aws sts get-caller-identity --query "Account" --output text):role/GlueInteractiveSessionRole"
    $ export DBT_S3_LOCATION="s3://aws-dbt-glue-datalake-$(aws sts get-caller-identity --query "Account" --output text)-us-east-1"

  3. Make sure the profile is set up correctly from the command line:
    $ dbt debug --profiles-dir profiles
    ...
    05:34:22 Connection test: [OK connection ok]
    05:34:22 All checks passed!

If you see any failures, check whether you provided the correct IAM role ARN and S3 location in Step 2.

  4. Run the models with the following code:
    $ dbt run -m silver --profiles-dir profiles
    $ dbt run -m gold --profiles-dir profiles/iceberg/

Now the tables are successfully created in the AWS Glue Data Catalog, and the data is materialized in the Amazon S3 location.

You can verify these tables by opening the AWS Glue console, choosing Databases in the navigation pane, and opening dbt_glue_demo_nyc_metrics.

Query materialized tables through Athena

Let’s query the target tables using Athena to verify the materialized tables. Complete the following steps:

  1. On the Athena console, switch the workgroup to athena-dbt-glue-aws-blog.
  2. If the workgroup athena-dbt-glue-aws-blog settings dialog box appears, choose Acknowledge.
  3. Use the following query to explore the metrics created by the dbt project:
    SELECT cm.avg_cost_per_minute
        , cm.avg_cost_per_distance
        , pm.avg_cost_per_passenger
        , cm.year
        , cm.month
        , cm.type
    FROM "dbt_glue_demo_nyc_metrics"."gold_passengers_metrics" pm
    LEFT JOIN "dbt_glue_demo_nyc_metrics"."gold_cost_metrics" cm
        ON cm.type = pm.type
        AND cm.year = pm.year
        AND cm.month = pm.month
    WHERE cm.type = 'yellow'
        AND cm.year = '2016'
        AND cm.month = '6'

The following screenshot shows the results of this query.

Review dbt documentation

Complete the following steps to review your documentation:

  1. Generate the documentation for the project:
    $ dbt docs generate --profiles-dir profiles/iceberg
    11:41:51  Running with dbt=1.7.1
    11:41:51  Registered adapter: glue=1.7.1
    11:41:51  Unable to do partial parsing because profiles have changed
    11:41:52  Found 3 models, 1 source, 0 exposures, 0 metrics, 478 macros, 0 groups, 0 semantic models
    11:41:52  
    11:41:53  Concurrency: 1 threads (target='dev')
    11:41:53  
    11:41:53  Building catalog
    11:43:32  Catalog written to /Users/username/Documents/workspace/dbt_glue_demo/target/catalog.json

  2. Run the following command to open the documentation in your browser:
    $ dbt docs serve --profiles-dir profiles/iceberg

  3. In the navigation pane, choose gold_cost_metrics under dbt_glue_demo/models/gold.

You can see the detailed view of the model gold_cost_metrics, as shown in the following screenshot.

  4. To see the lineage graph, choose the circle icon at the bottom right.

Clean up

To clean up your environment, complete the following steps:

  1. Delete the database created by dbt:
    $ aws glue delete-database --name dbt_glue_demo_nyc_metrics

  2. Delete all generated data:
    $ aws s3 rm s3://aws-dbt-glue-datalake-$(aws sts get-caller-identity --query "Account" --output text)-us-east-1/ --recursive
    $ aws s3 rm s3://aws-athena-dbt-glue-query-results-$(aws sts get-caller-identity --query "Account" --output text)-us-east-1/ --recursive

  3. Delete the CloudFormation stack:
    $ aws cloudformation delete-stack --stack-name dbt-demo

Conclusion

This post demonstrated how the dbt-glue adapter helps your workload, and how you can build a modern data stack using dbt and AWS Glue with the dbt-glue adapter. You learned the end-to-end operations and data flow for data engineers to build and manage a data stack using dbt and the dbt-glue adapter. To report issues or request a feature enhancement, feel free to open an issue on GitHub.


About the authors

Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team at Amazon Web Services. He is based in Tokyo, Japan. He is responsible for building software artifacts to help customers. In his spare time, he enjoys cycling on his road bike.

Benjamin Menuet is a Senior Data Architect on the AWS Professional Services team at Amazon Web Services. He helps customers develop data and analytics solutions to accelerate their business outcomes. Outside of work, Benjamin is a trail runner and has finished some iconic races like the UTMB.

Akira Ajisaka is a Senior Software Development Engineer on the AWS Glue team. He likes open source software and distributed systems. In his spare time, he enjoys playing arcade games.

Kinshuk Pahare is a Principal Product Manager on the AWS Glue team at Amazon Web Services.

Jason Ganz is the manager of the Developer Experience (DX) team at dbt Labs.
