
dbt

There are 2 sources that provide integration with dbt


dbt

The artifacts used by this source are:

  • dbt manifest file
    • This file contains model, source, test, and lineage data.
  • dbt catalog file
    • This file contains schema data.
    • dbt does not record schema data for ephemeral models; as a result, DataHub will show ephemeral models in the lineage, but there will be no associated schema for them.
  • dbt sources file
    • This file contains metadata for sources with freshness checks.
    • We transfer dbt's freshness checks to DataHub's last-modified fields.
    • Note that this file is optional – if not specified, we'll use time of ingestion instead as a proxy for time last-modified.
  • dbt run_results file
    • This file contains metadata from the result of a dbt run, e.g. dbt test.
    • When provided, we transfer dbt test run results into assertion run events so you can see a timeline of test runs on the dataset.

dbt-cloud

This source pulls dbt metadata directly from the dbt Cloud APIs.

You'll need to have a dbt Cloud job set up to run your dbt project, and "Generate docs on run" should be enabled.

The token should have the "read metadata" permission.

To get the required IDs, go to the job details page (this is the one with the "Run History" table), and look at the URL. It should look something like this: https://cloud.getdbt.com/next/deploy/107298/projects/175705/jobs/148094. In this example, the account ID is 107298, the project ID is 175705, and the job ID is 148094.

Ingesting metadata from dbt requires either using the dbt module or the dbt-cloud module.

Concept Mapping

| Source Concept | DataHub Concept | Notes |
| --- | --- | --- |
| "dbt" | Data Platform | |
| dbt Source | Dataset | Subtype source |
| dbt Seed | Dataset | Subtype seed |
| dbt Model - materialized | Dataset | Subtype table |
| dbt Model - view | Dataset | Subtype view |
| dbt Model - incremental | Dataset | Subtype incremental |
| dbt Model - ephemeral | Dataset | Subtype ephemeral |
| dbt Snapshot | Dataset | Subtype snapshot |
| dbt Test | Assertion | |
| dbt Test Result | Assertion Run Result | |

Note:

  1. The source also generates lineage between dbt nodes (e.g. ephemeral nodes that depend on other dbt sources), as well as lineage between dbt nodes and the underlying (target) platform nodes (e.g. BigQuery Table -> dbt Source, dbt View -> BigQuery View).
  2. We also support automated actions (such as adding a tag, term, or owner) based on properties defined in dbt meta.

Module dbt

Certified

Important Capabilities

| Capability | Status | Notes |
| --- | --- | --- |
| Dataset Usage | | |
| Detect Deleted Entities | | Enabled via stateful ingestion |
| Table-Level Lineage | | Enabled by default |

The artifacts used by this source are:

  • dbt manifest file
    • This file contains model, source, test, and lineage data.
  • dbt catalog file
    • This file contains schema data.
    • dbt does not record schema data for ephemeral models; as a result, DataHub will show ephemeral models in the lineage, but there will be no associated schema for them.
  • dbt sources file
    • This file contains metadata for sources with freshness checks.
    • We transfer dbt's freshness checks to DataHub's last-modified fields.
    • Note that this file is optional – if not specified, we'll use time of ingestion instead as a proxy for time last-modified.
  • dbt run_results file
    • This file contains metadata from the result of a dbt run, e.g. dbt test.
    • When provided, we transfer dbt test run results into assertion run events so you can see a timeline of test runs on the dataset.
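
All of these artifacts are produced by standard dbt commands, so they can be generated as part of your normal dbt runs. A minimal sketch, assuming a typical dbt project that writes to the default target/ directory:

# Writes target/manifest.json and target/catalog.json
dbt docs generate

# Optional: runs freshness checks and writes target/sources.json
dbt source freshness

# Optional: runs tests and writes target/run_results.json
dbt test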

CLI based Ingestion

Install the Plugin

pip install 'acryl-datahub[dbt]'

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: "dbt"
  config:
    # Coordinates
    # To use this as-is, set the environment variable DBT_PROJECT_ROOT to the root folder of your dbt project
    manifest_path: "${DBT_PROJECT_ROOT}/target/manifest_file.json"
    catalog_path: "${DBT_PROJECT_ROOT}/target/catalog_file.json"
    sources_path: "${DBT_PROJECT_ROOT}/target/sources_file.json" # optional for freshness
    test_results_path: "${DBT_PROJECT_ROOT}/target/run_results.json" # optional for recording dbt test results after running dbt test

    # Options
    target_platform: "my_target_platform_id" # e.g. bigquery/postgres/etc.

# sink configs
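
Once the recipe is saved (the file name dbt_recipe.dhub.yaml below is just an example, reused in the dbt test section later), ingestion can be run with the DataHub CLI:

# Point DBT_PROJECT_ROOT at the root of your dbt project, then run the recipe
export DBT_PROJECT_ROOT=/path/to/my/dbt/project
datahub ingest -c dbt_recipe.dhub.yaml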

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

| Field [Required] | Type | Description | Default | Notes |
| --- | --- | --- | --- | --- |
| catalog_path [✅] | string | Path to dbt catalog JSON. See https://docs.getdbt.com/reference/artifacts/catalog-json. Note this can be a local file or a URI. | None | |
| column_meta_mapping [✅] | object | Mapping rules that will be executed against dbt column meta properties. Refer to the section below on dbt meta automated mappings. | None | |
| convert_column_urns_to_lowercase [✅] | boolean | When enabled, converts column URNs to lowercase to ensure cross-platform compatibility. If target_platform is Snowflake, the default is True. | None | |
| enable_meta_mapping [✅] | boolean | When enabled, applies the mappings that are defined through the meta_mapping directives. | True | |
| enable_owner_extraction [✅] | boolean | When enabled, ownership info will be extracted from the dbt meta. | True | |
| enable_query_tag_mapping [✅] | boolean | When enabled, applies the mappings that are defined through the query_tag_mapping directives. | True | |
| include_env_in_assertion_guid [✅] | boolean | Prior to version 0.9.4.2, the assertion GUIDs did not include the environment. If you're using multiple dbt ingestions that are only distinguished by env, then you should set this flag to True. | None | |
| incremental_lineage [✅] | boolean | When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run. | None | |
| manifest_path [✅] | string | Path to dbt manifest JSON. See https://docs.getdbt.com/reference/artifacts/manifest-json. Note this can be a local file or a URI. | None | |
| meta_mapping [✅] | object | Mapping rules that will be executed against dbt meta properties. Refer to the section below on dbt meta automated mappings. | None | |
| owner_extraction_pattern [✅] | string | Regex string to extract owner from the dbt node using the (?P<name>...) syntax of the match object, where the group name must be owner. Examples: (1) r"(?P<owner>(.*)): (\w+) (\w+)" will extract jdoe as the owner from "jdoe: John Doe"; (2) r"@(?P<owner>(.*))" will extract alice as the owner from "@alice". | None | |
| platform_instance [✅] | string | The instance of the platform that all assets produced by this recipe belong to. | None | |
| query_tag_mapping [✅] | object | Mapping rules that will be executed against dbt query_tag meta properties. Refer to the section below on dbt meta automated mappings. | None | |
| sources_path [✅] | string | Path to dbt sources JSON. See https://docs.getdbt.com/reference/artifacts/sources-json. If not specified, last-modified fields will not be populated. Note this can be a local file or a URI. | None | |
| sql_parser_use_external_process [✅] | boolean | When enabled, the SQL parser will run in an isolated, separate process. This can affect processing time but can protect against the SQL parser's memory leaks. | None | |
| strip_user_ids_from_email [✅] | boolean | Whether or not to strip the email id while adding owners using dbt meta actions. | None | |
| tag_prefix [✅] | string | Prefix added to tags during ingestion. | dbt: | |
| target_platform [✅] | string | The platform that dbt is loading onto (e.g. bigquery / redshift / postgres etc.). | None | |
| target_platform_instance [✅] | string | The platform instance for the platform that dbt is operating on. Use this if you have multiple instances of the same platform (e.g. redshift) and need to distinguish between them. | None | |
| test_results_path [✅] | string | Path to the output of a dbt test run as a run_results file in JSON format. See https://docs.getdbt.com/reference/artifacts/run-results-json. If not specified, test execution results will not be populated in DataHub. | None | |
| use_identifiers [✅] | boolean | Use model identifier instead of model name if defined (if not, default to model name). | None | |
| write_semantics [✅] | string | Whether the new tags, terms and owners to be added will override the existing ones added only by this source or not. Value for this config can be "PATCH" or "OVERRIDE". | PATCH | |
| env [✅] | string | Environment to use in namespace when constructing URNs. | PROD | |
| aws_connection [✅] | AwsConnectionConfig | When fetching manifest files from S3, configuration for AWS connection details. | None | |
| aws_connection.aws_access_key_id [❓ (required if aws_connection is set)] | string | AWS access key ID. Can be auto-detected, see https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for details. | None | |
| aws_connection.aws_endpoint_url [❓ (required if aws_connection is set)] | string | Autodetected. See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html | None | |
| aws_connection.aws_profile [❓ (required if aws_connection is set)] | string | Named AWS profile to use. Only used if access key / secret are unset. If not set, the default will be used. | None | |
| aws_connection.aws_proxy [❓ (required if aws_connection is set)] | map(str,string) | | None | |
| aws_connection.aws_region [❓ (required if aws_connection is set)] | string | AWS region code. | None | |
| aws_connection.aws_secret_access_key [❓ (required if aws_connection is set)] | string | AWS secret access key. Can be auto-detected, see https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for details. | None | |
| aws_connection.aws_session_token [❓ (required if aws_connection is set)] | string | AWS session token. Can be auto-detected, see https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for details. | None | |
| aws_connection.aws_role [❓ (required if aws_connection is set)] | UnionType (see notes for variants) | AWS roles to assume. If using the string format, the role ARN can be specified directly. If using the object format, the role can be specified in the RoleArn field and additional available arguments are documented at https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html?highlight=assume_role#STS.Client.assume_role | None | One of: string, AwsAssumeRoleConfig |
| aws_connection.aws_role.ExternalId [❓ (required if aws_role is set)] | string | External ID to use when assuming the role. | None | |
| aws_connection.aws_role.RoleArn [❓ (required if aws_role is set)] | string | ARN of the role to assume. | None | |
| entities_enabled [✅] | DBTEntitiesEnabled | Controls for enabling / disabling metadata emission for different dbt entities (models, test definitions, test results, etc.) | {'models': 'YES', 'sources': 'YES', 'seeds': 'YES', 'snapshots': 'YES', 'test_definitions': 'YES', 'test_results': 'YES'} | |
| entities_enabled.models [❓ (required if entities_enabled is set)] | Enum | Emit metadata for dbt models when set to Yes or Only. | YES | |
| entities_enabled.seeds [❓ (required if entities_enabled is set)] | Enum | Emit metadata for dbt seeds when set to Yes or Only. | YES | |
| entities_enabled.snapshots [❓ (required if entities_enabled is set)] | Enum | Emit metadata for dbt snapshots when set to Yes or Only. | YES | |
| entities_enabled.sources [❓ (required if entities_enabled is set)] | Enum | Emit metadata for dbt sources when set to Yes or Only. | YES | |
| entities_enabled.test_definitions [❓ (required if entities_enabled is set)] | Enum | Emit metadata for test definitions when set to Yes or Only. | YES | |
| entities_enabled.test_results [❓ (required if entities_enabled is set)] | Enum | Emit metadata for test results when set to Yes or Only. | YES | |
| git_info [✅] | GitReference | Reference to your git location to enable easy navigation from DataHub to your dbt files. | None | |
| git_info.branch [❓ (required if git_info is set)] | string | Branch on which your files live by default. Typically main or master. This can also be a commit hash. | main | |
| git_info.repo [❓ (required if git_info is set)] | string | Name of your Git repo, e.g. https://github.com/datahub-project/datahub or https://gitlab.com/gitlab-org/gitlab. If organization/repo is provided, we assume it is a GitHub repo. | None | |
| git_info.url_template [❓ (required if git_info is set)] | string | Template for generating a URL to a file in the repo, e.g. '{repo_url}/blob/{branch}/{file_path}'. We can infer this for GitHub and GitLab repos, and it is otherwise required. It supports the following variables: {repo_url}, {branch}, {file_path} | None | |
| node_name_pattern [✅] | AllowDenyPattern | Regex patterns for dbt model names to filter in ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True} | |
| node_name_pattern.allow [❓ (required if node_name_pattern is set)] | array(string) | | None | |
| node_name_pattern.deny [❓ (required if node_name_pattern is set)] | array(string) | | None | |
| node_name_pattern.ignoreCase [❓ (required if node_name_pattern is set)] | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
| stateful_ingestion [✅] | StatefulStaleMetadataRemovalConfig | DBT Stateful Ingestion Config. | None | |
| stateful_ingestion.enabled [❓ (required if stateful_ingestion is set)] | boolean | The type of the ingestion state provider registered with datahub. | None | |
| stateful_ingestion.ignore_new_state [❓ (required if stateful_ingestion is set)] | boolean | If set to True, ignores the current checkpoint state. | None | |
| stateful_ingestion.ignore_old_state [❓ (required if stateful_ingestion is set)] | boolean | If set to True, ignores the previous checkpoint state. | None | |
| stateful_ingestion.remove_stale_metadata [❓ (required if stateful_ingestion is set)] | boolean | Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled. | True | |
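
As an illustration of how several of these options fit together, the sketch below combines git navigation, model filtering, entity toggles, and stateful ingestion in one recipe. The repository, patterns, and pipeline name are placeholders, not requirements:

pipeline_name: "dbt_prod_ingestion" # stateful ingestion typically requires a pipeline name
source:
  type: "dbt"
  config:
    manifest_path: "${DBT_PROJECT_ROOT}/target/manifest.json"
    catalog_path: "${DBT_PROJECT_ROOT}/target/catalog.json"
    target_platform: "snowflake"

    # Link dbt nodes back to their source files in git (hypothetical repo)
    git_info:
      repo: "my-org/my-dbt-repo"
      branch: "main"

    # Only ingest models whose names match these patterns (hypothetical)
    node_name_pattern:
      allow:
        - "fct_.*"
        - "dim_.*"

    # Emit everything except test results
    entities_enabled:
      test_results: No

    # Soft-delete entities that disappear between runs
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true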

dbt meta automated mappings

dbt allows authors to define meta properties for datasets. Check out this link to learn more - dbt meta. Our dbt source allows users to define actions such as adding a tag, term, or owner. For example, if a dbt model has a meta config "has_pii": True, we can define an action that checks whether the property is set to true and, if so, adds (say) a pii tag. To leverage this feature we require users to define mappings as part of the recipe. The following section describes how you can build these mappings. Listed below are meta_mapping and column_meta_mapping sections that, among other things, look for keys like business_owner and add the owners listed there.

meta_mapping:
  business_owner:
    match: ".*"
    operation: "add_owner"
    config:
      owner_type: user
      owner_category: BUSINESS_OWNER
  has_pii:
    match: True
    operation: "add_tag"
    config:
      tag: "has_pii_test"
  int_property:
    match: 1
    operation: "add_tag"
    config:
      tag: "int_meta_property"
  double_property:
    match: 2.5
    operation: "add_term"
    config:
      term: "double_meta_property"
  data_governance.team_owner:
    match: "Finance"
    operation: "add_term"
    config:
      term: "Finance_test"
  terms_list:
    match: ".*"
    operation: "add_terms"
    config:
      separator: ","
column_meta_mapping:
  terms_list:
    match: ".*"
    operation: "add_terms"
    config:
      separator: ","
  is_sensitive:
    match: True
    operation: "add_tag"
    config:
      tag: "sensitive"

We support the following operations:

  1. add_tag - Requires tag property in config.
  2. add_term - Requires term property in config.
  3. add_terms - Accepts an optional separator property in config.
  4. add_owner - Requires owner_type property in config, which can be either user or group. Optionally accepts the owner_category config property, which you can set to one of ['TECHNICAL_OWNER', 'BUSINESS_OWNER', 'DATA_STEWARD', 'DATAOWNER'] (defaults to DATAOWNER).

Note:

  1. The dbt meta_mapping config works at the model level, while the column_meta_mapping config works at the column level. The add_owner operation is not supported at the column level.
  2. For string meta properties we support regex matching.

With regex matching, you can also use the matched value to customize how you populate the tag, term or owner fields. Here are a few advanced examples:

Data Tier - Bronze, Silver, Gold

If your meta section looks like this:

meta:
  data_tier: Bronze # chosen from [Bronze,Gold,Silver]

and you wanted to attach a glossary term like urn:li:glossaryTerm:Bronze for all the models that have this value in the meta section attached to them, the following meta_mapping section would achieve that outcome:

meta_mapping:
  data_tier:
    match: "Bronze|Silver|Gold"
    operation: "add_term"
    config:
      term: "{{ $match }}"

This matches any data_tier of Bronze, Silver, or Gold and maps it to a glossary term with the same name.

Case Numbers - create tags

If your meta section looks like this:

meta:
  case: PLT-4678 # internal Case Number

and you want to generate tags that look like case_4678 from this, you can use the following meta_mapping section:

meta_mapping:
  case:
    match: "PLT-(.*)"
    operation: "add_tag"
    config:
      tag: "case_{{ $match }}"

Stripping out leading @ sign

You can also match specific groups within the value to extract subsets of the matched value. For example, if you have a meta section that looks like this:

meta:
  owner: "@finance-team"
  business_owner: "@janet"

and you want to mark the finance-team as a group that owns the dataset (skipping the leading @ sign), while marking janet as an individual user (again, skipping the leading @ sign) that owns the dataset, you can use the following meta-mapping section.

meta_mapping:
  owner:
    match: "^@(.*)"
    operation: "add_owner"
    config:
      owner_type: group
  business_owner:
    match: "^@(?P<owner>(.*))"
    operation: "add_owner"
    config:
      owner_type: user
      owner_category: BUSINESS_OWNER

In the examples above, we show two ways of writing the matching regexes. In the first one, ^@(.*), the first matching group (a.k.a. match.group(1)) is automatically inferred. In the second example, ^@(?P<owner>(.*)), we use a named matching group (called owner, since we are matching an owner) to capture the string we want to provide to the ownership urn.

dbt query_tag automated mappings

This works similarly to the dbt meta mapping, but for query tags.

We support the following actions:

  1. add_tag - Requires tag property in config.

The example below takes the value of the tag key in the query tags and applies it as a global tag.

"query_tag_mapping":
{
"tag":
"match": ".*"
"operation": "add_tag"
"config":
"tag": "{{ $match }}"
}

Integrating with dbt test

To integrate with dbt tests, the dbt source needs access to the run_results.json file generated after a dbt test execution. Typically, this is written to the target directory. A common pattern you can follow is:

  1. Run dbt docs generate and upload manifest.json and catalog.json to a location accessible to the dbt source (e.g. s3 or local file system)
  2. Run dbt test and upload run_results.json to a location accessible to the dbt source (e.g. s3 or local file system)
  3. Run datahub ingest -c dbt_recipe.dhub.yaml with the following config parameters specified
    • test_results_path: pointing to the run_results.json file that you just created
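
Put together, a minimal sketch of this flow using local files (the recipe file name matches the one used in the steps above) looks like:

# 1. Build manifest.json and catalog.json in target/
dbt docs generate

# 2. Run tests; results land in target/run_results.json
dbt test

# 3. Ingest dbt metadata and test results into DataHub
datahub ingest -c dbt_recipe.dhub.yaml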

The connector will produce the following things:

  • Assertion definitions that are attached to the dataset (or datasets)
  • Results from running the tests attached to the timeline of the dataset

View of dbt tests for a dataset


Viewing the SQL for a dbt test


Viewing timeline for a failed dbt test


Separating test result emission from other metadata emission

You can segregate emission of test results from the emission of other dbt metadata using the entities_enabled config flag. The following recipe shows you how to emit only test results.

source:
  type: dbt
  config:
    manifest_path: _path_to_manifest_json
    catalog_path: _path_to_catalog_json
    test_results_path: _path_to_run_results_json
    target_platform: postgres
    entities_enabled:
      test_results: Only

Similarly, the following recipe shows you how to emit everything (i.e. models, sources, seeds, test definitions) but not test results:

source:
  type: dbt
  config:
    manifest_path: _path_to_manifest_json
    catalog_path: _path_to_catalog_json
    test_results_path: _path_to_run_results_json
    target_platform: postgres
    entities_enabled:
      test_results: No

Code Coordinates

  • Class Name: datahub.ingestion.source.dbt.dbt_core.DBTCoreSource
  • Browse on GitHub

Module dbt-cloud

Incubating

Important Capabilities

| Capability | Status | Notes |
| --- | --- | --- |
| Dataset Usage | | |
| Detect Deleted Entities | | Enabled via stateful ingestion |
| Table-Level Lineage | | Enabled by default |

This source pulls dbt metadata directly from the dbt Cloud APIs.

You'll need to have a dbt Cloud job set up to run your dbt project, and "Generate docs on run" should be enabled.

The token should have the "read metadata" permission.

To get the required IDs, go to the job details page (this is the one with the "Run History" table), and look at the URL. It should look something like this: https://cloud.getdbt.com/next/deploy/107298/projects/175705/jobs/148094. In this example, the account ID is 107298, the project ID is 175705, and the job ID is 148094.

CLI based Ingestion

Install the Plugin

pip install 'acryl-datahub[dbt-cloud]'

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: "dbt-cloud"
  config:
    token: ${DBT_CLOUD_TOKEN}

    # In the URL https://cloud.getdbt.com/next/deploy/107298/projects/175705/jobs/148094,
    # 107298 is the account_id, 175705 is the project_id, and 148094 is the job_id

    account_id: # set to your dbt cloud account id
    project_id: # set to your dbt cloud project id
    job_id: # set to your dbt cloud job id
    run_id: # set to your dbt cloud run id. This is optional, and defaults to the latest run

    # Options
    target_platform: "my_target_platform_id" # e.g. bigquery/postgres/etc.

# sink configs
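
After filling in the account, project, and job IDs, supply the token through the environment variable the recipe references and run the ingestion as usual (the recipe file name here is hypothetical):

# Provide the dbt Cloud API token referenced as ${DBT_CLOUD_TOKEN} in the recipe
export DBT_CLOUD_TOKEN="<your-dbt-cloud-token>"

# Run the recipe with the DataHub CLI
datahub ingest -c dbt_cloud_recipe.yaml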

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

| Field [Required] | Type | Description | Default | Notes |
| --- | --- | --- | --- | --- |
| account_id [✅] | integer | The dbt Cloud account ID to use. | None | |
| column_meta_mapping [✅] | object | Mapping rules that will be executed against dbt column meta properties. Refer to the section below on dbt meta automated mappings. | None | |
| convert_column_urns_to_lowercase [✅] | boolean | When enabled, converts column URNs to lowercase to ensure cross-platform compatibility. If target_platform is Snowflake, the default is True. | None | |
| enable_meta_mapping [✅] | boolean | When enabled, applies the mappings that are defined through the meta_mapping directives. | True | |
| enable_owner_extraction [✅] | boolean | When enabled, ownership info will be extracted from the dbt meta. | True | |
| enable_query_tag_mapping [✅] | boolean | When enabled, applies the mappings that are defined through the query_tag_mapping directives. | True | |
| include_env_in_assertion_guid [✅] | boolean | Prior to version 0.9.4.2, the assertion GUIDs did not include the environment. If you're using multiple dbt ingestions that are only distinguished by env, then you should set this flag to True. | None | |
| incremental_lineage [✅] | boolean | When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run. | None | |
| job_id [✅] | integer | The ID of the job to ingest metadata from. | None | |
| meta_mapping [✅] | object | Mapping rules that will be executed against dbt meta properties. Refer to the section below on dbt meta automated mappings. | None | |
| metadata_endpoint [✅] | string | The dbt Cloud metadata API endpoint. | https://metadata.cloud.getdbt.com/graphql | |
| owner_extraction_pattern [✅] | string | Regex string to extract owner from the dbt node using the (?P<name>...) syntax of the match object, where the group name must be owner. Examples: (1) r"(?P<owner>(.*)): (\w+) (\w+)" will extract jdoe as the owner from "jdoe: John Doe"; (2) r"@(?P<owner>(.*))" will extract alice as the owner from "@alice". | None | |
| platform_instance [✅] | string | The instance of the platform that all assets produced by this recipe belong to. | None | |
| project_id [✅] | integer | The dbt Cloud project ID to use. | None | |
| query_tag_mapping [✅] | object | Mapping rules that will be executed against dbt query_tag meta properties. Refer to the section below on dbt meta automated mappings. | None | |
| run_id [✅] | integer | The ID of the run to ingest metadata from. If not specified, we'll default to the latest run. | None | |
| sql_parser_use_external_process [✅] | boolean | When enabled, the SQL parser will run in an isolated, separate process. This can affect processing time but can protect against the SQL parser's memory leaks. | None | |
| strip_user_ids_from_email [✅] | boolean | Whether or not to strip the email id while adding owners using dbt meta actions. | None | |
| tag_prefix [✅] | string | Prefix added to tags during ingestion. | dbt: | |
| target_platform [✅] | string | The platform that dbt is loading onto (e.g. bigquery / redshift / postgres etc.). | None | |
| target_platform_instance [✅] | string | The platform instance for the platform that dbt is operating on. Use this if you have multiple instances of the same platform (e.g. redshift) and need to distinguish between them. | None | |
| token [✅] | string | The API token to use to authenticate with dbt Cloud. | None | |
| use_identifiers [✅] | boolean | Use model identifier instead of model name if defined (if not, default to model name). | None | |
| write_semantics [✅] | string | Whether the new tags, terms and owners to be added will override the existing ones added only by this source or not. Value for this config can be "PATCH" or "OVERRIDE". | PATCH | |
| env [✅] | string | Environment to use in namespace when constructing URNs. | PROD | |
| entities_enabled [✅] | DBTEntitiesEnabled | Controls for enabling / disabling metadata emission for different dbt entities (models, test definitions, test results, etc.) | {'models': 'YES', 'sources': 'YES', 'seeds': 'YES', 'snapshots': 'YES', 'test_definitions': 'YES', 'test_results': 'YES'} | |
| entities_enabled.models [❓ (required if entities_enabled is set)] | Enum | Emit metadata for dbt models when set to Yes or Only. | YES | |
| entities_enabled.seeds [❓ (required if entities_enabled is set)] | Enum | Emit metadata for dbt seeds when set to Yes or Only. | YES | |
| entities_enabled.snapshots [❓ (required if entities_enabled is set)] | Enum | Emit metadata for dbt snapshots when set to Yes or Only. | YES | |
| entities_enabled.sources [❓ (required if entities_enabled is set)] | Enum | Emit metadata for dbt sources when set to Yes or Only. | YES | |
| entities_enabled.test_definitions [❓ (required if entities_enabled is set)] | Enum | Emit metadata for test definitions when set to Yes or Only. | YES | |
| entities_enabled.test_results [❓ (required if entities_enabled is set)] | Enum | Emit metadata for test results when set to Yes or Only. | YES | |
| node_name_pattern [✅] | AllowDenyPattern | Regex patterns for dbt model names to filter in ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True} | |
| node_name_pattern.allow [❓ (required if node_name_pattern is set)] | array(string) | | None | |
| node_name_pattern.deny [❓ (required if node_name_pattern is set)] | array(string) | | None | |
| node_name_pattern.ignoreCase [❓ (required if node_name_pattern is set)] | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
| stateful_ingestion [✅] | StatefulStaleMetadataRemovalConfig | DBT Stateful Ingestion Config. | None | |
| stateful_ingestion.enabled [❓ (required if stateful_ingestion is set)] | boolean | The type of the ingestion state provider registered with datahub. | None | |
| stateful_ingestion.ignore_new_state [❓ (required if stateful_ingestion is set)] | boolean | If set to True, ignores the current checkpoint state. | None | |
| stateful_ingestion.ignore_old_state [❓ (required if stateful_ingestion is set)] | boolean | If set to True, ignores the previous checkpoint state. | None | |
| stateful_ingestion.remove_stale_metadata [❓ (required if stateful_ingestion is set)] | boolean | Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled. | True | |
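
The meta mapping and filtering options described earlier for the dbt module work the same way here. A hypothetical dbt-cloud recipe combining them (the IDs reuse the example URL above; the name patterns and meta key are placeholders):

source:
  type: "dbt-cloud"
  config:
    token: ${DBT_CLOUD_TOKEN}
    account_id: 107298
    project_id: 175705
    job_id: 148094
    target_platform: "snowflake"

    # Pull owners from the business_owner key in dbt meta (hypothetical key)
    meta_mapping:
      business_owner:
        match: ".*"
        operation: "add_owner"
        config:
          owner_type: user

    # Only ingest models matching these name patterns (hypothetical)
    node_name_pattern:
      allow:
        - "fct_.*"
        - "dim_.*"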

Code Coordinates

  • Class Name: datahub.ingestion.source.dbt.dbt_cloud.DBTCloudSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for dbt, feel free to ping us on our Slack