Use Cases for Data Engineers
This page describes concrete scenarios where data engineering teams use Promote to scale data quality across the data stack. Each scenario covers the situation, the entities being promoted, the workflow, and the operational outcome.
Onboarding a New Tenant in a Multi-Tenant Platform
Context: Your platform hosts a separate schema (or database) per customer, each with identical table structures. When a new customer is provisioned, every quality check and computed field that exists on the "template" tenant must be replicated to the new tenant.
Without Promote: A data engineer manually recreates dozens of checks and computed fields for each onboarding — error-prone, time-consuming, and a source of drift over time.
With Promote:
- Run the tenant provisioning pipeline that creates the new schema and tables.
- From a template container, run Promote > Quality Checks to the new tenant's container. Filter by a `tenant-template` tag to select only the baseline rules.
- Run Promote > Computed Fields to replicate normalization and derived field definitions.
- When the template changes later, re-promote — Promote finds each tenant's existing entities and updates them in place instead of creating duplicates.
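Where tenant provisioning is already a pipeline, the promotion step can be scripted too. A minimal sketch in Python; the endpoint path and payload field names below are assumptions for illustration, so check the Qualytics API reference for the actual contract:

```python
import json
from urllib.request import Request

QUALYTICS_API = "https://your-instance.qualytics.io/api"  # hypothetical base URL


def build_promotion_request(source_container: int, dest_container: int,
                            tags: list[str], as_draft: bool = False) -> Request:
    """Build a (hypothetical) promote-quality-checks request.

    The path and field names are assumptions for illustration; consult
    the actual Qualytics API documentation for the real contract.
    """
    payload = {
        "source_container_id": source_container,
        "destination_container_id": dest_container,
        "tag_filter": tags,          # e.g. ["tenant-template"]
        "promote_as_draft": as_draft,
    }
    return Request(
        f"{QUALYTICS_API}/quality-checks/promote",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# A tenant-provisioning pipeline would build (and send) one of these per new tenant:
req = build_promotion_request(101, 202, tags=["tenant-template"])
```

Calling this once per tenant at the end of provisioning keeps onboarding and quality baseline setup in the same pipeline.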
Outcome: Tenant onboarding ships a data quality baseline by default. No drift between tenants, no manual replication.
Dev → Staging → Prod Rollout
Context: Quality rules are authored and refined in a development environment. Before they go live, the team needs a controlled rollout to staging, then production — with explicit review gates at each step.
With Promote:
- Author or update checks in the development container.
- Tag the batch that is ready for promotion (e.g., `release-2026-Q2`).
- Run Promote > Quality Checks to the staging container with Promote As Draft enabled — checks land inactive, ready for validation.
- Validate the check definitions and field bindings on staging; activate them and run a scan to confirm behavior.
- Re-promote from staging (or directly from dev) to production. Land as Draft again, then activate after a final review.
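The Draft gate above can be viewed as a small state machine: promoted checks arrive inactive, and only an explicit activation after review lets them run. A schematic sketch of that lifecycle, not of Qualytics internals:

```python
from dataclasses import dataclass


@dataclass
class PromotedCheck:
    """Schematic model of a check promoted with Promote As Draft enabled."""
    name: str
    status: str = "draft"      # lands inactive in the destination container
    validated: bool = False

    def validate(self) -> None:
        # Stand-in for reviewing definitions and field bindings on staging.
        self.validated = True

    def activate(self) -> None:
        if not self.validated:
            raise RuntimeError(f"{self.name}: cannot activate an unvalidated draft")
        self.status = "active"


check = PromotedCheck("orders_not_null")
check.validate()   # the review gate
check.activate()   # only now does the check run against data
```

The same gate applies twice in the rollout: once for staging, once for production.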
Outcome: Changes flow through environments with explicit checkpoints. Zero risk of accidentally running unvalidated checks against production data.
Migrating to a New Data Warehouse
Context: The team is migrating from one JDBC warehouse to another — warehouse consolidation, vendor change, or region migration. Every computed table on the legacy warehouse must exist on the new one with identical SQL definitions and tracking settings (volumetric, freshness).
With Promote:
- Provision the new warehouse as a Qualytics datastore.
- From the legacy datastore, run Promote > Computed Tables. Select all tables (or filter by tag for a phased migration).
- Choose the new JDBC datastore as the destination.
- Qualytics replicates the SQL definitions and runs a full profile on each promoted table.
- Validate that the profile metrics on the new warehouse match the legacy environment.
- Once parity is confirmed, switch downstream consumers and decommission the legacy datastore.
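The parity validation step can be automated by diffing profile metrics pulled from both warehouses. A sketch with invented metric names; the real values would come from each datastore's profile results:

```python
def metrics_match(legacy: dict[str, float], new: dict[str, float],
                  rel_tol: float = 0.001) -> list[str]:
    """Return the metric names that diverge beyond rel_tol (empty list = parity)."""
    diverged = []
    for name, old_val in legacy.items():
        new_val = new.get(name)
        if new_val is None or abs(new_val - old_val) > rel_tol * max(abs(old_val), 1):
            diverged.append(name)
    return diverged


# Hypothetical metric names; substitute whatever your profiles actually report.
legacy_profile = {"row_count": 1_204_330, "null_ratio.customer_id": 0.0}
new_profile = {"row_count": 1_204_330, "null_ratio.customer_id": 0.0}
assert metrics_match(legacy_profile, new_profile) == []
```

An empty divergence list is the signal to proceed with cutover; anything else points at the exact tables to investigate.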
Outcome: Computed table migration in hours instead of weeks of SQL copy/paste. Definitions stay identical between source and target during cutover.
Replicating Quality Rules Across Schema Versions
Context: A breaking schema change (e.g., `orders` → `orders_v2`) introduces a new table with mostly compatible structure. The data quality coverage on `orders` should carry over to `orders_v2`.
With Promote:
- Confirm that `orders_v2` has fields with the same names as those referenced by the existing checks and computed fields.
- Run Promote > Quality Checks from `orders` to `orders_v2`.
- Run Promote > Computed Fields to replicate normalization logic.
- For fields renamed in v2 (e.g., `customer_id` → `customer_uuid`), the affected checks land as Draft with a warning — fix the field references manually, then activate.
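The first step, confirming field-name parity, is easy to script before promoting. A sketch in plain Python; in practice the field sets would come from the catalog or information_schema:

```python
def preflight(check_fields: dict[str, set[str]],
              dest_fields: set[str]) -> tuple[list[str], list[str]]:
    """Split checks into (promotable, lands_as_draft) by field availability.

    check_fields maps each check name to the source fields it references.
    """
    ok, drafts = [], []
    for check, fields in sorted(check_fields.items()):
        (ok if fields <= dest_fields else drafts).append(check)
    return ok, drafts


# Hypothetical check inventory for the orders -> orders_v2 example:
checks = {
    "customer_id_not_null": {"customer_id"},   # renamed to customer_uuid in v2
    "order_total_positive": {"order_total"},
}
v2_fields = {"customer_uuid", "order_total", "order_date"}
promotable, drafts = preflight(checks, v2_fields)
```

Anything in the second list is exactly the set of checks you should expect to land as Draft with a warning.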
Outcome: Quality coverage stays continuous across schema migrations. Engineers focus on the fields that actually changed, not on rebuilding hundreds of identical checks.
Cross-Region Replication for Compliance
Context: The same business dataset is replicated to multiple regions for data residency or compliance (e.g., `customers_us`, `customers_eu`, `customers_apac`). Every regional instance must enforce a common quality baseline, with selective region-specific extensions.
With Promote:
- Author the canonical quality checks on a reference container (e.g., `customers_us`).
- Run Promote > Quality Checks to `customers_eu` and `customers_apac`.
- When regulations require additional checks (e.g., GDPR-specific email validation), tag them `eu-only` and promote selectively to the EU container.
- Re-promote whenever the canonical rules change — the existing checks in each region are found and updated in place instead of being duplicated.
Outcome: Uniform baseline across regions, with tagged carve-outs for region-specific compliance rules.
Standardizing Transformations Across DFS Datastores
Context: Source files arrive in multiple object stores (e.g., production S3, staging S3, archive on Azure Data Lake). Each environment runs the same set of computed files that transform raw inputs into curated datasets, but the source file naming patterns differ per environment.
With Promote:
- Author computed files in the production DFS datastore.
- Run Promote > Computed Files to the staging datastore. Use Auto-match to map source file patterns automatically — review and adjust suggestions before promoting.
- Repeat for the Azure Data Lake environment.
- Qualytics runs a profile on each created computed file to validate the transformation executes correctly in the new environment.
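Auto-match's pattern mapping can be approximated with filename globs. A simplified illustration of the idea; the actual matcher is Qualytics' own and may behave differently:

```python
from fnmatch import fnmatch


def auto_match(source_patterns: list[str],
               dest_files: list[str]) -> dict[str, list[str]]:
    """Propose, per source pattern, the destination files it would cover."""
    return {p: [f for f in dest_files if fnmatch(f, p)] for p in source_patterns}


# Hypothetical file layout for a staging bucket:
suggestions = auto_match(
    ["raw/orders_*.csv", "raw/customers_*.csv"],
    ["raw/orders_2026-01.csv", "raw/customers_2026-01.csv", "raw/events_2026-01.json"],
)
# Review the suggestions and adjust them before promoting, as in step 2 above.
```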
Outcome: Same business logic everywhere, even when underlying storage technologies and naming conventions differ.
Medallion Architecture (Bronze → Silver → Gold)
Context: Your data lake follows a layered architecture — Bronze (raw), Silver (cleaned/conformed), Gold (business-ready). Quality checks defined on Silver tables (e.g., not-null on key fields, referential integrity) should carry over to the corresponding Gold tables that aggregate from them.
With Promote:
- Define authoritative quality checks on Silver tables.
- Run Promote > Quality Checks to the Gold tables that share matching field names (or use computed fields to expose them).
- Run Promote > Computed Fields to replicate any normalization logic that needs to live in Gold too.
- Re-promote whenever Silver definitions evolve — Gold stays in sync without manual edits.
Outcome: Quality guarantees propagate across the medallion stack. Downstream consumers of Gold get the same validation rigor enforced upstream, without rebuilding checks at each layer.
Disaster Recovery / Failover Synchronization
Context: A secondary environment exists for disaster recovery — same schemas, replicated data via the database vendor's tooling. The DR environment must enforce the same quality rules as primary so failover doesn't degrade quality monitoring.
With Promote:
- Schedule a periodic re-promote from the primary container to the DR container (e.g., via the API after each release).
- Use the same selection (tag-based) so any new check authored in primary lands in DR automatically.
- Verify with the Activity page that the latest re-promote completed successfully.
- During a failover drill, confirm DR quality checks fire as expected against replicated data.
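Verification can go beyond the Activity page: after each scheduled re-promote, diff the check inventories of primary and DR. A sketch that assumes you can list check names per container:

```python
def parity_report(primary_checks: set[str],
                  dr_checks: set[str]) -> dict[str, set[str]]:
    """Report checks missing from DR and checks present only in DR."""
    return {
        "missing_in_dr": primary_checks - dr_checks,
        "extra_in_dr": dr_checks - primary_checks,
    }


# Hypothetical inventories; in practice these come from listing each container's checks.
report = parity_report(
    {"orders_not_null", "orders_fresh", "amount_positive"},
    {"orders_not_null", "orders_fresh"},
)
# A non-empty "missing_in_dr" means the last re-promote did not cover everything.
```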
Outcome: DR isn't just a data replica — it's a fully observable replica with parity in quality coverage. Failover is safe to execute without sacrificing data quality visibility.
Rolling Out a New Compliance Regulation
Context: A new regulation (GDPR, HIPAA, SOX, PCI-DSS) requires a specific set of quality checks across every table that holds regulated data (e.g., every container holding PII, PHI, financial transactions).
With Promote:
- Author the regulation-specific checks on a single reference container — e.g., one canonical `customers` table with `pii-not-null`, `pii-format`, and `pii-mask-applied` rules.
- Tag all of them with the regulation label (e.g., `gdpr-2026`).
- Run Promote > Quality Checks filtered by that tag to every container in scope.
- Track coverage via the Activity page — each promotion entry shows which containers received the rules.
- Re-promote whenever the regulation interpretation changes.
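Promoting to every container in scope is a fan-out: one promotion per container, each leaving its own audit entry. A schematic sketch with hypothetical container names; the promote call itself is elided:

```python
from datetime import datetime, timezone


def fan_out(tag: str, source: str, containers: list[str]) -> list[dict]:
    """Simulate one promotion (and one audit entry) per in-scope container."""
    audit = []
    for dest in containers:
        # A real implementation would invoke the promote operation here,
        # filtered by `tag`, and record the result it returns.
        audit.append({
            "tag": tag,
            "source": source,
            "destination": dest,
            "promoted_at": datetime.now(timezone.utc).isoformat(),
        })
    return audit


entries = fan_out("gdpr-2026", "customers", ["billing.accounts", "crm.contacts"])
```

The resulting per-container entries mirror what the Activity page records, which is what makes the rollout auditable.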
Outcome: Compliance rollout in days, not quarters. A single audit-trail entry per container proves where the regulated checks were applied and when.
Standardizing Reference / Dimension Tables
Context: Dimension tables (e.g., `dim_customer`, `dim_product`, `dim_date`) appear repeatedly across data marts and analytics layers. Each instance needs the same baseline quality — unique primary keys, referential integrity, expected enumerations on status columns.
With Promote:
- Establish a master dimension container with the full set of dimension-table checks (uniqueness, referential, enum constraints).
- For every new data mart that includes the same dimension table, run Promote > Quality Checks to copy the rules over.
- When the dimension model evolves (new status value, additional natural key), update the master and re-promote — every mart absorbs the change at once.
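The three baseline rules (unique keys, referential integrity, enum constraints) reduce to simple predicates over the dimension rows. A plain-Python illustration of what each rule asserts, not how Qualytics evaluates them:

```python
def unique_key(rows: list[dict], key: str) -> bool:
    """Primary-key uniqueness: no duplicate values in the key column."""
    vals = [r[key] for r in rows]
    return len(vals) == len(set(vals))


def referential(fact_rows: list[dict], fk: str,
                dim_rows: list[dict], pk: str) -> bool:
    """Referential integrity: every foreign key resolves to a dimension row."""
    dim_keys = {r[pk] for r in dim_rows}
    return all(r[fk] in dim_keys for r in fact_rows)


def enum_ok(rows: list[dict], col: str, allowed: set[str]) -> bool:
    """Enum constraint: the status column only takes expected values."""
    return all(r[col] in allowed for r in rows)


# Hypothetical sample rows:
dim_customer = [
    {"customer_id": 1, "status": "active"},
    {"customer_id": 2, "status": "churned"},
]
orders = [{"customer_id": 1}, {"customer_id": 2}]
```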
Outcome: Dimensional data has consistent quality guarantees across every mart and analytics workspace. New marts inherit the baseline by default rather than implementing it from scratch.
Tips for Reliable Promotion
- Tag aggressively: Tags like `release-2026-Q2`, `pii-checks`, or `tenant-template` let you promote logical groupings instead of hand-picking entities every time.
- Use Draft mode for quality checks in higher environments. Activate only after explicit review.
- Re-promote, don't reauthor: When the source changes, re-promote with the same destination. The existing entity is found and updated in place rather than being duplicated.
- Verify field-name parity at the destination before promoting computed fields or checks. Renamed fields are the most common cause of partial failures.
- Monitor the Activity page: It's the single pane for tracking every promotion across the platform.