Unique
Definition
Asserts that every value held by a field appears only once. If multiple fields are specified, every combination of their values must appear only once.
Field Scope
Multi: The rule evaluates multiple specified fields.
Accepted Types
| Type | |
|---|---|
| Date | ✓ |
| Timestamp | ✓ |
| Integral | ✓ |
| Fractional | ✓ |
| String | ✓ |
| Boolean | ✓ |
General Properties
| Name | Supported |
|---|---|
| **Filter** <br> Allows the targeting of specific data based on conditions | ✓ |
| **Coverage Customization** <br> Allows adjusting the percentage of records that must meet the rule's conditions | ✓ |
The filter allows you to define a subset of data upon which the rule will operate.
It requires a valid Spark SQL expression that determines which rows of the DataFrame the rule should consider. Because the expression is applied directly to the Spark DataFrame, traditional SQL constructs such as a WHERE clause are not supported.
Examples
Direct Conditions
Simply specify the condition you want to be met.
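For instance, a direct condition is just a bare boolean expression. The sketch below is illustrative and uses the CUSTOMER columns from the sample data shown later on this page.

```sql
-- Restrict the rule to rows that have an address value.
-- The expression is passed as-is; no SELECT or WHERE wrapper is needed.
C_ADDRESS IS NOT NULL
```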
Combining Conditions
Combine multiple conditions using logical operators like `AND` and `OR`. A correct filter remains a single boolean expression; wrapping it in SQL clauses such as `WHERE` is incorrect.
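The contrast below is an illustrative sketch using the sample CUSTOMER columns:

```sql
-- Correct: a single boolean expression combining conditions with AND / OR
C_NAME IS NOT NULL AND (C_CUSTKEY > 0 OR C_ADDRESS LIKE '%St')

-- Incorrect: a WHERE clause (or any other full SQL construct) is rejected
-- WHERE C_NAME IS NOT NULL AND C_CUSTKEY > 0
```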
Utilizing Functions
Leverage Spark SQL functions to refine and enhance your conditions.
A correct filter applies the function inside the boolean expression; writing a standalone `SELECT` statement is incorrect.
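As an illustrative sketch with the built-in Spark SQL functions `upper` and `length`:

```sql
-- Correct: built-in functions used inside the boolean expression
upper(C_NAME) LIKE 'CUSTOMER%' AND length(C_ADDRESS) > 5

-- Incorrect: a standalone query is not a filter expression
-- SELECT * FROM CUSTOMER WHERE upper(C_NAME) LIKE 'CUSTOMER%'
```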
Using scan-time variables
To refer to the current dataframe being analyzed, use the reserved dynamic variable `{{ _qualytics_self }}`.
A correct filter references `{{ _qualytics_self }}` within the expression, for example inside a subquery; referencing other tables or containers is incorrect (see the sketch after the note below).
While subqueries can be useful, their application within filters in our context has limitations. For example, directly referencing other containers or the broader target container in such subqueries is not supported. Attempting to do so will result in an error.
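A hedged sketch of both forms; the aggregate comparison is purely illustrative, and `SOME_OTHER_TABLE` is a hypothetical name used only to show what is not allowed.

```sql
-- Correct: the dataframe under scan is referenced via the reserved variable
C_CUSTKEY <= (SELECT MAX(C_CUSTKEY) FROM {{ _qualytics_self }})

-- Incorrect: referencing another container or table in the subquery
-- (SOME_OTHER_TABLE is a hypothetical name)
-- C_CUSTKEY IN (SELECT C_CUSTKEY FROM SOME_OTHER_TABLE)
```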
Important Note on `{{ _qualytics_self }}`
The `{{ _qualytics_self }}` keyword refers to the dataframe that is currently under examination. In a full scan, this variable represents the entire target container. During incremental scans, however, it reflects only a subset of the target container, capturing just the incremental data. It is crucial to recognize that in such scenarios, a filter using `{{ _qualytics_self }}` may not encompass all entries from the target container.
Anomaly Types
| Type | Supported |
|---|---|
| **Record** <br> Flag inconsistencies at the row level | ✗ |
| **Shape** <br> Flag inconsistencies in the overall patterns and distributions of a field | ✓ |
Example
Objective: Ensure that each combination of C_NAME and C_ADDRESS in the CUSTOMER table is unique.
Sample Data
C_CUSTKEY | C_NAME | C_ADDRESS |
---|---|---|
1 | Customer_A | 123 Main St |
2 | Customer_B | 456 Oak Ave |
3 | Customer_A | 123 Main St |
4 | Customer_C | 789 Elm St |
```json
{
    "description": "Ensure that each combination of C_NAME and C_ADDRESS in the CUSTOMER table is unique",
    "coverage": 1,
    "properties": null,
    "tags": [],
    "fields": ["C_NAME", "C_ADDRESS"],
    "additional_metadata": {"key 1": "value 1", "key 2": "value 2"},
    "rule": "unique",
    "container_id": {container_id},
    "template_id": {template_id},
    "filter": "1=1"
}
```
Anomaly Explanation
In the sample data above, the entries with `C_CUSTKEY` 1 and 3 have the same `C_NAME` and `C_ADDRESS`, which violates the rule because this combination of values must be unique.
```mermaid
graph TD
    A[Start] --> B[Retrieve C_NAME and C_ADDRESS]
    B --> C{Is the combination unique?}
    C -->|Yes| D[Move to Next Record/End]
    C -->|No| E[Mark as Anomalous]
    E --> D
```
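The same idea can be sketched as a plain Spark SQL aggregation over the sample CUSTOMER data; this is only an illustration of the detection logic, not necessarily how the engine implements the check.

```sql
-- Find (C_NAME, C_ADDRESS) combinations that occur more than once.
-- Against the sample data this returns ('Customer_A', '123 Main St') with a
-- count of 2, i.e. one surplus record out of 4 filtered records (25.000%).
SELECT C_NAME, C_ADDRESS, COUNT(*) AS occurrences
FROM CUSTOMER
GROUP BY C_NAME, C_ADDRESS
HAVING COUNT(*) > 1
```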
Potential Violation Messages
Shape Anomaly
In `C_NAME` and `C_ADDRESS`, 25.000% of 4 filtered records (1) are not unique.