Metric
Definition
Records the value of the selected field during each scan operation and asserts limits based upon an expected change or absolute range (inclusive).
In-Depth Overview
The Metric rule is designed to monitor the values of a selected field over time. It is particularly useful in a time-series context where values are expected to evolve within certain bounds or limits. This rule allows for tracking absolute values or changes, ensuring they remain within predefined thresholds.
Field Scope
Single: The rule evaluates a single specified field.
Accepted Types
| Type | Supported |
|---|---|
| Integral | ✓ |
| Fractional | ✓ |
General Properties
| Name | Supported |
|---|---|
| Filter: Allows the targeting of specific data based on conditions | ✓ |
| Coverage Customization: Allows adjusting the percentage of records that must meet the rule's conditions | ✓ |
The filter allows you to define a subset of data upon which the rule will operate.
It requires a valid Spark SQL expression that specifies which rows in the DataFrame the rule should consider. Because the expression is applied directly to the Spark DataFrame, traditional SQL constructs like WHERE clauses are not supported.
Examples
Direct Conditions
Simply specify the condition you want to be met.
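For illustration, a direct condition on the O_TOTALPRICE field used in the example later on this page might look like the following (the threshold value is hypothetical):

```sql
O_TOTALPRICE > 1000
```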
Combining Conditions
Combine multiple conditions using logical operators like AND and OR.
Correct usage
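A sketch combining two conditions with AND (O_ORDERSTATUS is a hypothetical column used only for illustration):

```sql
O_TOTALPRICE > 1000 AND O_ORDERSTATUS = 'O'
```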
Incorrect usage
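The same logic written as a WHERE clause is rejected, since the filter expression is applied directly to the DataFrame rather than embedded in a query:

```sql
WHERE O_TOTALPRICE > 1000 AND O_ORDERSTATUS = 'O'
```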
Utilizing Functions
Leverage Spark SQL functions to refine and enhance your conditions.
Correct usage
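For example, a filter using a built-in Spark SQL function (O_ORDERDATE is a hypothetical column used only for illustration):

```sql
year(O_ORDERDATE) = 1995
```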
Incorrect usage
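Wrapping the expression in a full SELECT statement is not accepted; only the bare condition is expected:

```sql
SELECT * FROM orders WHERE year(O_ORDERDATE) = 1995
```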
Using scan-time variables
To refer to the current dataframe being analyzed, use the reserved dynamic variable {{ _qualytics_self }}.
Correct usage
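A hedged sketch that compares each row against an aggregate computed over the dataframe currently being scanned:

```sql
O_TOTALPRICE > (SELECT avg(O_TOTALPRICE) FROM {{ _qualytics_self }})
```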
Incorrect usage
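By contrast, naming another container or table directly inside the subquery is not supported (other_orders is a hypothetical name):

```sql
O_TOTALPRICE > (SELECT avg(O_TOTALPRICE) FROM other_orders)
```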
While subqueries can be useful, their application within filters in our context has limitations. For example, directly referencing other containers or the broader target container in such subqueries is not supported. Attempting to do so will result in an error.
Important Note on {{ _qualytics_self }}
The {{ _qualytics_self }} keyword refers to the dataframe that's currently under examination. In the context of a full scan, this variable represents the entire target container. However, during incremental scans, it only reflects a subset of the target container, capturing just the incremental data. It's crucial to recognize that in such scenarios, using {{ _qualytics_self }} may not encompass all entries from the target container.
Specific Properties
Determines the evaluation method and allowable limits for field value comparisons over time.
| Name | Description |
|---|---|
| Comparison | Specifies the type of comparison: Absolute Change, Absolute Value, or Percentage Change. |
| Min Value | Indicates the minimum allowable increase in value. Use a negative value to represent an allowable decrease. |
| Max Value | Indicates the maximum allowable increase in value. |
Details
Comparison Options
Absolute Change
The Absolute Change comparison works by comparing the change in a numeric field's value against pre-set limits (Min / Max). If the field's value changes by an amount outside these limits since the last relevant scan, an anomaly is identified.
Illustration
Any record with a value change smaller than 30 or greater than 70 compared to the last scan should be flagged as anomalous
Thresholds: Min Change = 30, Max Change = 70
| Scan | Previous Value | Current Value | Absolute Change | Anomaly Detected |
|---|---|---|---|---|
| #1 | - | 100 | - | No |
| #2 | 100 | 150 | 50 | No |
| #3 | 150 | 220 | 70 | No |
| #4 | 220 | 300 | 80 | Yes |
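The same check can be sketched in SQL over a hypothetical scan_history table that stores one value per scan; this is illustrative only, not how the platform evaluates the rule internally:

```sql
-- Flag changes outside the 30-70 window relative to the previous scan (names are hypothetical)
with scans_with_previous as (
    select
        scan_id,
        metric_value,
        lag(metric_value) over (order by scan_id) as previous_value
    from scan_history
)
select scan_id, previous_value, metric_value, metric_value - previous_value as absolute_change
from scans_with_previous
where previous_value is not null
  and (metric_value - previous_value < 30 or metric_value - previous_value > 70);
```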
Absolute Value
The Absolute Value comparison works by checking a numeric field's value against a pre-set range between the Min and Max values. If the field's value falls outside this range during a scan, an anomaly is identified.
Illustration
The value of the record in each scan should be within 100 and 300 to be considered normal
Thresholds: Min Value = 100, Max Value = 300
| Scan | Current Value | Anomaly Detected |
|---|---|---|
| #1 | 150 | No |
| #2 | 90 | Yes |
| #3 | 250 | No |
| #4 | 310 | Yes |
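For comparison, an Absolute Value check amounts to a plain range test on each scanned value (again sketched over a hypothetical scan_history table):

```sql
-- Flag any scanned value outside the inclusive 100-300 range (names are hypothetical)
select scan_id, metric_value
from scan_history
where metric_value < 100 or metric_value > 300;
```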
Percentage Change
The Percentage Change comparison operates by tracking changes in a numeric field's value relative to its previous value. If the change falls outside the predefined percentage (%) limits since the last relevant scan, an anomaly is generated.
Illustration
An anomaly is identified if the record's value decreases by more than 20% or increases by more than 50% compared to the last scan.
Thresholds: Min Percentage Change = -20%, Max Percentage Change = 50%
Percentage Change Formula: ( (current_value - previous_value) / previous_value ) * 100
| Scan | Previous Value | Current Value | Percentage Change | Anomaly Detected |
|---|---|---|---|---|
| #1 | - | 100 | - | No |
| #2 | 100 | 150 | 50% | No |
| #3 | 150 | 120 | -20% | No |
| #4 | 120 | 65 | -45.83% | Yes |
| #5 | 65 | 110 | 69.23% | Yes |
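Applying the formula above in SQL over the same hypothetical scan_history table gives a sketch like this:

```sql
-- Flag changes below -20% or above 50% relative to the previous scan (names are hypothetical)
with scans_with_previous as (
    select
        scan_id,
        metric_value,
        lag(metric_value) over (order by scan_id) as previous_value
    from scan_history
)
select scan_id, previous_value, metric_value,
       ((metric_value - previous_value) / previous_value) * 100 as percentage_change
from scans_with_previous
where previous_value is not null
  and (((metric_value - previous_value) / previous_value) * 100 < -20
    or ((metric_value - previous_value) / previous_value) * 100 > 50);
```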
Thresholds
At least one of the Min or Max values must be specified; providing both is optional. These values determine the acceptable range or limit of change in the field's value.
Min Value
- Represents the minimum allowable increase in the field's value.
- A negative Min Value signifies an allowable decrease, setting how far the field's value may drop and still be considered valid.
Max Value
- Indicates the maximum allowable increase in the field’s value, setting an upper limit for the value's acceptable growth or change.
Anomaly Types
| Type | Supported |
|---|---|
| Record: Flag inconsistencies at the row level | ✓ |
| Shape: Flag inconsistencies in the overall patterns and distributions of a field | |
Example
Objective: Ensure that the total price in the ORDERS table does not fluctuate beyond a predefined percentage limit between scans.
Thresholds: Min Percentage Change = -30%, Max Percentage Change = 30%
Sample Scan History
Scan | O_ORDERKEY | Previous O_TOTALPRICE | Current O_TOTALPRICE | Percentage Change | Anomaly Detected |
---|---|---|---|---|---|
#1 | 1 | - | 100 | - | No |
#2 | 1 | 100 | 110 | 10% | No |
#3 | 1 | 110 | 200 | 81.8% | Yes |
#4 | 1 | 200 | 105 | -47.5% | Yes |
```json
{
    "description": "Ensure that the total price in the ORDERS table does not fluctuate beyond a predefined percentage limit between scans",
    "coverage": 1,
    "properties": {
        "comparison": "Percentage Change",
        "min": -0.3,
        "max": 0.3
    },
    "tags": [],
    "fields": ["O_TOTALPRICE"],
    "additional_metadata": {"key 1": "value 1", "key 2": "value 2"},
    "rule": "metric",
    "container_id": {container_id},
    "template_id": {template_id},
    "filter": "1=1"
}
```
Anomaly Explanation
In the sample scan history above, anomalies are identified in scans #3 and #4. The O_TOTALPRICE values in these scans fall outside the declared percentage change limits of -30% and 30%, indicating that something unusual might be happening and further investigation is needed.
```mermaid
graph TD
    A[Start] --> B[Retrieve O_TOTALPRICE]
    B --> C{Is Percentage Change in O_TOTALPRICE within -30% and 30%?}
    C -->|Yes| D[End]
    C -->|No| E[Mark as Anomalous]
    E --> D
```
```sql
-- An illustrative SQL query demonstrating the rule applied to example dataset(s)
with orders_with_previous as (
    select
        o_orderkey,
        o_totalprice,
        lag(o_totalprice) over (order by o_orderkey) as previous_o_totalprice
    from
        orders
)
select o_orderkey, o_totalprice, previous_o_totalprice
from orders_with_previous
where previous_o_totalprice is not null
  and abs((o_totalprice - previous_o_totalprice) / previous_o_totalprice) * 100 > 30;
```
Potential Violation Messages
Record Anomaly (Percentage Change)
The percentage change of O_TOTALPRICE from '110' to '200' falls outside the declared limits
Record Anomaly (Absolute Change)
using hypothetical numbers
The absolute change of O_TOTALPRICE from '150' to '300' falls outside the declared limits
Record Anomaly (Absolute Value)
using hypothetical numbers
The value for O_TOTALPRICE of '50' is not between the declared limits