# User Defined Function

## Definition

Asserts that the given user-defined function (written as a Scala script) evaluates to true over the field's value.
## In-Depth Overview

The **User Defined Function** rule applies a custom Scala function to a specified field, allowing for highly customizable and flexible validation based on user-defined logic.
## Field Scope

**Single**: The rule evaluates a single specified field.
## Accepted Types

| Type | |
|---|---|
| String | ✓ |
## General Properties

| Name | Supported |
|---|---|
| **Filter** <br> Allows the targeting of specific data based on conditions | ✓ |
| **Coverage Customization** <br> Allows adjusting the percentage of records that must meet the rule's conditions | ✓ |
The filter allows you to define a subset of data upon which the rule will operate. It requires a valid Spark SQL expression that determines which rows of the DataFrame the rule should evaluate. Because the expression is applied directly to the Spark DataFrame, traditional SQL constructs like WHERE clauses are not supported.
## Examples

### Direct Conditions

Simply specify the condition you want to be met.
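For instance, a bare Spark SQL boolean expression (the column name is taken from the sample data later in this page):

```sql
L_LINENUMBER > 0
```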
### Combining Conditions

Combine multiple conditions using logical operators like AND and OR.
**Correct usage**
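For example, a bare expression joining two conditions (column names are from the sample data below; the thresholds are illustrative):

```sql
L_ORDERKEY > 0 AND L_LINENUMBER <= 7
```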
**Incorrect usage**
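Prefixing the expression with a WHERE clause, as in ordinary SQL, is not accepted:

```sql
WHERE L_ORDERKEY > 0 AND L_LINENUMBER <= 7
```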
### Utilizing Functions

Leverage Spark SQL functions to refine and enhance your conditions.
**Correct usage**
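For example, using Spark SQL's `get_json_object` function against the JSON column from the sample data below (the JSON path is illustrative):

```sql
get_json_object(L_ATTRIBUTES, '$.color') IS NOT NULL
```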
**Incorrect usage**
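A full SELECT statement is not a valid filter, since the filter must be an expression rather than a query:

```sql
SELECT * FROM LINEITEM WHERE get_json_object(L_ATTRIBUTES, '$.color') IS NOT NULL
```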
### Using scan-time variables

To refer to the current dataframe being analyzed, use the reserved dynamic variable `{{ _qualytics_self }}`.
**Correct usage**
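For example, comparing each row against an aggregate computed over the same dataframe (the aggregate and column choice are illustrative):

```sql
L_LINENUMBER >= (SELECT MIN(L_LINENUMBER) FROM {{ _qualytics_self }})
```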
**Incorrect usage**
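Referencing another container directly in a subquery is not supported and will result in an error (`ORDERS` here stands for a hypothetical separate container):

```sql
L_ORDERKEY IN (SELECT O_ORDERKEY FROM ORDERS)
```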
While subqueries can be useful, their application within filters in our context has limitations. For example, directly referencing other containers or the broader target container in such subqueries is not supported. Attempting to do so will result in an error.
### Important Note on {{ _qualytics_self }}

The `{{ _qualytics_self }}` keyword refers to the dataframe that is currently under examination. In a full scan, this variable represents the entire target container. During incremental scans, however, it reflects only a subset of the target container, capturing just the incremental data. In such scenarios, `{{ _qualytics_self }}` may not encompass all entries from the target container.
## Specific Properties

Implements a user-defined Scala script.

| Name | Description |
|---|---|
| Scala Script | The custom Scala script used to evaluate each record. |
**Note**

The Scala script must contain a function that returns a Boolean value, determining the validity of the record based on the field's value.

Below is a scaffold to guide the creation of the Scala function:
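A minimal sketch of such a scaffold (the parameter name and the non-empty check are placeholders, not required names):

```scala
// Scaffold: an anonymous function that receives the field's value
// and returns true when the record should be considered valid.
(fieldValue: String) => {
  // Replace this placeholder check with custom validation logic.
  fieldValue != null && fieldValue.trim.nonEmpty
}
```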
## Anomaly Types

| Type | Supported |
|---|---|
| **Record** <br> Flag inconsistencies at the row level | ✓ |
| **Shape** <br> Flag inconsistencies in the overall patterns and distributions of a field | ✓ |
## Example

**Objective**: Validate that each record in the LINEITEM table has a well-structured JSON in the L_ATTRIBUTES column by ensuring the presence of essential keys: "color", "weight", and "dimensions".

### Sample Data
L_ORDERKEY | L_LINENUMBER | L_ATTRIBUTES |
---|---|---|
1 | 1 | {"color": "red", "weight": 15, "dimensions": "10x20x15"} |
2 | 2 | {"color": "blue", "weight": 20} |
3 | 1 | {"color": "green", "dimensions": "5x5x5"} |
4 | 3 | {"weight": 10, "dimensions": "20x20x20"} |
### Inputs

**Scala Script**

```scala
(lAttributes: String) => {
  import play.api.libs.json._

  try {
    val json = Json.parse(lAttributes)

    // Define the keys we expect to find in the JSON
    val expectedKeys = List("color", "weight", "dimensions")

    // Check if the expected keys are present in the JSON
    expectedKeys.forall(key => (json \ key).toOption.isDefined)
  } catch {
    case e: Exception => false // Return false if parsing fails
  }
}
```
```json
{
    "description": "Validate that each record in the LINEITEM table has a well-structured JSON in the L_ATTRIBUTES column by ensuring the presence of essential keys: \"color\", \"weight\", and \"dimensions\"",
    "coverage": 1,
    "properties": {"assertion":"(lAttributes: String) => {\n import play.api.libs.json._\n\n try {\n val json = Json.parse(lAttributes)\n \n // Define the keys we expect to find in the JSON\n val expectedKeys = List(\"color\", \"weight\", \"dimensions\")\n \n // Check if the expected keys are present in the JSON\n expectedKeys.forall(key => (json \\ key).toOption.isDefined)\n } catch {\n case e: Exception => false // Return false if parsing fails\n }\n }"},
    "tags": [],
    "fields": ["L_ATTRIBUTES"],
    "additional_metadata": {"key 1": "value 1", "key 2": "value 2"},
    "rule": "userDefinedFunction",
    "container_id": {container_id},
    "template_id": {template_id},
    "filter": "1=1"
}
```
### Anomaly Explanation

In the sample data above, the entries with `L_ORDERKEY` 2, 3, and 4 do not satisfy the rule because each lacks at least one of the essential keys ("color", "weight", "dimensions") in the L_ATTRIBUTES column.
```mermaid
graph TD
    A[Start] --> B[Retrieve L_ATTRIBUTES]
    B --> C{Does L_ATTRIBUTES contain all essential keys?}
    C -->|Yes| D[Move to Next Record/End]
    C -->|No| E[Mark as Anomalous]
    E --> D
```
## Potential Violation Messages

**Record Anomaly**

The `L_ATTRIBUTES` value of `{"color": "blue", "weight": 20}` does not evaluate true as a parameter to the given UDF.

**Shape Anomaly**

In `L_ATTRIBUTES`, 75.000% of 4 filtered records (3) do not evaluate true as a parameter to the given UDF.