Databricks

Adding and configuring a Databricks connection within Qualytics enables the platform to connect to your database and perform operations such as data discovery, visualization, reporting, cataloging, profiling, scanning, anomaly surveillance, and more.

This documentation provides a step-by-step guide on how to add Databricks as both a source and enrichment datastore in Qualytics. It covers the entire process, from initial connection setup to testing and finalizing the configuration.

By following these instructions, enterprises can ensure their Databricks environment is properly connected with Qualytics, unlocking the platform's potential to help you proactively manage your full data quality lifecycle.

Let's get started 🚀

Databricks Setup Guide

This guide provides a comprehensive walkthrough for setting up Databricks. It highlights the distinction between SQL Warehouses and All-Purpose Compute, the functionality of node pools, and the enhancements they offer.

Additionally, it details the process for attaching compute resources to node pools and explains the minimum requirements for effective operation.

Understanding SQL Warehouses and All-Purpose Compute

SQL Warehouses (Serverless)

SQL Warehouses (Serverless) in Databricks utilize serverless SQL endpoints for running SQL queries.

| REF | ATTRIBUTE | DESCRIPTION |
| --- | --- | --- |
| 1. | Cost-effectiveness | Serverless SQL endpoints allow you to pay only for the queries you execute, without the need to provision or manage dedicated infrastructure, making them more cost-effective for ad-hoc or sporadic queries. |
| 2. | Scalability | Serverless architectures automatically scale resources based on demand, ensuring optimal performance for varying workloads. |
| 3. | Simplified Management | With serverless SQL endpoints, you don't need to manage clusters or infrastructure, reducing operational overhead. |
| 4. | Minimum Requirements | The minimum requirements for using a serverless SQL Warehouse typically include access to a Databricks workspace and the permissions to create and run SQL queries. |

All-Purpose Compute

All-purpose compute in Databricks refers to clusters that are not optimized for specific tasks. While they offer flexibility, they may not provide the best performance or cost-effectiveness for certain workloads.

| REF | ATTRIBUTE | DESCRIPTION |
| --- | --- | --- |
| 1. | Slow Spin-up Time | All-purpose compute clusters may take longer to spin up compared to specialized clusters, resulting in delays before processing can begin. |
| 2. | Connection Timeouts | Due to longer spin-up times, there is a risk of connection timeouts, especially for applications or services that expect quick responses. |

Node Pool and Its Usage

A node pool in Databricks is a set of homogeneous virtual machines (VMs) within a cluster. It allows you to have a fixed set of instances dedicated to specific tasks, ensuring consistent performance and resource isolation.

| REF | ATTRIBUTE | DESCRIPTION |
| --- | --- | --- |
| 1. | Resource Isolation | Node pools provide resource isolation, allowing different workloads or applications to run without impacting each other's performance. |
| 2. | Optimized Performance | By dedicating specific nodes to particular tasks, you can optimize performance for those workloads. |
| 3. | Cost-effectiveness | Node pools can be more cost-effective than using all-purpose compute for certain workloads, as you can scale resources according to the specific requirements of each task. |

Improving All-Purpose Compute with Node Pools

To improve the performance of all-purpose compute using node pools, you can follow these steps:

| REF | ATTRIBUTE | DESCRIPTION |
| --- | --- | --- |
| 1. | Define Workload-Specific Node Pools | Identify the specific tasks or workloads that require optimized performance and create dedicated node pools for them. |
| 2. | Specify Minimum Requirements | Determine the minimum resources (such as CPU, memory, and disk) required for each workload and configure the node pools accordingly. |
| 3. | Monitor and Adjust | Continuously monitor the performance of your node pools and adjust resource allocations as needed to ensure optimal performance. |

Step 1: Configure details for Qualytics Node Pool.

configure-qualytics-node-pool

Step 2: Attach Compute details to the Node Pool.

attach-compute-with-node-pool
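If you prefer to script this setup instead of using the UI, the sketch below creates a node pool through the Databricks Instance Pools REST API (api/2.0/instance-pools/create). The workspace URL, token, pool name, and node type are illustrative placeholders; choose a node type available in your cloud provider.

import requests

# Illustrative placeholders -- substitute your workspace URL, a personal
# access token, and a node type available in your cloud provider.
WORKSPACE_URL = "https://dbc-a1b2c3d4-e5f6.cloud.databricks.com"
TOKEN = "dapiXXXXXXXXXXXXXXXX"

payload = {
    "instance_pool_name": "qualytics-pool",
    "node_type_id": "i3.xlarge",
    "min_idle_instances": 1,                      # warm instances reduce spin-up delays
    "max_capacity": 10,
    "idle_instance_autotermination_minutes": 15,  # release idle capacity to control cost
}

response = requests.post(
    f"{WORKSPACE_URL}/api/2.0/instance-pools/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
response.raise_for_status()
print(response.json()["instance_pool_id"])  # reference this pool ID when attaching compute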

Retrieve the Connection Details

This section explains how to retrieve the connection details that you need to connect to Databricks.

Credentials to Connect with Qualytics

To configure Databricks, you need the following credentials:

| REF | FIELDS | ACTIONS |
| --- | --- | --- |
| 1. | Host (Required) | Get the hostname from your Databricks account and add it to this field. |
| 2. | HTTP Path (Required) | Add the HTTP Path (web address) to fetch data from your Databricks account. |
| 3. | Catalog (Required) | Add a Catalog to fetch data structures and metadata from Databricks. |
| 4. | Database (Required) | Specify the database name to be accessed. |
| 5. | Personal Access Token (Required) | Generate a Personal Access Token from your Databricks account and add it for authentication. |
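For reference, these values typically take the following shape for a SQL Warehouse (all values below are illustrative placeholders, not real credentials):

Host: dbc-a1b2c3d4-e5f6.cloud.databricks.com
HTTP Path: /sql/1.0/warehouses/1234567890abcdef
Catalog: main
Database: default
Personal Access Token: dapiXXXXXXXXXXXXXXXX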

Get Connection Details for the SQL Warehouse

Follow the given steps to get the connection details for the SQL warehouse:

  1. Click on SQL Warehouses in the sidebar.
  2. Choose a warehouse to connect to.
  3. Navigate to the Connection Details tab.
  4. Copy the connection details.

connection-details-for-sql-warehouse

Get Connection Details for the Cluster

Follow the given steps to get the connection details for the cluster:

  1. Click on Compute in the sidebar.
  2. Choose a cluster to connect to.
  3. Navigate to the Advanced Options.
  4. Click on the JDBC/ODBC tab.
  5. Copy the connection details.

connection-details-for-the-cluster
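Note that for an all-purpose cluster, the HTTP Path shown on the JDBC/ODBC tab has a different shape than the SQL Warehouse path. The workspace and cluster IDs below are illustrative placeholders:

HTTP Path: sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh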

Get the Access Token

Step 1: In your Databricks workspace, click your Databricks username in the top bar, and then select User Settings from the drop-down menu.

user-settings

Note

Refer to the Databricks Official Docs to generate the Access Token.

Step 2: In the Settings page, select the Developer option in the User section.

developer-option

Step 3: In the Developer page, click on Manage in the Access Tokens section.

manage-access-token

Step 4: In the Access Tokens page, click on the Generate new token button.

generate-new-token

Step 5: A modal will appear where you can add a description and a lifetime (in days) for the token.

add-a-description

Step 6: After adding these details, click on Generate, and the token will be displayed.

generated-token

Warning

Before closing the modal window by clicking on the Done button, ensure the Personal Access Token is saved to a secure location.

Step 7: You can see the new token on the Access Tokens page.

new-token

You can also revoke a token on the Access Tokens page by clicking on the Revoke token button.

revoke-token
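Tokens can also be created programmatically. The sketch below uses the Databricks Token API (api/2.0/token/create); the workspace URL and the existing token used for authentication are placeholders, and the 90-day lifetime is only an example value.

import requests

# Placeholders -- an existing token (or other valid auth) is required to
# call the Token API.
WORKSPACE_URL = "https://dbc-a1b2c3d4-e5f6.cloud.databricks.com"
EXISTING_TOKEN = "dapiXXXXXXXXXXXXXXXX"

response = requests.post(
    f"{WORKSPACE_URL}/api/2.0/token/create",
    headers={"Authorization": f"Bearer {EXISTING_TOKEN}"},
    json={"comment": "qualytics-connection", "lifetime_seconds": 90 * 24 * 3600},
)
response.raise_for_status()
# Save this value securely -- it is only returned once.
print(response.json()["token_value"])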

Add a Source Datastore

A source datastore is a storage location used to connect to and access data from external sources. Databricks is an example of a source datastore, specifically a type of JDBC datastore that supports connectivity through the JDBC API. Configuring the JDBC datastore enables the Qualytics platform to access and perform operations on the data, thereby generating valuable insights.

Step 1: Log in to your Qualytics account and click on the Add Source Datastore button located at the top-right corner of the interface.

add-datastore

Step 2: A modal window, Add Datastore, will appear, providing you with options to connect a datastore.

select-a-connector

| REF. | FIELDS | ACTIONS |
| --- | --- | --- |
| 1. | Name (Required) | Specify the datastore name (this name will appear on the datastore cards). |
| 2. | Toggle Button | Toggle ON to create a new source datastore from scratch, or toggle OFF to reuse credentials from an existing connection. |
| 3. | Connector (Required) | Select Databricks from the dropdown list. |

Option I: Create a Source Datastore with a new Connection

If the toggle for Add new connection is turned on, you will be prompted to add and configure the source datastore from scratch, without using existing connection details.

Step 1: Select the Databricks connector from the dropdown list and add connection details such as host, HTTP path, catalog, database, and personal access token.

add-datastore-credentials

Step 2: The configuration form will expand, requesting credential details before establishing the connection.

add-datastore-credentials-explain

| REF. | FIELD | ACTIONS |
| --- | --- | --- |
| 1. | Host (Required) | Get the hostname from your Databricks account and add it to this field. |
| 2. | HTTP Path (Required) | Add the HTTP Path (web address) to fetch data from your Databricks account. |
| 3. | Personal Access Token (Required) | Generate a Personal Access Token from your Databricks account and add it for authentication. |
| 4. | Catalog (Required) | Add a Catalog to fetch data structures and metadata from Databricks. |
| 5. | Database (Optional) | Specify the database name to be accessed. |
| 6. | Teams (Required) | Select one or more teams from the dropdown to associate with this source datastore. |
| 7. | Initiate Cataloging (Optional) | Tick the checkbox to automatically perform a catalog operation on the configured source datastore to gather data structures and corresponding metadata. |

Step 3: After adding the source datastore details, click on the Test Connection button to check and verify its connection.

test-datastore-connection

If the credentials and provided details are verified, a success message will be displayed indicating that the connection has been verified.
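If the connection test fails, it can help to verify the same credentials outside Qualytics. Below is a minimal sketch using the databricks-sql-connector Python package; all values are placeholders for the details you entered in the form.

# pip install databricks-sql-connector
from databricks import sql

# Use the same values entered in the Qualytics form (placeholders shown).
with sql.connect(
    server_hostname="dbc-a1b2c3d4-e5f6.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/1234567890abcdef",
    access_token="dapiXXXXXXXXXXXXXXXX",
) as connection:
    with connection.cursor() as cursor:
        # Confirms authentication and shows which catalog/schema you land in.
        cursor.execute("SELECT current_catalog(), current_schema()")
        print(cursor.fetchone())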

Option II: Use an Existing Connection

If the toggle for Add new connection is turned off, you will be prompted to configure the source datastore using existing connection details.

Step 1: Select a connection to reuse existing credentials.

use-existing-datastore

Note

If you are using existing credentials, you can only edit the details such as Catalog, Database, Teams, and Initiate Cataloging.

Step 2: Click on the Test Connection button to verify the existing connection details. If connection details are verified, a success message will be displayed.

test-connection-for-existing-datastore

Note

Clicking on the Finish button will create the source datastore and bypass the enrichment datastore configuration step.

Tip

It is recommended to click on the Next button, which will take you to the enrichment datastore configuration page.

Add Enrichment Datastore

Once you have successfully tested and verified your source datastore connection, you have the option to add the enrichment datastore (recommended). The enrichment datastore is used to store the analyzed results, including any anomalies and additional metadata in tables. This setup provides full visibility into your data quality, helping you manage and improve it effectively.

Step 1: Whether you have added a source datastore by creating a new datastore connection or using an existing connection, click on the Next button to start adding the Enrichment Datastore.

next-button-for-enrichment

Step 2: A modal window, Add Enrichment Datastore, will appear, providing you with options to configure an enrichment datastore.

select-enrichment-connector

| REF. | FIELDS | ACTIONS |
| --- | --- | --- |
| 1. | Prefix (Required) | Add a prefix name to uniquely identify tables/files when Qualytics writes metadata from the source datastore to your enrichment datastore. |
| 2. | Toggle Button for existing enrichment datastore | Toggle ON to link the source datastore to an existing enrichment datastore, or toggle OFF to link it to a brand new enrichment datastore. |
| 3. | Name (Required) | Give a name for the enrichment datastore. |
| 4. | Toggle Button for add new connection | Toggle ON to create a new connection from scratch, or toggle OFF to reuse credentials from an existing connection. |
| 5. | Connector (Required) | Select Databricks as the datastore connector from the dropdown list. |

Option I: Create an Enrichment Datastore with a new Connection

If the toggle for Use an existing enrichment datastore is turned off and the toggle for Add new connection is turned on, you will be prompted to add and configure a new enrichment datastore from scratch, without using an existing enrichment datastore or its connection details.

Step 1: Add connection details for your selected enrichment datastore connector.

enrichment-datastore-explain

| REF. | FIELD | ACTIONS |
| --- | --- | --- |
| 1. | Host (Required) | Get the hostname from your Databricks account and add it to this field. |
| 2. | HTTP Path (Required) | Add the HTTP Path (web address) to fetch data from your Databricks account. |
| 3. | Personal Access Token (Required) | Generate a Personal Access Token from your Databricks account and add it for authentication. |
| 4. | Catalog (Required) | Add a Catalog to fetch data structures and metadata from Databricks. |
| 5. | Database (Optional) | Specify the database name to be accessed. |
| 6. | Teams (Required) | Select one or more teams from the dropdown to associate with this enrichment datastore. |

Step 2: Click on the Test Connection button to verify the selected enrichment datastore connection. If the connection is verified, a flash message will indicate that the connection with the enrichment datastore has been successfully verified.

test-connection-for-enrichment-datastore

Step 3: Click on the Finish button to complete the configuration process.

finish-configuration

When the configuration process is finished, a modal will display a success message indicating that your datastore has been successfully added.

success-message

Step 4: Close the Success dialog and the page will automatically redirect you to the Source Datastore Details page where you can perform data operations on your configured source datastore.

data-operation-page

Option II: Use an Existing Connection

If the toggle for Use an existing enrichment datastore is turned on, you will be prompted to configure the datastore using existing connection details.

Step 1: Add a prefix name and select an existing enrichment datastore from the dropdown list.

select-existing-enrichment-datastore

| REF. | FIELDS | ACTIONS |
| --- | --- | --- |
| 1. | Prefix (Required) | Add a prefix name to uniquely identify tables/files when Qualytics writes metadata. |
| 2. | Toggle Button for existing enrichment datastore | Toggle ON to link the source datastore to an existing enrichment datastore. |
| 3. | Enrichment Datastore | Select an enrichment datastore from the dropdown list. |

Step 2: After selecting an existing enrichment datastore connection, you will see the following details related to the selected enrichment datastore:

  • Teams: The teams associated with managing the enrichment datastore, based on whether it is marked as public or private. For example, a datastore marked as Public is accessible to all users.

  • Host: This is the server address where the Databricks instance is hosted. It is the endpoint used to connect to the Databricks environment.

  • Database: Refers to the specific database within the Databricks environment where the data is stored.

  • Schema: The schema used in the enrichment datastore. The schema is a logical grouping of database objects (tables, views, etc.). Each schema belongs to a single database.

use-existing-enrichment-datastore

Step 3: Click on the Finish button to complete the configuration process for the existing enrichment datastore.

finish-configuration-for-existing-enrichment-datastore

When the configuration process is finished, a modal will display a success message indicating that your datastore has been successfully added.

success-message

Close the success message and you will be automatically redirected to the Source Datastore Details page where you can perform data operations on your configured source datastore.

data-operation-page

API Payload Examples

This section provides detailed examples of API payloads to guide you through the process of creating and managing datastores using the Qualytics API. Each example includes endpoint details, sample payloads, and instructions on how to replace placeholder values with actual data relevant to your setup.

Creating a Source Datastore

This section provides sample payloads for creating a Databricks datastore. Replace the placeholder values with actual data relevant to your setup.

Endpoint: /api/datastores (post)

Sample payload for creating a datastore with a new connection:

{
    "name": "your_datastore_name",
    "teams": ["Public"],
    "database": "databricks_database",
    "schema": "databricks_catalog",
    "enrich_only": false,
    "trigger_catalog": true,
    "connection": {
        "name": "your_connection_name",
        "type": "databricks",
        "host": "databricks_host",
        "password": "databricks_token",
        "parameters": {
            "path": "databricks_http_path"
        }
    }
}

Sample payload for creating a datastore using an existing connection (replace connection-id with the ID of the existing connection):

{
    "name": "your_datastore_name",
    "teams": ["Public"],
    "database": "databricks_database",
    "schema": "databricks_catalog",
    "enrich_only": false,
    "trigger_catalog": true,
    "connection_id": connection-id
}
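As a sketch, the payload above can be posted with any HTTP client. The example below assumes a Qualytics instance URL and a bearer-token auth scheme; both are placeholders, so adjust them to match your deployment.

import requests

# Hypothetical instance URL and API token -- replace with your own.
BASE_URL = "https://your-instance.qualytics.io"
API_TOKEN = "your_api_token"

payload = {
    "name": "your_datastore_name",
    "teams": ["Public"],
    "database": "databricks_database",
    "schema": "databricks_catalog",
    "enrich_only": False,
    "trigger_catalog": True,
    "connection": {
        "name": "your_connection_name",
        "type": "databricks",
        "host": "databricks_host",
        "password": "databricks_token",
        "parameters": {"path": "databricks_http_path"},
    },
}

response = requests.post(
    f"{BASE_URL}/api/datastores",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
)
response.raise_for_status()
print(response.json())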

Creating an Enrichment Datastore

This section provides sample payloads for creating an enrichment datastore. Replace the placeholder values with actual data relevant to your setup.

Endpoint: /api/datastores (post)

Sample payload for creating an enrichment datastore with a new connection:

{
    "name": "your_datastore_name",
    "teams": ["Public"],
    "database": "databricks_database",
    "schema": "databricks_enrichment_catalog",
    "enrich_only": true,
    "connection": {
        "name": "your_connection_name",
        "type": "databricks",
        "host": "databricks_host",
        "password": "databricks_token",
        "parameters": {
            "path": "databricks_http_path"
        }
    }
}

Sample payload for creating an enrichment datastore using an existing connection (replace connection-id with the ID of the existing connection):

{
    "name": "your_datastore_name",
    "teams": ["Public"],
    "database": "databricks_database",
    "schema": "databricks_enrichment_catalog",
    "enrich_only": true,
    "connection_id": connection-id
}

Use the provided endpoint to link an enrichment datastore to a source datastore:

Endpoint Details: /api/datastores/{datastore-id}/enrichment/{enrichment-id} (patch)
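
A minimal sketch of that call, again assuming a bearer-token scheme; the URL, token, and IDs are placeholders.

import requests

BASE_URL = "https://your-instance.qualytics.io"  # placeholder
API_TOKEN = "your_api_token"                     # placeholder
datastore_id = 123    # ID of the source datastore
enrichment_id = 456   # ID of the enrichment datastore

response = requests.patch(
    f"{BASE_URL}/api/datastores/{datastore_id}/enrichment/{enrichment_id}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
response.raise_for_status()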