Agent Q in Action
Qualytics implements the Model Context Protocol (MCP) as a server that exposes its entire data quality infrastructure as callable tools. This page covers the endpoint, authentication, available capabilities, and example conversations — everything you need to understand how Agent Q and external MCP clients interact with the Qualytics platform.
Info
Agent Q enforces limits on rate, tokens, timeouts, and query types to ensure fair usage and platform stability. See Agent Q Limits for full details.
Endpoint
The MCP service is available at your Qualytics instance URL.
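As a purely illustrative placeholder (both the hostname and the path here are assumptions, not documented values; use the URL of your own instance):

```
https://your-instance.qualytics.io/mcp
```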
Authentication
The MCP service uses the same authentication mechanism as the Qualytics API. You'll need a Personal API Token (PAT) to authenticate requests, passed in the Authorization header.
To generate a token, navigate to Settings > Tokens and click Generate Token. For detailed instructions, see Tokens.
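A minimal Python sketch of building an authenticated request, assuming the standard bearer scheme; the instance URL and token values are placeholders, not real credentials:

```python
import urllib.request

# Hypothetical placeholders: substitute your real instance URL and PAT.
BASE_URL = "https://your-instance.qualytics.io"
TOKEN = "your-personal-api-token"

# The PAT travels in the Authorization header, as with the Qualytics API.
request = urllib.request.Request(
    BASE_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
# urllib.request.urlopen(request)  # uncomment to call a live instance
```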
Capabilities
This video demonstrates one-shot prompting with the Qualytics MCP server: a single natural-language prompt instructs the model to join data across two different datastores (Databricks and BigQuery), aggregate customer spending by month, and author a quality check on the result, all without specifying technical details such as join keys, rule types, or field mappings.
Datastore Exploration
When you connect an AI assistant to Qualytics via MCP, it gains the ability to explore your data landscape and understand the structure of your datastores:
- "What tables are in our sales database?"
- "Show me the schema for the customer_orders table"
- "What fields are available in the transactions container?"
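Under the hood, prompts like these map onto the exploration tools indexed later on this page. A sketch of the invocation an MCP client might issue for the first prompt; the tool name comes from the tool index below, but the argument names are assumptions for illustration:

```python
# Hypothetical tool call for: "What tables are in our sales database?"
tool_call = {
    "tool": "list_containers",
    "arguments": {
        "datastore": "sales",  # assumed parameter name
    },
}
```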
Data Transformations
Create computed assets through conversation instead of manually configuring them in the UI:
- Computed Tables (JDBC) — SQL queries stored and executed in the source database using its native dialect.
- Computed Files (DFS) — Spark SQL transformations over file-based sources like S3, ADLS, or GCS.
- Cross-Datastore Joins — Join data across completely different systems (Snowflake + PostgreSQL, BigQuery + S3) executed in Spark.
- Computed Fields — Add derived or type-cast fields to existing containers using custom expressions.
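A sketch of how the cross-datastore join capability might look as a tool call; the tool name comes from the tool index below, while the argument structure and names are assumptions for illustration only:

```python
# Hypothetical cross-datastore join request (Snowflake + PostgreSQL, run in Spark).
join_request = {
    "tool": "create_computed_join",
    "arguments": {  # all argument names are assumed
        "left": {"datastore": "snowflake_sales", "container": "orders"},
        "right": {"datastore": "postgres_crm", "container": "customers"},
        "sql": (
            "SELECT o.*, c.segment "
            "FROM orders o JOIN customers c ON o.customer_id = c.id"
        ),
    },
}
```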
Quality Check Management
Create and manage data quality checks through natural conversation:
- "Make sure the email field in the customers table is never null"
- "Add a check that order_total is always between 0 and 1,000,000"
- "Verify that ship_date is always after order_date"
The AI translates your intent into the appropriate rule type and parameters automatically.
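That translation step can be pictured as follows; the rule-type identifier and argument names here are assumptions (in practice, list_quality_check_specs returns the real rule types and schemas):

```python
# Hypothetical result of translating
# "order_total is always between 0 and 1,000,000" into a tool call.
check = {
    "tool": "create_quality_check",
    "arguments": {
        "container": "customer_orders",
        "field": "order_total",
        "rule_type": "between",  # assumed identifier
        "min_value": 0,
        "max_value": 1_000_000,
    },
}
```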
Automating Controls from Regulatory Documents
Agent Q can parse regulatory publications such as BCBS 239 (Principles for effective risk data aggregation and risk reporting), analyze their applicability to a specific datastore, and automatically create tagged quality checks — preserving full traceability back to the original requirement.
Anomaly Investigation
Investigate quality issues conversationally:
- "Tell me about the anomalies found in yesterday's scan"
- "What's wrong with anomaly 12345?"
- "Explain the business impact of the data quality issues in the orders table"
Operations, Notifications, and Ticketing
Beyond analysis, Agent Q can take action on your behalf:
- Run operations — Trigger sync, profile, scan, export, or materialize operations and poll for completion.
- Promote assets — Copy computed tables, computed files, computed fields, and quality checks across datastores from chat.
- Send notifications — Post alerts to Slack, Microsoft Teams, Email, Webhook, or PagerDuty.
- Create tickets — Open issues in Jira or ServiceNow, optionally linked to a specific anomaly.
- Manage tags — Add, remove, or replace tags on datastores, containers, fields, and quality checks.
Sync vs. Catalog terminology
The discover-and-register operation has been renamed from Catalog to Sync. Agent Q defaults to "Sync" in its responses. If you refer to the operation as "Catalog", Agent Q mirrors your wording and replies in the same terms. Both run_sync and run_catalog perform the same action — run_sync is preferred unless your prompt explicitly uses "catalog".
Tool Step Labels
When Agent Q processes a request, each action appears as an expandable step in the response. The labels correspond to specific platform actions:
| Step Label | What It Does |
|---|---|
| Search | Queries across datastores, containers, and fields |
| List Quality Checks | Retrieves existing checks on a container |
| List Check Specifications | Fetches available rule types and their schemas |
| Create Quality Check | Creates a new quality check rule |
| Update Quality Check | Modifies an existing quality check |
| Quality Scores | Retrieves 8-dimension quality scores |
| Get Insights | Retrieves daily metrics time series |
| Operation Insights | Retrieves historical operation data |
| Describe Anomaly | Gets full details for a specific anomaly |
| Workflow | Executes a guided multi-step workflow |
Available Tools
Agent Q and external MCP clients share the same tool set; every tool is available in every session.
The list below is a high-level index of capabilities. For exact parameter schemas (names, types, required vs optional), query the live MCP /tools endpoint or refer to the Agent Q API reference.
Exploration
| Tool | Description |
|---|---|
| list_datastores | List datastores with optional name and tag filters. |
| list_containers | List containers within a datastore with optional name filtering. |
| list_fields | List fields within a container, including profile metadata (min, max, null count, distinct count). |
| global_search | Search across all datastores, containers, and fields by name. Returns ranked matches. |
| preview_query | Run a SELECT query against a JDBC datastore and return the results as a markdown table. INSERT/UPDATE/DELETE/DDL are blocked. |
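The read-only restriction on preview_query can be pictured as a simple guard, sketched here purely for illustration; the server's actual validation logic is not documented on this page and is likely more thorough:

```python
def is_preview_allowed(sql: str) -> bool:
    """Illustrative guard: permit only statements whose leading keyword reads data.

    INSERT/UPDATE/DELETE and DDL statements all start with a non-SELECT
    keyword, so checking the first token rejects them.
    """
    stripped = sql.lstrip()
    if not stripped:
        return False
    first_keyword = stripped.split(None, 1)[0].lower()
    return first_keyword == "select"
```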
Quality Checks
| Tool | Description |
|---|---|
| list_quality_check_specs | List available quality check rule types and their JSON schemas. |
| list_quality_checks | List quality checks defined on a container. |
| create_quality_check | Create a new quality check rule. |
| update_quality_check | Update an existing quality check. |
Data Transformation
| Tool | Description |
|---|---|
| create_computed_table | Create a computed table in a JDBC datastore. |
| create_computed_file | Create a computed file in a DFS datastore (S3, ADLS, GCS). |
| create_computed_join | Create a cross-datastore join executed in Spark. |
| create_computed_field | Add a derived or type-cast field to an existing container. |
Anomalies
| Tool | Description |
|---|---|
| list_anomalies | List anomalies, filterable by datastore, container, and status. |
| anomaly_describe | Get full details for a single anomaly, including the failed check, affected field, sample values, and AI-generated description. |
Insights & Scores
| Tool | Description |
|---|---|
| quality_scores | Retrieve quality scores across the 8 dimensions (completeness, coverage, conformity, consistency, precision, timeliness, volumetrics, accuracy). |
| get_insights | Retrieve daily time-series metrics for an asset (anomaly counts, score changes, scan coverage). |
| operation_insights | Retrieve historical operation data (types, durations, record counts, completion statuses). |
Operations
| Tool | Description |
|---|---|
| run_sync | Trigger a sync operation on a datastore. |
| run_catalog | Trigger a catalog operation on a datastore. |
| run_profile | Trigger a profile operation on a container. |
| run_scan | Trigger a scan operation on a container. |
| run_export | Trigger an export operation. |
| run_materialize | Trigger a materialize operation on a computed asset. |
| get_operation_status | Poll the status of a running operation. Agent Q uses this to wait for a terminal state before proceeding to dependent steps. |
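The polling pattern described for get_operation_status might look like the following sketch. The tool name comes from the table above, but the client callable, status values, and poll interval are all assumptions:

```python
import time

TERMINAL_STATES = {"success", "failure"}  # assumed status values

def wait_for_operation(call_tool, operation_id, poll_seconds=5.0, timeout=600.0):
    """Poll get_operation_status until a terminal state or timeout.

    `call_tool` is any callable that invokes an MCP tool with a dict of
    arguments and returns its result as a dict; it stands in for a real
    MCP client session here.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = call_tool("get_operation_status", {"operation_id": operation_id})
        if result["status"] in TERMINAL_STATES:
            return result
        time.sleep(poll_seconds)
    raise TimeoutError(f"operation {operation_id} did not finish in {timeout}s")
```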
Notifications, Tickets, and Tags
| Tool | Description |
|---|---|
| send_notification | Send an alert through a configured channel: Slack, Microsoft Teams, Email, Webhook, or PagerDuty. |
| create_ticket | Create a ticket in Jira or ServiceNow, optionally linked to a specific anomaly. |
| list_integrations | List configured notification and ticketing integrations. |
| manage_tags | Add, remove, or replace tags on a datastore, container, field, or quality check. |
Guided Workflows
Workflow tools execute multi-step guided processes for complex tasks. Each returns a structured AgentResponse with step-by-step results.
| Tool | Description |
|---|---|
| workflow_analyze_trends | Analyze quality score trends and anomaly volume patterns over time. |
| workflow_investigate_anomaly | Run a full AI-assisted investigation of an anomaly: root cause, business impact, and remediation suggestions. |
| workflow_interpret_quality_scores | Interpret 8-dimension quality scores in business terms, highlighting areas for improvement. |
| workflow_generate_quality_check | Generate and create a quality check from a natural language business rule. |
| workflow_transform_dataset | Create a computed asset (table, file, or join) from a natural language description. Agent Q selects the appropriate type automatically. |
Connecting External Clients
For step-by-step instructions on connecting ChatGPT, Claude Desktop, Cursor, and other MCP-compatible clients to the Qualytics MCP server, see Connecting External AI Clients.
For example conversations showing Agent Q in use, see Conversations, Responses & Context.