Bulk anomaly triage
Goal
Acknowledge, assign, archive, or close out anomalies in batches rather than one click at a time. Once scans are running daily on multiple datastores, the only practical way to keep the queue under control is bulk operations from the CLI.
Permissions
| Step | Endpoint | Role | Team permission |
|---|---|---|---|
| List anomalies | `GET /api/anomalies` | Member | N/A |
| Get a single anomaly | `GET /api/anomalies/{id}` | Member | Reporter |
| Update one anomaly | `PUT /api/anomalies/{id}` | Member | Editor |
| Bulk update | `PATCH /api/anomalies` | Member | Editor |
| Bulk archive (soft delete) | `DELETE /api/anomalies?ids=...` | Member | Author |
| Hard delete | `DELETE /api/anomalies?ids=...` | Member | Author |
Prerequisites
- The CLI is installed and authenticated.
- The datastore has been scanned recently and has anomalies to triage.
- Optional: user IDs for the assignees you want to set (`qualytics users list`).
CLI workflow
```mermaid
graph LR
    L[List with filters] --> Decide{Action}
    Decide -->|Investigating| U[update --status Acknowledged]
    Decide -->|Confirmed real, fixed| AR[archive --status Resolved]
    Decide -->|Same as another anomaly| AD[archive --status Duplicate]
    Decide -->|Not actually an issue| AI[archive --status Invalid]
    Decide -->|Noise / not relevant| AS[archive --status Discarded]
```
Filter precisely first
```shell
# All Active anomalies for a single container
qualytics anomalies list --datastore-id 42 --container 100 --status Active

# Critical-tagged anomalies created in the last week
qualytics anomalies list \
  --datastore-id 42 \
  --tag "critical" \
  --start-date 2026-05-01 \
  --end-date 2026-05-08

# Shape anomalies only (schema changes)
qualytics anomalies list --datastore-id 42 --type shape

# All anomalies for a single check (regression analysis)
qualytics anomalies list --datastore-id 42 --check-id 555
```
Bulk acknowledge with assignees
```shell
qualytics anomalies update \
  --ids "1001,1002,1003,1004" \
  --status Acknowledged \
  --description "Investigating in incident #INC-987" \
  --assignee-ids "12,18" \
  --tags "incident,oncall"
```
Bulk archive with the right outcome status
`archive` is a soft delete that records an outcome reason; pick the status that reflects reality:
```shell
# Root cause fixed
qualytics anomalies archive --ids "2001,2002,2003" --status Resolved

# Duplicate of an existing investigation
qualytics anomalies archive --ids "2010,2011" --status Duplicate

# Not actually a problem (e.g., expected schema change)
qualytics anomalies archive --ids "2020" --status Invalid

# Generic noise
qualytics anomalies archive --ids "2030,2031" --status Discarded
```
Hard delete (rare; usually you want `archive`)
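Hard delete permanently removes anomaly records along with their history, so there is no undo. A minimal sketch, using the `anomalies delete` subcommand listed in the endpoint table below; double-check the IDs before running it:

```shell
# Permanently remove anomalies -- unlike archive, this cannot be reversed
qualytics anomalies delete --ids "3001,3002"
```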
Behind the scenes
| CLI step | Method | Path |
|---|---|---|
| `anomalies list` | GET | `/api/anomalies?datastore_id={id}&...filters` |
| `anomalies get --id` | GET | `/api/anomalies/{anomaly_id}` |
| `anomalies update --ids` | PATCH | `/api/anomalies` |
| `anomalies archive --ids` | DELETE | `/api/anomalies?ids=...` (with `archive=true` and a status) |
| `anomalies delete --ids` | DELETE | `/api/anomalies?ids=...` (hard delete) |
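For scripted archiving, the same DELETE endpoint can be called directly. A sketch under the assumption that the query parameters mirror the table above (comma-separated `ids`, `archive=true`, and an outcome `status`); the parameter-building step is split into a small helper so it can be checked in isolation, and `client` is an authenticated `httpx.Client` as in the example below:

```python
def archive_params(ids, status):
    """Build query params for a soft-delete (archive) call.

    Assumes the DELETE endpoint accepts comma-separated ids,
    archive=true, and an outcome status, per the table above.
    """
    return {
        "ids": ",".join(str(i) for i in ids),
        "archive": "true",
        "status": status,
    }


def archive_anomalies(client, base_url, ids, status):
    """Soft-delete the given anomalies with an outcome status."""
    client.delete(
        f"{base_url}/api/anomalies",
        params=archive_params(ids, status),
    ).raise_for_status()
```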
Python equivalent
```python
import os

import httpx

BASE_URL = os.environ["QUALYTICS_URL"].rstrip("/")
TOKEN = os.environ["QUALYTICS_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

with httpx.Client(headers=HEADERS, timeout=30.0) as client:
    # 1. Find the anomalies you care about
    anomalies = client.get(
        f"{BASE_URL}/api/anomalies",
        params={
            "datastore_id": 42,
            "tag": "critical",
            "status": "Active",
            "start_date": "2026-05-01",
        },
    ).json()
    ids = [a["id"] for a in anomalies]

    # 2. Bulk acknowledge with assignees
    client.patch(
        f"{BASE_URL}/api/anomalies",
        json={
            "ids": ids,
            "status": "Acknowledged",
            "description": "Investigating in incident #INC-987",
            "assignee_ids": [12, 18],
            "tags": ["incident", "oncall"],
        },
    ).raise_for_status()
    print(f"acknowledged {len(ids)} anomalies")
```
Variations and advanced usage
Filter by source-record enrichment
When source records are exported to the enrichment datastore, those anomalies are remediation-ready. Surface them first:
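One way to surface them without any extra flags is to filter the JSON output with `jq`. The `source_records` field name below is an assumption, not a documented field; check an actual anomaly payload (`qualytics anomalies get --id ...`) for the field that marks exported source records and adjust the selector:

```shell
# ".source_records" is a hypothetical field name -- adjust to your payload
qualytics anomalies list --datastore-id 42 --status Active --format json \
  | jq '[.[] | select(.source_records != null)]'
```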
Auto-resolve from the scan itself
For anomalies that disappear naturally (the failing rows are no longer in the data), let the scan close them out:
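The exact mechanism depends on how your scans are configured; as a sketch only, assuming a hypothetical `scan run` subcommand with an `--auto-resolve` flag (consult the CLI's own help output for the real command and option names):

```shell
# Hypothetical command and flag -- check `qualytics --help` for the real names
qualytics scan run --datastore-id 42 --auto-resolve
```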
This removes the need to manually archive transient issues.
Reassign on team change
If someone moves teams, transfer their open anomalies in one call:
```shell
LEFTOVERS=$(qualytics anomalies list --datastore-id 42 --status Active,Acknowledged \
  --format json | jq -r '.[] | select(.assignees[]?.id == 12) | .id' \
  | paste -sd, -)

qualytics anomalies update --ids "$LEFTOVERS" --assignee-ids "18"
```
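The same handover can be scripted against the API directly. A sketch under the same assumptions as the Python example above (an authenticated `httpx.Client`, and `assignee_ids` replacing the assignee list on PATCH), with the ID-selection step split into a pure helper so it is easy to test:

```python
def ids_assigned_to(anomalies, user_id):
    """Pick the ids of anomalies where user_id appears among the assignees."""
    return [
        a["id"]
        for a in anomalies
        if any(assignee.get("id") == user_id for assignee in a.get("assignees") or [])
    ]


def reassign(client, base_url, datastore_id, from_user, to_user):
    """Move every open anomaly from one assignee to another.

    `client` is an authenticated httpx.Client, as in the example above.
    Returns the number of anomalies reassigned.
    """
    anomalies = client.get(
        f"{base_url}/api/anomalies",
        params={"datastore_id": datastore_id, "status": "Active,Acknowledged"},
    ).json()
    ids = ids_assigned_to(anomalies, from_user)
    if ids:
        # Assumes assignee_ids replaces the whole list, matching the CLI behavior
        client.patch(
            f"{base_url}/api/anomalies",
            json={"ids": ids, "assignee_ids": [to_user]},
        ).raise_for_status()
    return len(ids)
```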
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| `update` returns 422 | The `--status` is not valid for an update (only `Active` or `Acknowledged` are accepted) | For other statuses, use `archive`, not `update`. |
| `update --status Resolved` returns 422 | `Resolved`, `Duplicate`, `Invalid`, and `Discarded` are valid for `archive` but not for `update` | Use `archive`, not `update`, for these statuses. |
| Some IDs in a bulk call fail | Mixed permissions across teams (the anomalies span multiple datastores) | Run separate calls per datastore so failures are isolated. |
| Assignees aren't cleared | Omitting `--assignee-ids` leaves the existing list unchanged | Pass `--assignee-ids ""` (an explicit empty string) to clear the list. |
| A filter returns nothing even though active anomalies clearly exist | Wrong date format | Dates must be `YYYY-MM-DD`, with no time portion, in `--start-date` / `--end-date`. |
Related
- Anomalies command reference
- Daily triage automation: the same flow but on a schedule.
- Daily sync, profile, and scan: how anomalies get created in the first place.