Product

Sessions & HITL

Every workflow run becomes a session: inputs, sources, tool calls, outputs, validations, and human decisions are captured in one record, powering governance and continuous improvement.

Human-in-the-loop operations illustration

What it does

Sessions give you end-to-end traceability: what the system saw, what it retrieved, the steps it took, and the final outcomes with human oversight built in.


How it works

What is inside a session

A session is the unit of traceability and learning across the Xong Platform.

Example: Intelligent verification
The orchestrator plans the steps, calls modules and tools, and logs everything as a session.
  • Ingest case bundle (docs / media / metadata)
  • Extract + normalize (document AI module)
  • Cross-validate (rules + model check)
  • Confidence gate (threshold + policy)
  • HITL review, if needed (approve / correct / reward)
  • ERP/CRM action via WebKit (structured tool call)
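The flow above can be sketched in code. This is a minimal illustration of how an orchestrator might append each step to a session record and use a confidence gate to decide between a tool call and human review; all names here (Session, run_verification, the 0.85 threshold) are assumptions for the example, not the platform's actual API.

```python
# Illustrative sketch: an orchestrator logging pipeline steps into a
# session record, with a confidence gate routing low-confidence runs
# to human review. Names and threshold are hypothetical.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Session:
    steps: list[dict[str, Any]] = field(default_factory=list)
    hitl_required: bool = False

    def log(self, agent: str, action: str, **detail: Any) -> None:
        # Every agent action becomes one traceable step in the session.
        self.steps.append({"agent": agent, "action": action, **detail})

def run_verification(bundle: dict, confidence: float,
                     threshold: float = 0.85) -> Session:
    session = Session()
    session.log("Ingest", "ingest_case_bundle", files=list(bundle))
    session.log("DocAgent", "extract_fields")
    session.log("Verifier", "cross_validate")
    session.log("Gate", "confidence_check", confidence=confidence)
    if confidence < threshold:
        session.hitl_required = True           # escalate to a review queue
        session.log("HITL", "enqueue_review")
    else:
        session.log("ToolAgent", "erp_update") # structured tool call
    return session

# Low confidence escalates; high confidence proceeds to the tool call.
low = run_verification({"invoice.pdf": b""}, confidence=0.62)
high = run_verification({"invoice.pdf": b""}, confidence=0.93)
```

Because every branch logs into the same structure, the resulting session is a complete trace regardless of whether a human was involved.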

Feature highlights

Full visibility into what AI does

Session trace
Inputs, retrieved sources, agent steps, tool calls, outputs, and validations in one artifact.
HITL queues
Route exceptions to the right team with SLAs, evidence view, and one-click approvals.
Corrections
Field-level edits and structured feedback for high-quality training data.
Reward signals
Human rewards feed incremental training and preference learning to increase accuracy over time.
Governance
Audit logs, RBAC, and approvals for high-impact actions with defensible traces.
Quality analytics
Track escalation rate, error types, drift, and workflow success KPIs across versions.
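To make the Corrections and Reward signals highlights concrete, here is a minimal sketch of what a field-level correction record could look like as structured training feedback. The field names and the reward scale are assumptions for illustration, not the platform's schema.

```python
# Hypothetical field-level correction record: captures what a reviewer
# changed and a reward signal for downstream learning.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Correction:
    session_id: str
    field: str      # which extracted field the reviewer edited
    before: str     # model output
    after: str      # reviewer's value
    reward: float   # assumed scale: -1.0 (wrong) .. 1.0 (correct)

corr = Correction("sess-001", "invoice_total", "1,200.00", "1,280.00",
                  reward=-0.5)
record = asdict(corr)  # plain dict, ready to persist or batch for training
```

Keeping corrections structured per field (rather than free-text notes) is what makes them usable as high-quality training data.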

Review ops signals

See queue health, corrections, and quality over time.

Sessions expose what was reviewed, changed, approved, and fed back into learning loops.

  • Queue health and SLA visibility for every reviewer group.
  • Correction density by field and document type.
  • Quality trends per workflow, model, and version.
Review signal map
Illustrative distribution across review stages.
  • Review latency (queue-based): SLA timers with bottleneck alerts.
  • Correction rate (field-level): structured edits for training.
  • Quality trend (tracked): version-over-version comparisons.
  • Trace capture (complete): inputs, sources, and decisions.
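The metrics above can be derived directly from session records. The sketch below shows one way to compute escalation rate, correction rate, and average review latency; the record shape (`hitl_required`, `corrections`, `review_latency_ms`) is an assumption for the example.

```python
# Illustrative queue-metric computation over a batch of session records.
# The session dict shape is hypothetical.
def queue_metrics(sessions: list[dict]) -> dict[str, float]:
    total = len(sessions)
    escalated = [s for s in sessions if s.get("hitl_required")]
    corrected = [s for s in escalated if s.get("corrections")]
    avg_latency = (
        sum(s["review_latency_ms"] for s in escalated) / len(escalated)
        if escalated else 0.0
    )
    return {
        "escalation_rate": len(escalated) / total,
        "correction_rate": len(corrected) / max(len(escalated), 1),
        "avg_review_latency_ms": avg_latency,
    }

sessions = [
    {"hitl_required": True, "corrections": ["total"], "review_latency_ms": 900},
    {"hitl_required": True, "corrections": [], "review_latency_ms": 300},
    {"hitl_required": False},
    {"hitl_required": False},
]
m = queue_metrics(sessions)
```

Grouping the same computation by workflow, model, or version gives the version-over-version comparisons described above.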

Proof

Governed operations in production

Teams use Sessions and HITL queues to approve actions and correct outputs with full visibility.

Session dashboard illustration
{
  "inputs": ["case_bundle.zip", "email_thread.eml"],
  "retrieval": [{"source":"Data House", "refs":["docA#p3","policy#12"]}],
  "steps": [
    {"agent":"DocAgent","action":"extract_fields"},
    {"agent":"Verifier","action":"cross_validate"},
    {"agent":"ToolAgent","tool":"erp.updateInvoice","status":"ok"}
  ],
  "validation": {"confidence":0.93, "rule_checks":"pass"},
  "hitl": {"required": false, "reviewer": null},
  "output": {"status":"approved", "payload_schema":"InvoiceUpdate.v2"},
  "telemetry": {"latency_ms": 4120, "cost_est": 0.18}
}
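A consumer of a session record like the one above can re-derive the HITL decision from the validation block, which is useful for auditing that the gate behaved as recorded. The sketch below uses a simplified record and an assumed 0.9 policy threshold; neither is a platform default.

```python
# Sketch: check that a session's recorded HITL decision is consistent
# with its validation block. Threshold and record shape are assumed.
import json

session_json = """
{"validation": {"confidence": 0.93, "rule_checks": "pass"},
 "hitl": {"required": false, "reviewer": null}}
"""

def needs_review(record: dict, threshold: float = 0.9) -> bool:
    v = record["validation"]
    return v["confidence"] < threshold or v["rule_checks"] != "pass"

record = json.loads(session_json)
# The recorded decision should match what the policy implies.
assert needs_review(record) == record["hitl"]["required"]
```

Running this kind of consistency check across stored sessions is one way to build the defensible traces mentioned under Governance.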
HITL operations

Want to build a review loop that improves accuracy?

We will design HITL checkpoints, correction schemas, reward signals, and incremental training so your system gets better every week.