Open-source CLI · Veriscope (AGPL-3.0 + Commercial)

Veriscope is the CLI for auditable training runs.

CLI-first training monitor for auditable capsule bundles: record runs, validate artifacts, and gate cross-run diffs by explicit comparability predicates.

veriscope — quick preview
# Run monitored training (runner wrapper)
$ veriscope run gpt --outdir run_01 -- --dataset tiny --seed 123

# Validate the capsule bundle
$ veriscope validate run_01

# Diff two runs only when comparable under contract
$ veriscope diff run_01 run_02

Contract-first epistemic infrastructure for training runs whose comparisons must remain auditable.

An artifact contract, not a dashboard

Veriscope's core output is a capsule bundle you can validate, report, and diff. Comparisons are only admitted when runs remain comparable under contract.

Auditable capsule bundles

Every run emits standard files such as window_signature.json, results.json, and results_summary.json, plus optional governance and provenance artifacts.
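A bundle can be inspected directly on disk. The three JSON files below are the ones named above; any further contents of the directory are illustrative, not documented:

# List the standard artifacts in a capsule bundle
$ ls run_01
window_signature.json  results.json  results_summary.json

# Pretty-print the summary with the standard library (no extra tooling)
$ python -m json.tool run_01/results_summary.json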

Validate / Report / Inspect

Use the CLI to verify bundle integrity (rather than trusting embedded hashes), inspect key fields, and generate text or Markdown reports.

Diffs gated by comparability

Cross-run diffs are blocked unless the runs are non-partial and still match the same window signature, preset identity, and compare-time governance checks.
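The gate can be scripted in a pipeline. Treating a blocked diff or a failed validation as a nonzero exit status is an assumption about the CLI's behavior, not documented above:

# Diff only after both bundles validate; a blocked or failed diff is
# assumed to exit nonzero
$ veriscope validate run_01 && veriscope validate run_02 \
    && veriscope diff run_01 run_02 \
    || echo "runs not comparable under contract"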

Governance without mutation

Record approvals and notes as append-only overlays without altering raw artifacts. What happened and what your team decided stay separate and auditable.
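A manual judgement is appended with the documented override command; the status and reason values here are illustrative:

# Append a manual judgement; raw artifacts are left untouched
$ veriscope override run_01 --status pass --reason "Known infra noise"

# The bundle should still validate after the overlay is recorded
$ veriscope validate run_01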

Window-scoped calibration

Calibration emits FAR, delay, and overhead results together with either a preset candidate or an explicit rejection, all scoped to one window-signature hash.

Runners + pilot kit

Start with runner wrappers for CIFAR, nanoGPT, and Hugging Face workflows, then use the pilot harness for control versus injected-pathology runs with shareable outputs.
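All runners share the same entry point, with runner-specific arguments passed after "--". Only the gpt runner name appears in the documented examples; the cifar and hf names and their arguments are assumptions based on the workflows listed above:

# GPT runner (documented)
$ veriscope run gpt --outdir run_gpt -- --dataset tiny --seed 123

# CIFAR and Hugging Face runners (names and arguments illustrative)
$ veriscope run cifar --outdir run_cifar -- --seed 123
$ veriscope run hf --outdir run_hf -- --seed 123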

From training run to decision record

Declare a window signature, run alongside training, emit capsule artifacts, validate them, then report and diff only when the contract admits comparison. Governance overlays remain append-only.

  • Decisions live in artifacts with pass, warn, fail, and skip states
  • Comparability gates prevent silent-drift comparisons
  • Governance overlays preserve raw artifacts without mutation
  • Calibration protocol remains scoped to the window that produced the preset

veriscope — full workflow
# 1) Run a monitored training job (runner wrappers)
$ veriscope run gpt --outdir OUTDIR -- --dataset tiny --seed 123

# 2) Validate the capsule bundle
$ veriscope validate OUTDIR

# 3) Produce a shareable report
$ veriscope report OUTDIR --format md > report.md

# 4) Diff two runs only when comparable under contract
$ veriscope diff OUTDIR_A OUTDIR_B

# 5) Record a manual judgement without mutating raw artifacts
$ veriscope override OUTDIR --status pass --reason "Known infra noise"

  • Runtime: Python 3.9+
  • CLI-first: Run · Validate · Report · Diff
  • CPU & GPU: local + cloud
  • Dual license: AGPL-3.0 + Commercial

Run Veriscope in a pilot

Produce auditable artifacts, validate capsule integrity, and gate cross-run comparison under an explicit artifact contract.