RedactAI

Benchmark your redaction models.

Compare local redaction quality against a stronger audited baseline. Scores use only the shared direct-identifier scope; model-only detections are separated for review.
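The scoring rule above, score only detections inside the shared direct-identifier scope and set model-only detections aside for review, can be sketched in a few lines. This is a minimal illustration; the category names and helper below are assumptions, not the app's actual schema.

```python
# Hypothetical sketch: score only the shared direct-identifier scope.
# The category names below are illustrative assumptions.
DIRECT_ID_SCOPE = {"NAME", "EMAIL", "PHONE", "SSN"}

def split_detections(detections):
    """Split (category, text) detections into scored and review-only sets."""
    in_scope = {d for d in detections if d[0] in DIRECT_ID_SCOPE}
    model_only = {d for d in detections if d[0] not in DIRECT_ID_SCOPE}
    return in_scope, model_only

local = {("NAME", "Ada Lovelace"), ("EMAIL", "ada@example.com"),
         ("PROJECT", "Apollo")}  # PROJECT falls outside the shared scope
scored, review = split_detections(local)
```

Only the `scored` set would enter the precision/recall comparison; `review` holds the model-only detections surfaced separately.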

Status: Ready (no hosted benchmark run yet)
Documents: 0 (upload or paste to begin)
Best F1: --% (baseline optional)
Local speed: -- tok/s (OpenAI PF + custom)
Fastest file: --s (latency tracked per file)
Baseline
Reference model: Grok baseline (xAI) · Model: grok-4.20 NR · Mode: non-thinking · Scope: direct IDs
Offline redaction model
Selected files: 0 (paste text or upload files)
Local latency: --s (average per file)
Baseline latency: --s (optional xAI pass)
Total cost: $0 (baseline provider only)
Benchmark inputs
Use the hosted API with the same cookie auth as this UI. No API key is exposed to the browser.

Upload, paste, or select a benchmark file to begin

Run one file or a batch. The progress stream shows extraction, local model timing, baseline timing, and final per-document scores.

PDF · DOCX · XLSX · PPTX · RTF · HTML · TXT · CSV · MD · JSON · ZIP
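The progress stream described above could be consumed client-side if it arrives as newline-delimited JSON. This is a guess at the event shape: the stage names mirror the steps listed above, but the field names and wire format are assumptions, not the actual API contract.

```python
import json

# Assumed event shape: {"stage": ..., "file": ..., **extra fields}.
# Stage names mirror the benchmark steps; the wire format is a guess.
STAGES = ("extraction", "local_timing", "baseline_timing", "score")

def parse_progress(lines):
    """Parse newline-delimited JSON progress events, keeping known stages."""
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("stage") in STAGES:
            events.append(event)
    return events

stream = [
    '{"stage": "extraction", "file": "report.pdf"}',
    '{"stage": "score", "file": "report.pdf", "f1": 0.91}',
]
events = parse_progress(stream)
```

Unknown stages are dropped rather than raising, so the parser tolerates new event types the server might add.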
Selected files: drop files here or use Upload files.

Document summary

Latency, throughput, precision, recall, and F1 by file.
Columns: Document · Words · Tokens · Local latency · Local tok/s · Local items · Grok latency · Grok tok/s · Precision · Recall · F1 · Missed · Cost
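The precision, recall, and F1 columns follow their standard definitions, computed over the shared scope, with the Missed column holding baseline-only items. A sketch, assuming detections are compared as sets; the function and field names are illustrative:

```python
def prf1(local, baseline):
    """Precision/recall/F1 of local detections against the baseline set."""
    tp = len(local & baseline)  # items both models redacted
    precision = tp / len(local) if local else 0.0
    recall = tp / len(baseline) if baseline else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    missed = baseline - local   # "Missed" column: baseline-only items
    return precision, recall, f1, missed

def tokens_per_second(tokens, latency_s):
    """Throughput as shown in the tok/s columns."""
    return tokens / latency_s if latency_s else 0.0

p, r, f1, missed = prf1({"a", "b", "c"}, {"a", "b", "d"})
```

With two of three detections shared, precision and recall are both 2/3, so F1 is 2/3 and the missed set is {"d"}.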

Document details

Open a row for redacted output and audit details.