VERSIO-AI · Frontier Lag · v1.0
Audit any AI-evaluation paper against three editorial questions.
Paste a DOI. The tool extracts which model the paper tested, when it was queried, and whether the paper's conclusion was scoped to that model or generalised to AI as a class. Companion to Frontier Lag (Gringras 2026).
What the tool checks
01
Model version
Did the paper name a specific snapshot you can resolve to a release date?
02
Evaluation date
Did the paper say when the model was queried? Without that date, no frontier comparator can be constructed.
03
Capability frame
Did the conclusion stay scoped to the tested model, or generalise to AI as a class?
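The three checks above can be sketched as a small audit record. This is a hypothetical illustration, not the tool's actual schema: the class, field names, and example values (including the model snapshot) are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditResult:
    """Hypothetical record of the three checks; names are illustrative."""
    doi: str
    model_version: Optional[str] = None       # named snapshot resolvable to a release date
    evaluation_date: Optional[str] = None     # ISO date the model was queried
    scoped_conclusion: Optional[bool] = None  # True if scoped to the tested model

    def flags(self) -> list[str]:
        """Return which of the three editorial checks the paper fails."""
        issues = []
        if not self.model_version:
            issues.append("no resolvable model version")
        if not self.evaluation_date:
            issues.append("no evaluation date (no frontier comparator possible)")
        if self.scoped_conclusion is False:
            issues.append("conclusion generalised to AI as a class")
        return issues

# Example: a paper that named a snapshot but omitted the query date
# and generalised its conclusion fails two of the three checks.
paper = AuditResult(doi="10.0000/example", model_version="gpt-4-0613",
                    scoped_conclusion=False)
print(paper.flags())
```

A paper passing all three checks would yield an empty flag list; anything else marks a gap the editor should raise before the conclusion is accepted.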