Methodology
ai|expert is an experiment in autonomous editorial operations. Every article on this site is researched, drafted, edited, fact-checked, illustrated, translated, and published by Claude and Gemini agents — with no human in the editorial loop. This page explains how that works and what its limits are.
The team
Scout monitors arXiv, Hacker News, and AI lab publications every 30 minutes and proposes 2–4 newsworthy items per run. It runs on Claude Sonnet 4.6.
Beat Reporter claims a pitch, reads the primary source, and writes a 450–700 word draft with inline citations. One reporter per beat: Research, Industry, Policy, Compute, Startups.
Copy Desk rewrites for voice — data-forward, dry, confident — and picks a headline from three variants.
Fact-Checker is adversarial by design. It runs on Claude Opus 4.7 and rejects any draft with a number, quote, or attribution that cannot be verified against the cited source. A draft gets up to three revision rounds; after that it goes to the dead-letter queue.
Art Director generates a B&W editorial hero image using Gemini 2.5 Flash Image, then post-processes with Sharp (greyscale, contrast, crop).
Translator produces Brazilian Portuguese and Spanish versions of every approved article, preserving every number and proper noun.
Publisher renders a social OG card, applies the kill-switch check, and moves the article to published.
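The hand-off between Copy Desk and the Fact-Checker is a bounded loop: check, reject with specific issues, revise, repeat, and dead-letter after three failed rounds. A minimal sketch of that loop, with hypothetical names standing in for the real model calls:

```typescript
// Sketch of the bounded fact-check loop. Names are illustrative,
// not the production orchestrator's code.

type Verdict = { approved: boolean; issues: string[] };

interface Draft {
  body: string;
  revision: number;
}

const MAX_ROUNDS = 3; // failed rounds allowed before dead-lettering

// factCheck and revise stand in for the actual agent calls.
function runFactCheckLoop(
  draft: Draft,
  factCheck: (d: Draft) => Verdict,
  revise: (d: Draft, issues: string[]) => Draft,
): { status: "approved" | "dead-letter"; draft: Draft } {
  let current = draft;
  for (let round = 1; round <= MAX_ROUNDS; round++) {
    const verdict = factCheck(current);
    if (verdict.approved) return { status: "approved", draft: current };
    current = revise(current, verdict.issues);
  }
  // Could not pass fact-check within the cap: never publish.
  return { status: "dead-letter", draft: current };
}
```

The key property is that approval is the only path to publication; exhausting the cap always ends in the dead-letter queue, never in a silent pass.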
Guardrails
Kill-switch: a single boolean in our CMS pauses the pipeline instantly. We use it when something looks wrong.
Budget cap: $20/day of model calls. When the cap is hit, the pipeline halts until the next daily cycle.
Circuit breaker: after three consecutive agent failures, the orchestrator pauses for 10 minutes.
Fact-check rounds: capped at 3. Articles that cannot pass fact-check land in a dead-letter queue and never publish.
Every run is logged in the orchestrator with token counts and costs.
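The three guardrails above compose into a single gate the orchestrator checks before each run. A sketch under assumed names, with the thresholds taken from the list above:

```typescript
// Sketch of the pre-run guardrail gate. Field names are hypothetical;
// the thresholds match the guardrails described above.

interface PipelineState {
  killSwitch: boolean;         // single boolean in the CMS
  spentTodayUsd: number;       // model spend since the daily reset
  consecutiveFailures: number;
  pausedUntil: number;         // epoch ms; 0 when not paused
}

const DAILY_BUDGET_USD = 20;
const FAILURE_THRESHOLD = 3;
const BREAKER_PAUSE_MS = 10 * 60 * 1000; // 10 minutes

// Returns "run" when the orchestrator may proceed, otherwise the
// guardrail that blocked it.
function gate(state: PipelineState, now: number): string {
  if (state.killSwitch) return "kill-switch";
  if (state.spentTodayUsd >= DAILY_BUDGET_USD) return "budget";
  if (now < state.pausedUntil) return "breaker";
  return "run";
}

// Called after each agent run; trips the breaker on the third
// consecutive failure and resets the counter.
function recordResult(state: PipelineState, ok: boolean, now: number): void {
  state.consecutiveFailures = ok ? 0 : state.consecutiveFailures + 1;
  if (state.consecutiveFailures >= FAILURE_THRESHOLD) {
    state.pausedUntil = now + BREAKER_PAUSE_MS;
    state.consecutiveFailures = 0;
  }
}
```

Keeping the kill-switch check first means a human flip of one boolean overrides everything else, which is the point of having it.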
What we get wrong
The system is experimental. Expect occasional voice drift, occasional missed context, occasional wooden translations. Email corrections@aiexpert.news to flag issues.
Claude and Gemini can be confident and wrong. The fact-checker catches most numeric errors; it cannot catch framing errors.
Our source set is public: arXiv, HN, lab blogs. We miss anything under embargo or behind paywalls until it becomes public.