review infrastructure for
ai-native content teams

Typescape is the structured review layer for content your agents produce. Capture human feedback as persistent rules and rulepacks your AI consumes before it writes — so corrections compound and quality gates stick.

works with any agent

Claude Code · Cursor · Codex · VS Code · OpenCode · Amp · Antigravity · Claw

ai-native from day one

Three surfaces. One contract.

Every operation available in the API is available in the CLI and MCP. Same schema, same idempotency, same structured export.

CLI

Create sessions, export findings, manage rules from your terminal. Composable in scripts, CI pipelines, and agent harnesses. First-class JSON output.

typescape session create \
  --repo ./content --file blog.md
typescape session export --format json

MCP

Model Context Protocol tools for Claude Code, Cursor, and any MCP-compatible agent. Your AI creates sessions and consumes feedback natively.

# In your agent's CLAUDE.md
mcp: typescape
tools:
  - typescape_create_session
  - typescape_export_review

REST API

Schema-versioned, idempotent, stable envelopes. Build custom integrations, webhooks, and automated pipelines. The canonical automation surface.

POST /v1/sessions
GET  /v1/sessions/:id/export
GET  /v1/rules?scope=workspace
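The page doesn't document the envelope shape, so every field name below is an assumption. As a sketch of what consuming a schema-versioned, stable envelope might look like, a client can fail fast when the schema version isn't the one it was built against:

```python
import json

# Hypothetical response envelope — the real field names may differ.
# The only claims taken from the page are "schema-versioned" and
# "stable envelopes"; everything else here is illustrative.
sample_response = json.dumps({
    "schema_version": "v2",
    "data": {"id": "sess_123", "status": "open"},
})

def unwrap(raw: str, expected_version: str = "v2") -> dict:
    """Parse an envelope and fail fast on an unexpected schema version."""
    envelope = json.loads(raw)
    version = envelope.get("schema_version")
    if version != expected_version:
        raise ValueError(f"unsupported schema version: {version}")
    return envelope["data"]

session = unwrap(sample_response)
print(session["id"])  # → sess_123
```

Pinning the expected version client-side is what makes a versioned contract safe to automate against: a breaking change surfaces as an explicit error, not silently wrong data.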

features

Everything between feedback and governance

Structured review sessions

Block-level anchoring, severity classification, magic links for non-technical reviewers. Findings survive content edits. Schema-versioned JSON export.

Rules & rulepacks

Promote findings to published rules. Compile to local rulepacks your agents load before writing. Deterministic matching on explicit properties.
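"Deterministic matching on explicit properties" could be as simple as the sketch below. The rule and block shapes are assumptions, not Typescape's actual schema; the point is that matching is exact property comparison, never fuzzy scoring:

```python
def matching_rules(rules: list[dict], block: dict) -> list[dict]:
    """Return rules whose explicit match properties all equal the
    block's values — the same input always selects the same rules."""
    return [
        r for r in rules
        if all(block.get(k) == v for k, v in r["match"].items())
    ]

# Hypothetical rulepack entries and content block.
rules = [
    {"id": "no-passive-headers", "match": {"type": "heading"}},
    {"id": "cite-claims", "match": {"type": "paragraph", "topic": "medical"}},
]
block = {"type": "paragraph", "topic": "medical", "text": "..."}

print([r["id"] for r in matching_rules(rules, block)])  # → ['cite-claims']
```

Determinism matters here because the agent loads the rulepack before writing: the same draft must trigger the same rules on every run.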

Governance memory

Feedback compounds into organizational knowledge. New hires inherit 47 rules on day one. New AI sessions load them before generating.

Magic link reviewer access

No signup, no install, no training. Reviewers click a link and start reviewing in under 60 seconds. One-time email verification.

Structured export & handoff

JSON export with integrity hashing. Agent handoff bundles. Resumable work artifacts. Plain-text rule bundles. Schema v2 with versioned contracts.
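One way integrity hashing on a JSON export can work — a sketch, not Typescape's documented algorithm — is hashing a canonical serialization, so semantically identical exports produce identical digests regardless of key order:

```python
import hashlib
import json

def canonical_hash(export: dict) -> str:
    """SHA-256 over a canonical JSON form: sorted keys, tight
    separators, so the same content always yields the same digest."""
    canonical = json.dumps(export, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = {"findings": [{"id": 1}], "session": "sess_123"}
b = {"session": "sess_123", "findings": [{"id": 1}]}  # same content, reordered keys

assert canonical_hash(a) == canonical_hash(b)
```

A receiver can then recompute the hash on a handoff bundle and refuse artifacts that were modified in transit.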

Revision lineage

Supersede chains track every revision. See what changed, why, and which rules the agent applied. Full provenance from first draft to published.

Multi-tenant workspaces

Separate rule namespaces per client. Agencies manage voice profiles that don't bleed across accounts. Unified reporting, isolated governance.

Quality packs

Pre-built bundles of steering rules, templates, and generation profiles. Install Medical Content QA, Marketing Copy, or Developer Docs Standard and get immediate value.

Immutable audit trail

Every session, finding, decision, and rule change is immutable and exportable. Version-pinned feedback. Compliance-ready for healthcare, finance, legal.

the compounding loop

Every review makes the next draft better

Four steps. Each one feeds the next. The third review is faster than the first because the system remembers what your reviewer taught it.

1

Submit for review

Pull from your repo or paste markdown. Share a magic link — reviewers open it and start. No account needed.

2

Reviewers find issues

Every finding is anchored to a block, carries a severity, and persists as structured data. Not a comment — a finding.

3

Agents consume the export

Export as schema-versioned JSON. Feed it to your agent via API, CLI, or MCP. The agent applies findings and creates the next revision.
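The consuming side might look like this sketch: turn exported findings into ordered revision instructions for the agent. The severity labels and field names are assumptions, since the page only says findings carry a severity and a block anchor:

```python
# Hypothetical severity labels — the real taxonomy may differ.
SEVERITY_ORDER = {"blocker": 0, "major": 1, "minor": 2}

def findings_to_instructions(export: dict) -> list[str]:
    """Sort findings worst-first and phrase each as an instruction
    the agent applies when drafting the next revision."""
    findings = sorted(
        export["findings"],
        key=lambda f: SEVERITY_ORDER.get(f["severity"], 99),
    )
    return [
        f"[{f['severity']}] In block {f['block_id']}: {f['note']}"
        for f in findings
    ]

export = {"findings": [
    {"block_id": "b2", "severity": "minor", "note": "Tighten this intro."},
    {"block_id": "b5", "severity": "blocker", "note": "Unverified claim."},
]}

for line in findings_to_instructions(export):
    print(line)
```

Because each finding is anchored to a block, the instructions stay precise even after the agent rewrites surrounding text.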

4

Feedback becomes rules

Spot a pattern? Promote it. Future content is checked automatically. The 47th time you give the same note is the last time.

start building in seconds

Templates for the work that matters

Agency governance

Multi-client brand voice review

Separate rule workspaces per client. Import voice profiles, run structured review on AI drafts, and export correction data the agent applies on its next run.

rules multi-tenant export

Content pipeline

AI blog post quality gate

Agent generates draft, CLI creates session, reviewer marks findings, export feeds back to agent. The review step becomes an API call in your CI pipeline.

CLI automation CI/CD

Docs review

Markdown docs with non-technical reviewers

Your docs live in Git. Your reviewers don't. Share magic links for structured feedback on markdown files without Git friction.

magic links markdown docs-as-code

Compliance

Healthcare content governance

Install the Medical Content QA pack. 15 pre-built rules for terminology, citation requirements, and claim verification. Immutable audit trail for regulatory review.

quality pack healthcare audit

Voice enforcement

Brand voice rulepack for agents

Extract voice rules from existing approved content. Compile to a local rulepack your agents load before generating. Voice drift stops here.

rulepacks voice MCP

Operations

Editorial calendar with review gates

Integrate review sessions into your publishing workflow. Track which drafts passed review, which are blocked, and which rules are being triggered most.

workflow tracking API

pricing

Start free. Scale when you're ready.

Priced per session, not per seat. Unlimited reviewers on every plan. Full CLI, API, and MCP access included.

Free

$0

15 sessions/month

  • Unlimited reviewers
  • Magic link sharing
  • JSON export + CLI + MCP + API
Get started
Most popular

Pro

$79/mo

100 sessions/month

  • Everything in Free
  • Rules workspace + publishing
  • Revision lineage
  • Repository integrations
Start free trial

Scale

$249/mo

500 sessions/month

  • Everything in Pro
  • 5 rule workspaces
  • Multi-tenant isolation
  • Team management + priority support
Start free trial

Frequently asked questions

How is this different from Google Docs comments?
Google Docs comments disappear when resolved. No structured export. No severity. No block-level anchoring that survives edits. No rules that carry forward. Typescape produces structured data machines can act on.
Do reviewers need accounts?
No. Magic links with one-time email verification. Reviewers click a link and start reviewing. No signup, no install, no training. Under 60 seconds.
Is this an AI content detector?
The opposite. AI detectors ask "is this AI?" We ask "is this good?" The actual need is quality review — does it match your voice, your standards, your domain rules? That's what Typescape does.
How is this different from Grammarly or Writer?
They enforce preset rules. Typescape captures your reviewers' judgment and makes it reusable. The rules come from your team's actual feedback, not from an algorithm. That's the difference between a linter and organizational memory.
We already have a brand style guide.
A style guide is a document. Typescape is infrastructure. Your guide tells people what to do. Typescape verifies that they did it — and learns what "good" actually means from your reviewers' behavior.

Your review feedback is the most valuable thing your team produces. Stop losing it.