Product Manual

What is aswritten?

aswritten lets you install human expertise into AI and verify where every claim comes from. It gives your AI a structured, version-controlled perspective — an expert’s decisions, methodology, rationale, and relationships — so it thinks like that expert instead of hallucinating.

The product is an MCP server. You connect it to Claude Code, Claude Desktop, or any MCP-compatible AI tool. From there, you talk to your AI normally. aswritten works underneath — grounding responses in installed expertise and citing the source.

Everything lives in Git. Memories go in, knowledge comes out, and every fact traces back to a person and a context.


Getting Started

See the Quick Start for installation and first-time onboarding.


Core Concepts

Expertise as a Knowledge Graph

Expertise is the sum of what you know — not just documents, but decisions, reasoning, relationships, and context. aswritten stores this as an RDF knowledge graph in your Git repo.

Unlike a wiki or a knowledge base, an aswritten perspective is:

  • Narrative-driven — memories are written in natural language, preserving nuance and reasoning
  • Provenance-tracked — every fact traces back to who said it, when, and in what context
  • Version-controlled — knowledge branches, merges, and has full audit history via Git
  • Machine-readable — the RDF graph structure means AI tools can query and reason over it
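To make provenance tracking concrete, here is a toy sketch (ours, not aswritten's actual schema — every field name here is hypothetical) of what a fact with provenance might carry:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Fact:
    """Hypothetical shape of a provenance-tracked claim."""
    statement: str   # the claim itself
    author: str      # who said it
    stated_on: date  # when it was said
    context: str     # where it came from, e.g. a memory file

fact = Fact(
    statement="We ship the MCP server before the web UI",
    author="jane",
    stated_on=date(2024, 5, 1),
    context=".aswritten/memories/2024-05-01-roadmap.md",
)
print(fact.author, fact.context)
```

The real graph stores this as RDF triples rather than objects, but the point is the same: no claim exists without an author and a context attached.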

Your Perspective

When you start a session, your AI loads your perspective — a structured snapshot of the knowledge graph that grounds every response. The perspective covers domains like Strategy, Product, Architecture, Organization, and Proof.

Conviction Levels

Every claim in the graph carries a conviction level — how settled the knowledge is:

  • notion — First mention, casual observation, untested hypothesis. Easily moved.
  • claim — Asserted, still validating. Committed but movable with evidence.
  • decision — Settled. Requires significant counter-evidence to revisit.
  • principle — Bedrock. Career-arc level conviction.
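The four levels form an ordered scale: each step up demands more evidence, and each step down demands counter-evidence. A toy sketch of that ordering (our illustration, not the product's internals — `promote` is a hypothetical helper):

```python
from enum import IntEnum

class Conviction(IntEnum):
    """Hypothetical encoding of aswritten's conviction scale."""
    NOTION = 1     # untested hypothesis, easily moved
    CLAIM = 2      # asserted, still validating
    DECISION = 3   # settled; needs significant counter-evidence
    PRINCIPLE = 4  # bedrock, career-arc level

def promote(level: Conviction) -> Conviction:
    """Move a claim one step up the scale, capped at PRINCIPLE."""
    return Conviction(min(level + 1, Conviction.PRINCIPLE))

print(promote(Conviction.NOTION).name)     # CLAIM
print(promote(Conviction.PRINCIPLE).name)  # PRINCIPLE
```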

Memories and Transactions

A memory is a source document — a conversation transcript, meeting notes, a written reflection. Memories live in .aswritten/memories/.

A transaction is the structured knowledge extracted from a memory — RDF/SPARQL statements that update the graph. Transactions live in .aswritten/tx/.

The relationship: you write memories (natural language), the extraction pipeline produces transactions (structured knowledge), and the pipeline assembles transactions into your perspective.
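That flow can be pictured with two stand-in functions (entirely hypothetical names and logic — the real pipeline uses an LLM and emits SPARQL, not line splitting):

```python
def extract(memory_text: str) -> list[str]:
    """Stand-in for extraction: one 'transaction' per non-empty line."""
    return [line.strip() for line in memory_text.splitlines() if line.strip()]

def assemble(transactions: list[str]) -> dict[str, int]:
    """Stand-in for perspective assembly: count facts per domain prefix."""
    perspective: dict[str, int] = {}
    for tx in transactions:
        domain = tx.split(":", 1)[0]
        perspective[domain] = perspective.get(domain, 0) + 1
    return perspective

memory = "Strategy: ship MCP first\nProduct: cite before publish\n"
txs = extract(memory)
print(assemble(txs))  # {'Strategy': 1, 'Product': 1}
```

The shape is what matters: natural language in, structured statements out, statements folded into one queryable perspective.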


Daily Workflow

Session Start: Perspective

Every session begins by loading the perspective: your AI calls perspective to fetch the current snapshot. This grounds the entire session — without it, responses are generic.

After loading, the AI produces a context callout showing what it knows:

aswritten context — Perspective loaded. Strong coverage in Strategy and Product. Architecture is sparse. No coverage of competitive landscape.

Saving Knowledge: Remember

When a decision is made, a perspective is shared, or important context comes up in conversation, the AI offers to save it as a memory.

The workflow:

  1. Draft — The AI drafts a memory from the conversation, preserving direct quotes and reasoning
  2. Review — You review and refine the draft. Memories are closer to PRs than commits
  3. Save — The memory commits to a topic branch in your repo
  4. Extract — Extraction runs synchronously (~2-3 minutes), producing SPARQL transactions
  5. PR Review — A PR opens showing exactly what was extracted
  6. Merge — You review and merge. Your perspective updates on next load

Finding Gaps: Introspect

Introspect analyzes what’s documented and what’s missing. It surfaces knowledge gaps by domain and suggests questions to fill them.

Modes:

  • analysis — Coverage metrics, structural health, domain breakdown
  • interview — Gaps formatted as questions you can answer to fill them
  • working_memory — Score a draft memory against identified gaps before saving

Verifying Content: Cite

Cite takes any text — a blog post, a pitch deck, a status report — and checks every claim against the knowledge graph. It returns a coverage score and flags unsupported claims.

Use it to:

  • Verify a document before sending it externally
  • Find claims that should be saved as memories
  • Check whether your AI’s output is grounded or hallucinated
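Conceptually, citation checking reduces to comparing a document's claims against the graph. A toy version (not the real scoring, which matches semantically rather than by exact string):

```python
def cite(claims: list[str], graph: set[str]) -> tuple[float, list[str]]:
    """Return (coverage score, unsupported claims) against a toy
    'graph' modeled as a set of known statements."""
    unsupported = [c for c in claims if c not in graph]
    coverage = 1 - len(unsupported) / len(claims) if claims else 1.0
    return coverage, unsupported

graph = {"We ship MCP first", "Pricing starts at $81/mo"}
score, missing = cite(["We ship MCP first", "We have 10k users"], graph)
print(score, missing)  # 0.5 ['We have 10k users']
```

A low score does not mean the text is wrong — it means the graph cannot vouch for it, which is exactly the signal for "save this as a memory or cut it."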

Review Mode

The Review Cycle

When a memory is saved and extracted, a PR opens. The review process uses an optometrist model — iterative convergence through shifts.

Phases:

  1. TX Summary — What this PR changes about what the org believes
  2. Zone Identification — Cluster changes by domain, pick 2-3 to exercise
  3. Test Document — Select a document from the repo or the conversation, or generate a fragment from the perspective
  4. Baseline — Compare the document before and after the new knowledge
  5. Shifts — Generate 2-4 targeted perspective shifts. “Better through A or B?”
  6. Fact Review — Walk through extracted entities, conviction levels, relationships
  7. Wrap-up — Converged draft, review memory saved, merge

Finding Open Reviews

Call review (no args) to see PRs waiting for your review. Pick one and call review with the PR number.

Post-Merge Doc Check

After merging, the system can suggest which documents may need updating based on the merged knowledge.


Multi-Repo and Team Features

Multiple Repositories

aswritten works across repos. Each repo has its own knowledge graph. Use switch-repo to change which repo the AI is working with, or pass owner and repo to any tool call.

Directory-Scoped Perspective

For client isolation — consultancies, agencies, or multi-tenant setups — use directory-scoped loading. Each directory under your repo can have its own .aswritten/ ledger. Load with dir=clients/acme to get that client’s perspective merged with the root.

Directory scoping is for data isolation, not topic organization. Use introspect for semantic filtering.
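One way to picture the merge — assuming, hypothetically, that directory-scoped entries override the root on conflict (the actual merge semantics may differ):

```python
def load_perspective(root: dict[str, str], dir_scope: dict[str, str]) -> dict[str, str]:
    """Hypothetical sketch: overlay a directory-scoped ledger on the root.
    Shared facts come from root; client-specific facts from the directory."""
    merged = dict(root)
    merged.update(dir_scope)  # assumed: directory entries win on conflict
    return merged

root = {"pricing_model": "per-seat", "methodology": "optometrist reviews"}
acme = {"pricing_model": "flat-fee"}  # from clients/acme/.aswritten/
merged = load_perspective(root, acme)
print(merged["pricing_model"])  # flat-fee
```

Either way, the isolation guarantee is the point: acme's ledger never leaks into another client's perspective.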

Sharing Knowledge

Share expertise with other users via share. The recipient gets a notification and can import the bundle into their own repo with import.

Seats and Plans

  • Free — Browse and install perspectives. Experience the value.
  • Expert ($81/mo) — Build your own perspective. All core tools.
  • Team ($400/mo) — Distribute your expertise. Team of 10. Marketplace publishing.
  • Organization ($4K/mo + engagement) — Private catalogue. Professional perspective creation. On-prem available for compliance (HIPAA, GDPR).

Manage your plan with manage-account. Check usage with check-budget.


The Extraction Pipeline

How It Works

  1. You save a memory via the remember tool
  2. Extraction runs synchronously inside the tool call (~2-3 minutes)
  3. An LLM reads the memory and produces SPARQL transactions
  4. Transactions are validated against the ontology
  5. Memory and TX files are committed together atomically — if extraction fails, nothing is committed
  6. A PR opens with the memory and transactions
  7. You review and merge
  8. Your perspective updates on next load
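The atomicity in step 5 can be pictured as a two-phase commit: stage everything, then commit only if extraction succeeded. A sketch with hypothetical names, not the real implementation:

```python
def save_memory(memory: str, run_extraction) -> dict[str, str]:
    """Stage memory and transactions, then commit both or neither."""
    staged = {"memories/new.md": memory}
    try:
        staged["tx/new.rq"] = run_extraction(memory)  # may raise
    except Exception:
        return {}      # extraction failed: nothing reaches the repo
    return staged      # both files land in a single commit

def failing_extraction(memory: str) -> str:
    raise RuntimeError("LLM error")

ok = save_memory("We decided X", lambda m: "INSERT DATA { ... }")
bad = save_memory("We decided Y", failing_extraction)
print(sorted(ok), bad)  # ['memories/new.md', 'tx/new.rq'] {}
```

This is why a failed extraction never leaves an orphaned memory behind: the memory and its transactions share one fate.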

Timing

Extraction runs synchronously inside the remember tool call (~2-3 minutes). After remember returns, reload the perspective to pick up new knowledge.

The Manifest

.aswritten/manifest.json tracks pipeline state — which memories have been processed, which transactions are current. This is managed automatically.
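As an illustration of the bookkeeping role (a hypothetical shape — the real file's schema may differ):

```python
# Hypothetical manifest contents -- illustrative only.
manifest = {
    "processed_memories": ["memories/2024-05-01-roadmap.md"],
    "current_transactions": ["tx/2024-05-01-roadmap.rq"],
}

def pending(all_memories: list[str], manifest: dict) -> list[str]:
    """Memories saved to disk but not yet extracted into transactions."""
    done = set(manifest["processed_memories"])
    return [m for m in all_memories if m not in done]

print(pending(["memories/new-note.md", "memories/2024-05-01-roadmap.md"], manifest))
# ['memories/new-note.md']
```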


The Ontology

The knowledge graph uses an RDF ontology that defines entity types, relationships, and constraints. Call ontology to see the current schema.

Custom Ontology

Organizations can extend the base ontology for domain-specific needs. (Coming soon — contact us for enterprise customization.)


File Structure

.aswritten/
├── memories/          # Source documents (markdown)
├── tx/                # Extracted transactions (SPARQL)
└── manifest.json      # Pipeline state

ASWRITTEN.md               # AI behavioral protocol

Tool Reference

Core tools

These are the five you use every day:

| Tool | Purpose | When to Use |
|------|---------|-------------|
| perspective | Load your perspective | Session start, after branch switch, after extraction completes |
| cite | Verify text against expertise | Before publishing, checking AI output |
| introspect | Find knowledge gaps | Before saving memories, assessing coverage |
| remember | Save knowledge to expertise | After decisions, interviews, discussions |
| review | Review and refine knowledge PRs | When extraction produces a PR awaiting review |

Account and repo management

| Tool | Purpose | When to Use |
|------|---------|-------------|
| init-repo | Initialize repo for expertise | First-time setup |
| manage-repos | Install GitHub App | Connecting new repos |
| list-repos | See connected repos | Before switching repos |
| switch-repo | Change active repository | Multi-repo workflows |
| manage-account | Upgrade plan, add API key | Account management |
| check-budget | View API usage | Monitoring usage |

Sharing and knowledge management

| Tool | Purpose | When to Use |
|------|---------|-------------|
| share | Share perspective with another user | Onboarding teammates, cross-org sharing |
| import | Import shared perspective | Receiving shared knowledge |
| forget | Retract knowledge from a perspective | When knowledge is outdated or incorrect |
| ontology | View RDF schema | Understanding graph structure |

Troubleshooting

“The perspective is empty”

You haven’t saved any memories yet, or the PR hasn’t merged. Save your first memory and merge the PR.

“Perspective returns sparse results”

Check if there are pending shares to import (import with no arguments). If not, use introspect in interview mode to identify what domains need memories.

Extraction is taking too long

Extraction runs synchronously inside the remember tool call and typically takes ~2-3 minutes. If it seems stuck, check the tool output for errors.

BYOK (Bring Your Own Key)

Add your own OpenRouter API key via manage-account to avoid platform usage limits.

