FAQ

For prospects and beta users

What does aswritten do?

aswritten lets you install human expertise into AI and verify where every claim comes from. An expert — a practitioner, consultant, author, or your own team — curates their perspective into a living body of knowledge. That perspective gets installed into AI at the moment of need, and every claim the AI makes traces back to the human source.

How is this different from a CLAUDE.md file or project instructions?

A CLAUDE.md file is a static set of instructions you write and maintain by hand. It tells the AI what to do — but it doesn’t install perspective. It provides information, not judgment. aswritten gives your AI the way an expert thinks about a domain — the opinions, the processes, the judgment calls earned through practice. The knowledge graph has conviction levels (how settled is this?), provenance (who said it and when?), and domain structure. When your thinking evolves, the perspective updates and every AI session reflects it. A CLAUDE.md can’t do that.

Why not just use RAG?

RAG retrieves documents. aswritten installs perspective. The difference: RAG gives your AI more information (here are some relevant docs). aswritten gives your AI direction (here’s how this expert actually thinks about this domain, what’s settled vs. in debate, and what judgment calls to make). The knowledge graph isn’t a document store — it’s structured expertise with conviction levels and relationships. Your AI doesn’t just have access to more data; it reasons the way the expert reasons.

How does this prevent AI hallucinations?

Every claim in the perspective traces back to a specific person, in a specific context, at a specific time. When your AI makes a recommendation grounded in installed expertise, it cites the source. When it doesn’t have expertise to draw on, it flags the gap. You can see what’s grounded vs. what’s the AI’s own reasoning. The system doesn’t eliminate hallucination — it makes it visible.

What makes aswritten different from Notion AI / Glean / other knowledge tools?

Those tools search your existing documents. aswritten installs expertise that may never have been documented — the reasoning behind decisions, the methodology that makes your best people effective, the context that new hires spend months absorbing. It structures that knowledge into a perspective that steers AI behavior, not just informs it. And it lives in Git, not a vendor cloud — branchable, reviewable, diffable, yours.

I was invited to a perspective — what do I do?

Your team admin has set everything up; you’re not building from scratch. Three things to do:

  1. Log in with the email from your invite — no sign-up needed; you’re already on your team’s tier.
  2. If you’re using Claude Desktop: create a Project, paste the contents of your team’s knowledge repo CLAUDE.md (or ASWRITTEN.md) into the Project’s custom instructions, and always start chats from inside the Project. If you don’t have repo read access, ask your team admin to paste the file directly.
  3. If you’re using Claude Code: clone the team’s knowledge repo and start Claude Code from inside it. The system prompt loads automatically.

Then type aswritten in a chat and ask whatever you’d ask a senior person on the team. The full walkthrough is at Quick Start: If you’ve been invited to a perspective.

Why isn’t aswritten responding to my prompt?

Most often this is the Claude Desktop project gotcha. Two things both have to be true:

  1. The Project’s custom instructions must contain your perspective’s CLAUDE.md / ASWRITTEN.md content (the connector alone doesn’t load the workflow).
  2. Chats must be started from inside the Project, not from Claude Desktop’s home screen. If you start from the home screen, the connector is available but the workflow instructions don’t load — and the experience is significantly worse.

If you’re sure both are true and it’s still not responding: try typing aswritten explicitly to wake the connector, or check that you’re authenticated (/mcp in Claude Code, or Settings → Connectors in Desktop).

Do I need GitHub access?

For most things, no. To use a perspective on Team or Organization, you need to be added to the team — Escher and other orgs think of this as a “license.” That alone gives you everything most people need: load the perspective, ask questions, save memories with remember, and get cited answers.

You only need GitHub access for two things:

  1. Reviewing and merging the PRs that get created when memories are saved. Someone has to merge them — but it doesn’t have to be you. Most teams have a lead who handles this.
  2. Using Claude Code or Cowork with the repo cloned locally — those workflows read the repo files directly, so you need access to clone.

Many team members will never touch GitHub. The team admin handles repo-level operations; everyone else just uses the perspective.

This is separate from “Do I need a GitHub account?” (answered below) — that’s about whether you have an account at all; this is about whether you have access to a specific team’s knowledge repo.

How do I bootstrap? What should I do first?

Follow the Quick Start to connect the MCP server. Then just tell your AI “aswritten” — it walks you through connecting your repo, scaffolding the initial files, and creating your first memory. Your first memory is a conversation: the AI interviews you about your project and saves what it learns. That’s it. The extraction pipeline handles the rest.

What makes a good memory?

Direct quotes. The reasoning behind decisions, not just the decisions. Who was involved, what alternatives were considered, and why you chose what you chose. Memories are source material — the extraction pipeline needs primary material to work with. A memory that says “we decided to use PostgreSQL” is weak. A memory that captures why, what you considered, and what tradeoffs you accepted is strong.

How do I know it’s working?

After your first memory merges, load the perspective. Your AI produces a context callout showing what it now knows. Ask it a question about your project — if the answer cites your decisions with provenance instead of giving generic advice, it’s working. Use introspect to see coverage by domain and identify where knowledge is thin. The gap analysis is itself proof of the system’s awareness.

Should I worry about PRs and GitHub?

Not really. The AI handles the Git workflow — saving memories, opening PRs, managing branches. You review the PR to see what knowledge was extracted (and approve or adjust), then merge. If you know Git, you’ll appreciate the auditability. If you don’t, the AI abstracts it. The PR review is optional but valuable — it’s your chance to see exactly what the system understood from your words.

Do I need a GitHub account?

For Expert and above, we manage your repository for you — no GitHub account required. Optionally connect GitHub for multiple repos and version control access. For Team, you connect your own GitHub account for live collaboration. For Free, no account of any kind is needed — just connect and browse.

How does the AI “think like an expert”?

When you install a perspective, the AI loads a structured snapshot of that expert’s knowledge — organized by domain with conviction levels that tell the AI what’s settled vs. what’s still in debate. The AI cites this perspective when making recommendations, flags when it’s working outside the expert’s knowledge, and can combine multiple perspectives in a single session. Install a grant writing perspective and a disability-inclusive design perspective together, and your AI reasons across both.

Pricing

What’s the difference between Team and Organization?

Scope and privacy. On Team ($400/month), your team collaborates on shared perspectives — live, not static snapshots. You own your repos and can publish to the marketplace. On Organization ($4K/month + engagement), you create a private catalogue for your organization — role perspectives, department expertise, onboarding frameworks — all internal, all auditable. We create the perspectives for you: interview your senior people, process existing docs, and deliver working perspectives. Organization also includes on-prem deployment, dedicated support, and unlimited seats. See Pricing for the full comparison.

Can I use this on-prem? (HIPAA, GDPR, compliance)

Yes. On-prem deployment is available on the Organization tier when compliance requires it. The architecture separates deterministic operations (commit, assemble — these run locally or in your infrastructure) from LLM operations (remember, introspect, generate — these route through your approved providers via your own API keys). Zero data egress: nothing leaves your network. This covers HIPAA, GDPR, data sovereignty, and ISO 27001 requirements.

Technical / architecture

How is git-native different from model-held context?

Model-held context (like Copilot Memories or Claude’s memory) is scoped to one tool, one user, one conversation thread. It’s not version-controlled, not reviewable, and it disappears if you switch tools. Git-native means your expertise is code — branchable, reviewable, diffable, portable across every AI tool. Multiple people collaborate on how your AI thinks through the same workflows they already know.

What is “perspective vs. information”?

Information is facts — what search and RAG provide. Perspective is how to interpret, frame, and apply information — the opinions earned through practice, the judgment calls, the sense for when the conventional wisdom is wrong. Wisdom is knowing when and how to apply perspective in context. Every existing AI tool operates at the information layer. aswritten operates at the perspective layer. No amount of better retrieval turns information into perspective. They are different things.

What is the extraction pipeline? How do memories become knowledge?

  1. You save a memory via the remember tool
  2. Extraction runs synchronously inside the tool call (~2-3 minutes)
  3. An LLM reads the memory and produces SPARQL transactions — structured knowledge statements (entities, relationships, conviction levels)
  4. Transactions are validated against the ontology
  5. Memory and TX files are committed together atomically — if extraction fails, nothing is committed
  6. A PR opens showing exactly what was extracted — you can review every fact
  7. You merge; the perspective updates on next load

Every fact in the graph traces back to the memory it came from.
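
The atomicity guarantee in steps 5–7 can be sketched in a few lines of Python. This is an illustrative model only, not aswritten’s implementation: every name here (extract, validate, remember, the file paths) is invented for the sketch.

```python
# Toy model of the remember pipeline's atomicity: the memory file and its
# extracted transaction file are committed together, or not at all.
# All names and shapes here are hypothetical, not aswritten's actual API.

def extract(memory_text):
    """Stand-in for the LLM extraction step: returns structured statements."""
    return [{"subject": "team", "predicate": "decided", "object": "PostgreSQL",
             "conviction": "settled"}]

def validate(transactions):
    """Stand-in for ontology validation: reject unknown conviction levels."""
    allowed = {"settled", "in-debate"}
    return all(tx["conviction"] in allowed for tx in transactions)

def remember(memory_text, repo):
    """Commit memory + TX files atomically: on any failure, commit nothing."""
    transactions = extract(memory_text)
    if not validate(transactions):
        return None  # extraction/validation failed: repo left untouched
    staged = {"memories/001.md": memory_text,
              "transactions/001.json": transactions}
    repo.update(staged)  # both files land, or neither does
    return staged

repo = {}
remember("We chose PostgreSQL for operational simplicity.", repo)
print(sorted(repo))  # both paths present, or neither
```

The point of the sketch is step 5’s guarantee: there is no state where a memory exists without its extracted transactions, which is what keeps every fact traceable.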

How does the ontology work? Can I customize it?

The ontology defines entity types (Actors, Claims, Decisions, Narratives), relationships, and conviction levels. It’s an RDF schema — the knowledge graph is structured data, not free text. The base ontology covers common knowledge patterns and has proven domain-agnostic across organizational strategy, clinical practice, engineering, and consulting contexts without modification. Custom ontology extensions are available on the Organization tier for domain-specific needs.
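
As a rough illustration only — the prefix, class names, and property names below are invented, not aswritten’s actual schema — an RDF schema of this shape might look like:

```turtle
@prefix aw:   <https://example.org/aswritten#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Entity types
aw:Actor     a rdfs:Class .
aw:Claim     a rdfs:Class .
aw:Decision  a rdfs:Class ; rdfs:subClassOf aw:Claim .
aw:Narrative a rdfs:Class .

# Relationships carrying provenance and conviction
aw:statedBy   a rdf:Property ; rdfs:domain aw:Claim ; rdfs:range aw:Actor .
aw:conviction a rdf:Property ; rdfs:domain aw:Claim .

# One extracted fact, traceable to its source
aw:use-postgres a aw:Decision ;
    aw:statedBy   aw:jane ;
    aw:conviction "settled" .
```

Because the graph is structured data like this rather than free text, validation (step 4 of the pipeline) can check every extracted statement against the schema before anything is committed.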

How does review mode work? What are lenses?

When a memory is extracted and a PR opens, the review process uses an optometrist model — iterative convergence through shifts. The system identifies domains affected by the new knowledge and generates perspective shifts: “Better through lens A or lens B?” You converge on the right interpretation through comparison, not by evaluating claims in isolation. This catches misinterpretations before they enter the perspective.
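
As a toy model of that convergence loop — the function names and the “prefer the more specific reading” heuristic are invented for illustration; in practice the reviewer’s judgment supplies the comparison:

```python
# Toy optometrist-style review: never judge an interpretation in isolation;
# always compare two and keep the better one until a single reading remains.
# Names and the heuristic are hypothetical, not aswritten's mechanism.

def converge(interpretations, prefer):
    """Reduce candidate readings pairwise: 'better through lens A or B?'"""
    best = interpretations[0]
    for candidate in interpretations[1:]:
        best = prefer(best, candidate)
    return best

shifts = ["the team banned Redis",
          "the team chose PostgreSQL over Redis",
          "the team prefers SQL stores"]

def prefer(a, b):
    # Stand-in reviewer: favor the more specific reading (here, the longer).
    return a if len(a) >= len(b) else b

print(converge(shifts, prefer))
```

The over-broad reading (“banned Redis”) loses the comparison before it can enter the perspective, which is the misinterpretation-catching behavior the lens model is after.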

What does “underneath the text” mean?

Every document — a blog post, a pitch deck, a status report — has expertise underneath it. aswritten makes that layer explicit. When you generate content from installed expertise, every factual claim can be annotated: grounded (traces to a specific memory from a specific person) or ungrounded (gap in knowledge). The cite tool maps any text against the knowledge graph and reports what’s supported vs. what’s asserted without provenance. The text is the surface; the graph is what’s underneath.
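
A toy version of that grounded/ungrounded check, with an invented graph shape and function name (not the real cite tool), might look like:

```python
# Toy cite-style check: map each claim in a text against a knowledge graph
# and label it grounded (traces to a source memory) or ungrounded.
# The graph shape and function name are illustrative, not aswritten's API.

graph = {
    # claim -> provenance (which memory it came from, and who said it)
    "we use PostgreSQL": {"source": "memories/001.md", "actor": "jane"},
}

def cite(claims, graph):
    """Return (claim, verdict) pairs; grounded verdicts carry provenance."""
    report = []
    for claim in claims:
        prov = graph.get(claim)
        if prov:
            verdict = f"grounded ({prov['actor']}, {prov['source']})"
        else:
            verdict = "ungrounded: no supporting memory"
        report.append((claim, verdict))
    return report

for claim, verdict in cite(["we use PostgreSQL",
                            "we scale to 1M users"], graph):
    print(f"{claim!r}: {verdict}")
```

The second claim is the interesting case: nothing in the graph supports it, so it is flagged rather than silently passed through, which is the “visible, not eliminated” hallucination story above.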

