Onboarding Guide

This guide walks you through your first session with aswritten — from connecting the MCP server to having expertise your AI can work from and cite.

What you’re getting

Installable expertise for your AI tools. After setup, your AI loads your perspective — a structured snapshot of your expertise, decisions, and context. Your AI cites this perspective when making recommendations and flags when it’s working outside documented knowledge.

Two paths

This guide covers building a perspective from scratch — scaffolding a repo, creating your first memory, watching extraction run, and reloading the perspective with your knowledge in it.

If a team admin has already set up the perspective and you’re just joining, this guide is the wrong one — head to the Quick Start section “If you’ve been invited to a perspective” instead. There’s no scaffolding, no GitHub setup, and no first-memory interview to run. Then come back here for “Step 5: Reload the perspective” onward, which still applies.

Prerequisites

See the Quick Start for installation steps specific to your tool.

Step 1: Connect and initialize

After installing the MCP server (see Quick Start), open your AI tool and say “aswritten”. Your AI will:

  1. Connect GitHub — Install the aswritten GitHub App on your account or org, and select which repos to connect
  2. Initialize — Scaffold aswritten files in your repo: ASWRITTEN.md (the behavioral protocol), and GitHub Actions workflows (extraction pipeline)
  3. Confirm — You’ll see a commit on your default branch with the scaffolded files

This takes about 2 minutes. The scaffolding is a one-time setup per repo.
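The scaffolding commit on your default branch looks roughly like this (an illustrative layout — apart from ASWRITTEN.md, which the steps above name, the exact file names and paths may differ between versions):

```
your-repo/
├── ASWRITTEN.md                  # behavioral protocol your AI loads
└── .github/
    └── workflows/
        └── aswritten.yml         # extraction workflow (name illustrative)
```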

Step 2: Your first perspective load

After initialization, your AI automatically loads the perspective. Since there are no memories yet, you’ll see something like:

aswritten context — Perspective loaded. Identity is unpopulated. No domains have substantive content. Entering onboarding mode.

This is expected. The empty perspective is your starting point.

Step 3: Your first memory

This is where it gets interesting. Your AI interviews you about your project — what it does, key decisions you’ve made, who’s involved, what’s in progress. Talk naturally. The AI drafts a memory document from the conversation.

What to talk about:

  • What does your project/org do? Who’s involved?
  • What are the 2-3 most important decisions you’ve made recently?
  • What do new people always ask about? What takes the longest to explain?
  • What’s your strategy? What are you trying to accomplish this quarter?

What makes a good first memory:

  • Direct quotes and reasoning — not just “we chose X” but why
  • Context about who decided what and when
  • The stuff that lives in your head and never got written down

The AI presents a draft for your review. Iterate until it captures what matters. When you approve, the AI commits it to a branch in your repo.
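As an illustration of the qualities listed above — direct quotes, who decided, and the reasoning — a first memory draft might read like the sketch below. The format, names, and details here are entirely hypothetical; aswritten’s actual memory format may differ.

```markdown
# Memory: Why we chose Postgres over DynamoDB

Decided by: Dana (CTO), during March planning.
Reasoning, in Dana's words: "Our access patterns are relational and
the team already knows Postgres. DynamoDB would save us ops work we
don't have yet."
Status: settled; revisit if write volume grows past what one
primary can handle.
```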

Step 4: Extraction and review

After you approve the memory, extraction runs synchronously (~2-3 minutes):

  1. Extraction — An LLM reads your memory and extracts structured knowledge: entities, decisions, relationships, conviction levels. Memory and transactions are committed together atomically
  2. PR opens — You’ll see a pull request with the extracted knowledge as SPARQL transactions
  3. Review — The PR shows exactly what the system understood from your words. Check that the facts are right. Adjust if needed.
  4. Merge — When you’re satisfied, merge the PR

Extraction happens during the save, so there’s no separate pipeline to watch.
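To make “extracted knowledge as SPARQL transactions” concrete, a transaction in the PR could look something like the following sketch. The prefix, predicates, and identifiers here are hypothetical placeholders, not aswritten’s actual vocabulary — the point is only that each transaction states facts (entities, decisions, conviction levels) you can read and check.

```sparql
PREFIX ex: <https://example.org/vocab/>   # hypothetical vocabulary

INSERT DATA {
  ex:decision-001 a ex:Decision ;
      ex:statement   "Use Postgres rather than DynamoDB" ;
      ex:decidedBy   ex:person-dana ;
      ex:conviction  "settled" ;
      ex:sourceMemory ex:memory-001 .
}
```

Reviewing the PR means checking statements like these against what you actually said, and editing or rejecting any the extraction got wrong.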

Step 5: Reload the perspective

After the PR merges, tell your AI to reload the perspective. Now you’ll see something different:

aswritten context — Perspective loaded. Coverage in [domains from your memory]. Key decisions: [what you told it]. Sparse: [domains you didn’t cover].

Ask your AI a question about your project. If it cites your decisions with provenance instead of giving generic advice, your perspective is working.

Step 6: Keep going

The pattern repeats naturally from here:

  • Decisions come up in conversation → your AI offers to save them as memories
  • You ask questions → answers are grounded in documented knowledge
  • Gaps surface → introspect identifies what’s undocumented and suggests who to ask
  • Knowledge compounds → each memory makes every future session smarter

Use introspect in interview mode to see which domains are thin and get targeted questions to fill them.

Adding teammates

On a Team or Organization plan:

  1. Each teammate installs the MCP server (see Quick Start)
  2. They authenticate and connect to the same repo
  3. On their first session, the AI loads the shared perspective — they immediately have access to everything the team has documented

Teammates can save their own memories. The extraction pipeline produces PRs, which means knowledge additions are reviewable before they enter the perspective. Different people contribute different domains — your architect documents architecture decisions, your PM documents strategy, your domain expert documents methodology.

For teams coming from a sales call

If you were onboarded during a sales call with aswritten, your first memory was already created during that conversation. Your AI already has a seeded perspective. Start from Step 5 — load it and see what’s there, then build from it.

Troubleshooting

“The perspective is empty” — You haven’t saved any memories yet, or the PR hasn’t merged. Save your first memory and merge the PR.

“Perspective is sparse” — Check if there are pending shares to import (the AI will check automatically). If not, use introspect in interview mode to identify what domains need memories.

“Extraction is taking too long” — Extraction runs synchronously inside the remember tool call and typically takes ~2-3 minutes. If it seems stuck, check the tool output for errors.

“I merged the PR but perspective hasn’t changed” — The AI caches the perspective. Ask it to reload explicitly, or start a new session.
