Asking the Perspective Well

This is a how-to for getting useful answers from a perspective. It assumes you’ve completed Quick Start and that your AI loads the perspective when you say aswritten.

The short version: ask questions you’d ask a senior person on the team. The perspective is a structured snapshot of how that team thinks — decisions they’ve made, reasoning behind them, what’s settled and what’s still in flight, who said what. Questions that play to that shape get good answers.

What kinds of questions work well

Five patterns that pay off:

1. Decisions

“What did we decide about [a recent decision area]?” “Why did we choose [the current approach] over [the alternative]?”

Decisions are first-class in the graph. The perspective preserves not just what was decided but why — the reasoning, the alternatives considered, the tradeoffs accepted. Asking about a decision usually returns the decision itself plus the context behind it.

2. Gaps

“What’s missing from the perspective about [a domain]?” “What don’t we know about [a topic]?”

The AI can introspect the graph and tell you where coverage is thin. This is honest information — you’ll get back something like “the perspective is sparse on competitive landscape; only one memory mentions it.” Useful before you commit to a recommendation.

3. Evolution

“How has our thinking about [topic] changed over the last [month]?” “What used to be true about [thing] that we’ve since revised?”

Memories accumulate over time. The perspective can trace what changed, what superseded what, and who said what when. Useful for new joiners catching up, or for prepping a discussion that touches a previously-decided area.
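As a rough mental model (not the real data model — the field names and the "supersedes" link below are assumptions for illustration), the trace the AI walks looks something like this:

```python
from dataclasses import dataclass
from typing import Optional

# Toy memory record; "supersedes" points at the memory this one revises.
@dataclass
class Memory:
    id: str
    text: str
    author: str
    date: str
    supersedes: Optional[str] = None

def evolution_trace(memories: dict[str, Memory], latest_id: str) -> list[Memory]:
    """Walk the supersedes chain from the newest memory back to the oldest."""
    trace, current = [], memories.get(latest_id)
    while current is not None:
        trace.append(current)
        current = memories.get(current.supersedes) if current.supersedes else None
    return list(reversed(trace))  # oldest first: how the thinking evolved

memories = {
    "m1": Memory("m1", "Pricing is flat-rate.", "Ana", "2024-01"),
    "m2": Memory("m2", "Pricing moves to per-seat.", "Ana", "2024-04", supersedes="m1"),
    "m3": Memory("m3", "Per-seat, plus a consultancy tier.", "Raj", "2024-07", supersedes="m2"),
}
for m in evolution_trace(memories, "m3"):
    print(f"{m.date} ({m.author}): {m.text}")
```

The point of the sketch: an evolution answer isn’t a single fact, it’s an ordered chain — what was true, what replaced it, who made the change and when.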

4. Recommendations grounded in the perspective

“Given how we think about [domain], what would you recommend for [new situation]?” “How would [a senior person] approach [problem]?”

This is the move that gets the AI to apply the perspective, not just retrieve from it. The answer comes back as cited reasoning rather than generic best practice.

5. Framing checks

“Is this [draft / proposal / decision] consistent with what we’ve decided?” “Does this contradict anything in the perspective?”

Paste your draft into the chat and ask. The AI runs it against the graph and flags claims that don’t match what’s in the perspective. Fast way to catch drift before you ship.
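Conceptually, a framing check is a diff between your draft’s claims and the graph’s settled claims. This toy sketch (topics and stances are invented for illustration; the real tool works over the memory graph, not string pairs) shows the shape of the output:

```python
# Toy illustration: settled positions keyed by topic.
settled = {
    "pricing": "per-seat",
    "deploy": "weekly",
}

def framing_check(draft_claims: dict[str, str]) -> list[str]:
    """Flag draft claims that contradict a settled claim on the same topic.

    Topics the perspective doesn't cover pass through silently — absence
    of coverage is a gap, not a contradiction.
    """
    flags = []
    for topic, stance in draft_claims.items():
        if topic in settled and settled[topic] != stance:
            flags.append(f"'{topic}: {stance}' conflicts with settled '{settled[topic]}'")
    return flags

print(framing_check({"pricing": "flat-rate", "onboarding": "self-serve"}))
# flags the pricing claim; onboarding isn't covered, so it isn't flagged
```

Note the design choice in the sketch: only direct conflicts are flagged. Uncovered topics are a separate signal — that’s what the gap questions above are for.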

When to use cite vs. introspect vs. just asking

Three tools, three different things:

  • Just ask — most of the time. The AI loads the perspective at the start of the session and grounds responses in it automatically.
  • Cite — when you want to verify a specific text. Paste the text in and ask the AI to cite it. You get back a coverage score and per-claim provenance — what’s supported, what’s ungrounded.
  • Introspect — when you want to know what’s there and what isn’t. Ask “what does the perspective know about [domain]?” or “what gaps exist in [area]?” — the AI walks the graph and reports.

If you’re not sure: start with a regular question. If the answer feels off, follow up with “cite that” or “what’s missing from your answer?”
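To make the cite output concrete, here is a toy version of a coverage report. The matching rule (shared words) is a stand-in assumption — real provenance comes from the graph — but the shape of the result, an overall score plus per-claim support flags, is what you get back:

```python
# Toy illustration of a "cite" report. A claim counts as supported here
# if it shares two or more words with any memory (a stand-in heuristic).
def _words(s: str) -> set[str]:
    return set(s.lower().replace(".", "").replace(",", "").split())

memories = [
    "We decided on per-seat pricing in April.",
    "Deploys happen weekly, Thursday mornings.",
]

def cite(claims: list[str]) -> dict:
    """Return an overall coverage score plus per-claim support flags."""
    report = [
        {"claim": c, "supported": any(len(_words(c) & _words(m)) >= 2 for m in memories)}
        for c in claims
    ]
    coverage = sum(r["supported"] for r in report) / len(report)
    return {"coverage": coverage, "claims": report}

result = cite(["Pricing is per-seat.", "We target enterprise first."])
# first claim supported, second ungrounded; coverage is 0.5
```

Read a low coverage score the same way in the real tool: the ungrounded claims are the ones to verify or save as new memories.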

Reading the answer

Answers from a grounded AI look different from generic AI output. Things to notice:

Citations. Every substantive claim should have a footnote tracing back to a memory. Click through to verify; the source is the truth.

Conviction signals. The AI weighs how settled each claim is. If you see hedging like “still being figured out” or “this is the working assumption” — that’s a signal the underlying claim is at notion or claim level (less settled), not decision or principle level (more settled). Weigh the answer accordingly.
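The four conviction levels form an ordering. The level names come from the docs above; the numeric ranks in this sketch are an assumption purely for illustration:

```python
# Conviction levels, least to most settled. Ranks are illustrative only.
CONVICTION = {"notion": 1, "claim": 2, "decision": 3, "principle": 4}

def more_settled(a: str, b: str) -> bool:
    """True if conviction level `a` is more settled than level `b`."""
    return CONVICTION[a] > CONVICTION[b]

more_settled("decision", "claim")    # True: a decision outranks a working claim
more_settled("notion", "principle")  # False: a notion is the least settled
```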

Honest gaps. A grounded AI will tell you what it doesn’t know. If you ask about something the perspective hasn’t covered, you should get an explicit “the perspective doesn’t cover this” rather than a confident-sounding made-up answer. If you don’t get that — and the answer feels generic — there’s a setup issue worth checking (most often the Claude Desktop project gotcha, see FAQ).

“I asked something and got a weird answer”

Common causes, in order of frequency:

  1. The Claude Desktop project gotcha. Custom instructions aren’t loaded, or the chat was started from the home screen instead of inside the project. See FAQ.
  2. The perspective doesn’t cover what you asked. The AI should tell you this; if it didn’t, ask: “How well-grounded is that answer? What memories did you cite?”
  3. The question was ambiguous. Reframe with more specifics. “What’s our pricing strategy” is broader than “What did we decide about pricing for the consultancy tier last quarter?”
  4. The perspective contains the wrong thing. Memories drift, decisions get superseded. If the answer cites something that’s no longer true, that’s a forget or new-remember opportunity — flag it to your team lead, or save a corrective memory yourself.

Save what’s missing

If you ask something and the perspective doesn’t have a good answer, you’ve found a gap. Two options:

  • You know the answer. Use remember to save a memory capturing the missing knowledge. The AI will help draft it from your conversation. Someone (your team lead, or you) reviews and merges; the perspective grows.
  • You don’t know the answer. Flag the gap to whoever does — usually the person who’d be the source. The next time they’re in a session, the AI can prompt them to fill it.

Either way, the act of asking surfaces what’s missing. The perspective gets richer the more it’s used.


