Context Compounds

Mar 3, 2026

In eight months at Ascend, we shipped:

  • A full rebrand including a new visual identity and new messaging
  • A business model overhaul with new pricing, packaging, and segment strategy
  • A Claude-native growth engine running across Meta and LinkedIn ads (~5x ROAS), email and LinkedIn outbound, warm introductions and partnerships
  • An automated flight search tool that explores 1,000+ routing permutations per search and can find deep discounts on cabins through creative routing strategies
  • An end-to-end customer trip booking platform covering search through to ticketing
  • An EOS framework to manage our company's goals, priorities, and tasks
  • A financial and operating model rebuild covering 30+ sheets of P&L, cash flow, and scenario planning

Each project was faster than the last, because each one's output became context for the next.

  • ICP Research → Rebrand: 6 customer segments identified from 4,582 bookings drove the entire rebrand, with messaging, positioning, and visual identity all mapping back to who the research said we were serving.
  • Rebrand → Growth Engine: real customer language extracted from sales call transcripts became the ad copy running across Meta and LinkedIn at ~5x ROAS.
  • EOS Scorecard → Financial Model: growth targets reviewed every Monday became the inputs the financial model consumed to project P&L and cash flow forward.
  • EOS Scorecard → Weekly Reports: structured decision logs from every L10 meeting fed directly into AI-generated weekly growth reports.
  • L10 Meetings → Roadmaps: issues raised in L10 meetings became the source of our product and automation roadmaps.

How context compounds

Two layers make this work: a context layer and an execution layer. They feed each other. Better context produces better execution, and every execution enriches the context for next time.

Context layer
Structured markdown files covering company identity, product, customers, financials, and decisions. Auto-updated through Fireflies transcription, weekly EOS meetings, and structured meeting formats. The background knowledge that every AI session starts with.
Execution layer
Slash commands, skills, sub-agents, and automations that consume the context and do real work. /daily-ad-review, /client-trip-update, /scorecard. Fireflies to Notion to Asana to Slack pipelines. Each execution produces structured output that feeds back into the context layer.

We build both layers on Claude Code, and the primitives around it are what make the compounding work:

  • CLAUDE.md files give every session persistent project context
  • Skills encode reusable workflows as full operational playbooks
  • Slash commands turn recurring operations into one-line invocations
  • Sub-agents with specialist roles handle different parts of the codebase or different domains
  • Hooks wire up automated checks and processes that run on every action
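As a concrete sketch: in Claude Code, a custom slash command is a markdown prompt file under .claude/commands/, so a command like /daily-ad-review might look something like this (the contents and referenced file paths are illustrative, not our actual command):

```markdown
<!-- .claude/commands/daily-ad-review.md -->
Review yesterday's Meta and LinkedIn ad performance.

1. Load context/brand/guidelines.md and context/customers/segments.md.
2. Pull the latest spend and ROAS figures from the daily export.
3. Flag any ad set below target ROAS and draft replacement copy
   using real customer language from the transcript library.
```

Because the command file is just markdown in the repo, it can reference the same context files every other session loads.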

The context layer

The context layer is structured markdown — company identity, product specs, customer research, financials, decision logs — all stored in files that any AI session can load directly. Fireflies transcribes every meeting. The L10 — the weekly leadership meeting at the heart of EOS — produces a record of every major issue discussed and resolved, in a format that both humans and AI can consume.
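For instance, an L10 decision entry might be logged in a shape like this (the fields and file paths are illustrative assumptions, not our exact template):

```markdown
## L10: 2026-02-23

**Issue:** Paid CAC trending above target on LinkedIn.
**Decision:** Shift 20% of budget to Meta retargeting; revisit in two weeks.
**Owner:** Growth
**Context updated:** context/eos/scorecard.md
```

A human can skim the log in a minute; an AI session can load the whole history and reason over it.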

Each working directory has a CLAUDE.md file referencing separate context files by domain. Brand guidelines, ICP definitions, and scorecard targets all live as markdown alongside the code. The directory structure itself becomes the documentation.
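A minimal sketch of what a root CLAUDE.md might look like, assuming illustrative file names (Claude Code can pull referenced files into context via @-imports):

```markdown
<!-- CLAUDE.md at the repo root -->
# Ascend working context

Load the relevant domain files before starting work:

- Brand and voice: @context/brand/guidelines.md
- Customers and ICP: @context/customers/segments.md
- Scorecard targets: @context/eos/scorecard.md
- Decision logs: see context/eos/decisions/
```

The index stays small; the domain files carry the detail.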

The execution layer

The execution layer consumes all of this context and produces structured output that feeds back in. Slash commands like /daily-ad-review and /weekly-growth-report generate reports informed by the full context layer. Skills hold operational playbooks that engineers and non-engineers alike follow. Sub-agents handle specialist domains. And beyond Claude Code, Make and n8n automations connect the broader tool stack so that every tool can feed context to every other.
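One way the feedback step can be sketched: an execution like /weekly-growth-report produces structured sections, and a small filing step writes them back into the context layer as dated markdown so future sessions can load them. The directory layout and naming below are assumptions, not our actual pipeline:

```python
from datetime import date
from pathlib import Path

def file_report(context_dir: Path, title: str, sections: dict[str, str]) -> Path:
    """Write an execution's output as a dated markdown file in the context layer."""
    context_dir.mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()
    path = context_dir / f"{stamp}-{title.lower().replace(' ', '-')}.md"
    lines = [f"# {title} ({stamp})", ""]
    for heading, body in sections.items():
        # Each section becomes a markdown heading plus its body text.
        lines += [f"## {heading}", body, ""]
    path.write_text("\n".join(lines))
    return path

# Usage: file_report(Path("context/reports"), "Weekly Growth Report",
#                    {"Scorecard": "...", "Decisions": "..."})
```

The point is not the script itself but the shape of the loop: every execution leaves a dated, structured artifact behind where the next session will find it.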

Orchestrated by Make and n8n automations, these pipelines cover:

  • Meeting transcripts stored as structured docs
  • Tasks created from decisions and action items
  • Notifications when tasks complete or update
  • Lead alerts that fire within 5 minutes
  • Client updates and trip notifications
  • AI reading and writing company context

The guardrail is matching the review process to the blast radius. Internal tools where a human reviews the output before it reaches a client can be built by anyone with domain knowledge, and non-engineers regularly submit prototypes built with Claude Code. Engineering's role is shifting towards planning, reviewing, and educating the rest of the team on how to build well. Payments, client data, and authentication stay engineering-led with proper review and testing pipelines. As the context and tooling mature, the boundary shifts.

The case for smaller, context-rich teams

Adding a new person to a team means onboarding from zero. They need to learn the company, the product, the customers, the codebase, the processes, and the history of decisions already made. There's a significant lag between joining and contributing meaningfully, and even then humans don't retain context perfectly — they forget details, miss updates, develop blind spots. The traditional response is more people, more meetings, and more documentation that nobody reads.

Machine-readable context changes this equation. An AI session starts with perfect recall of everything in its context window: the full V/TO, current rocks, scorecard targets, recent decisions, brand guidelines, codebase structure. It doesn't need onboarding and it doesn't forget. The context gets better every week as new decisions are logged, new metrics reviewed, and new outputs produced.

The compounding curves are different. A new team member improves roughly linearly over months. Machine-readable context improves with every session, every meeting transcription, every decision logged, and every improvement benefits every future session across the whole company simultaneously.

This doesn't mean replacing people. It means the optimal team is smaller than you'd expect, with more investment in documentation, planning, and keeping everything organised than tradition would suggest. In many cases the incremental return on machine-readable context is higher than the return on human-readable context, because machines actually consume it consistently, whereas humans rarely read and retain documentation as thoroughly.

Where we are now

Ascend is at $27.6m ARR with 38% growth. Paid acquisition channels are profitable and scaling. Operations run on EOS with structured documentation feeding every AI session. Internal tools are built by non-engineers using Claude Code. The financial model and scorecard are the single source of truth for planning and reporting.

The thesis holds: invest in the context layer, invest in the execution layer, and they compound on each other. Every quarter of structured documentation makes every AI session more effective. Every new slash command or skill produces output that enriches the context. The flywheel turns.

If you want to chat about any of this, reach out at omarismailb@gmail.com.