Every AI tool your team uses is only as good as the context behind it.
The AI industry has started calling this discipline "context engineering" — the art of getting the right information to AI systems at the right moment. Andrej Karpathy describes it simply: the LLM is the CPU, and the context window is the RAM. The challenge is not getting data in. It's getting the right data in, structured correctly, at the right moment.
The common assumption is that integrations solve this. Every AI platform now offers connectors — plug in Slack, Notion, Google Drive, your CRM, and the model can see everything. But access is not context. Giving AI access to 50,000 Slack messages and 200 stale Notion pages is not the same as giving it a clear picture of who you are, what matters this quarter, and what was decided last Monday.
Connectors solve the access problem. They don't solve the structure problem.
At Ascend, we solved this almost by accident.
Structured operations as AI infrastructure
When I joined Ascend as COO, the company had $20m ARR, 650+ clients, and no operational structure. No shared metrics, no decision logs, no documented priorities. I implemented EOS — the Entrepreneurial Operating System — to fix this.
What I didn't expect: EOS produces exactly the kind of structured, current documentation that AI needs to operate effectively. Every document is structured by design and updated on a predictable cadence. It is, in effect, a context engineering framework.
The full system cascades from permanent company identity down to weekly decisions, each layer getting more specific and more frequently updated.
The four context components
Four EOS components do the heavy lifting for AI: the V/TO (the Vision/Traction Organizer, which captures company identity and strategy), the scorecard, the Rocks (quarterly priorities), and the decision log.
This approach mirrors what Anthropic's engineering team calls "just in time" context — agents maintain lightweight references to information sources and load data dynamically at runtime, rather than stuffing everything into the window upfront. The CLAUDE.md file acts as an index: it points Claude Code to the V/TO, the scorecard, the rocks, the decision log, and whatever project-specific context is relevant. The agent loads what it needs for the task at hand.
A growth session pulls brand guidelines and campaign targets. A product session pulls the roadmap and sprint backlog. A finance session pulls the scorecard and runway metrics — all from the same structured base, without anyone manually assembling the context each time.
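As a concrete sketch of that index pattern, a CLAUDE.md can hold pointers rather than content (the file names and paths below are illustrative, not our actual repository layout):

```markdown
# Company context index (illustrative sketch)

## Always relevant
- Vision/Traction Organizer: docs/vto.md (updated quarterly)
- Scorecard definitions and targets: docs/scorecard.md (updated weekly)
- Current Rocks: docs/rocks-current.md (updated quarterly)
- Decision log: docs/decisions/ (one file per L10, updated weekly)

## Load per task
- Growth sessions: docs/brand-guidelines.md, docs/campaign-targets.md
- Product sessions: docs/roadmap.md, docs/sprint-backlog.md
- Finance sessions: docs/scorecard.md, docs/runway.md
```

The index stays small; the agent follows the pointers and loads only the files relevant to the session at hand.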
The conventional thinking is that structured processes slow you down. With AI, the opposite is true — structure is what makes speed possible, because it gives every tool the context to operate without constant hand-holding.
The weekly loop
The L10 (EOS shorthand for the Level 10 Meeting) is the weekly leadership meeting at the heart of EOS, and the engine that keeps the context layer fresh. Every Monday, the full senior team sits down for 90 minutes with a fixed agenda. What makes the L10 so useful for AI is the structure: every meeting follows the same format, produces the same outputs, and feeds into the same systems, week after week.
Friday: A Make.com automation sends the L10 Notion document to Slack. Each leader fills in their section before Monday — wins from the week, headlines, and any issues they want the group to tackle.
Monday: The L10 runs. The first 5–10 minutes are silent reading. Then the team spends the bulk of the meeting on the issues list, using IDS: Identify the real root cause, Discuss openly with a time limit, and Solve with a clear decision, owner, and deadline. Every decision gets captured.
Post-meeting: Fireflies transcribes the entire session. This is one of the most underrated parts of the system: a verbatim record of the full senior leadership team reviewing the week's highlights and working each issue down to its root cause with IDS. That is incredibly rich context for AI. Not a summary, not someone's notes, but the actual reasoning behind every decision. Decisions are logged to Notion with context, and action items become Asana tasks with owners and deadlines.
Following Monday: The next L10 reviews last week's to-dos, checks the scorecard, and assesses Rock progress. The loop closes.
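Because the loop produces the same artifacts every week, each IDS outcome can be captured as a structured record. A minimal sketch in Python (the field names are our own illustration, not a Notion or Asana schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Decision:
    """One IDS outcome from an L10: what was decided, by whom, and why."""
    issue: str       # the issue as identified (root cause, not symptom)
    decision: str    # what was agreed in the Solve step
    reasoning: str   # context pulled from the transcript
    owner: str       # the single accountable owner
    due: date        # deadline for the resulting to-do
    logged: date = field(default_factory=date.today)

def to_log_entry(d: Decision) -> dict:
    """Flatten a Decision into the shape a Notion/Asana sync could consume."""
    return {
        "issue": d.issue,
        "decision": d.decision,
        "reasoning": d.reasoning,
        "owner": d.owner,
        "due": d.due.isoformat(),
        "logged": d.logged.isoformat(),
    }
```

The point of the `reasoning` field is the part most decision logs skip: it is what lets an AI session months later understand not just what was decided, but why.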
This is exactly what AI needs — a recurring, structured source of company context that gets richer every week. Before the L10, someone had to write up notes by hand and hope decisions got executed. Now the operating rhythm generates structured data as a byproduct, and that data feeds directly into every AI tool we run.
The context creator
Context engineering is still a new term. At most companies, nobody owns it.
Someone owns the code. Someone owns the data. Someone owns the design system. But the business context that AI operates on — the priorities, the decisions, the metrics, the customer understanding — has no owner. It's scattered across tools and updated inconsistently, if at all.
I think we're close to seeing a new role emerge: the context creator. Not a prompt engineer — prompts are ephemeral. Not a data engineer — this isn't about pipelines and schemas. A context creator structures, maintains, and prioritises the business knowledge that AI systems run on.
At Ascend, this work currently falls to me as COO. But the pattern is clear enough that it could be its own function:
- Maintain the V/TO and ensure it reflects current strategy
- Keep the scorecard honest — right metrics, right targets, right owners
- Log L10 decisions with enough reasoning that an AI session six months from now can understand why
- Curate the context files that Claude Code loads — decide what's in, what's out, what needs updating
- Bridge the gap between what the business knows and what the AI can see
The compounding effect is real. Each week of structured documentation makes every AI session that follows more effective. A growth report that required 30 minutes of context-setting in October now runs as a single slash command, because every target, channel, and benchmark is already loaded.
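Claude Code supports custom slash commands defined as markdown files under `.claude/commands/`. A hypothetical growth-report command (the command name and the paths it references are illustrative) could look like:

```markdown
<!-- .claude/commands/growth-report.md, invoked as /growth-report -->
Read docs/vto.md, docs/scorecard.md, and docs/campaign-targets.md.
Produce this week's growth report:
1. Compare each growth metric on the scorecard to its target.
2. Flag anything off track and link it to open issues in the decision log.
3. Match the format of last week's report in docs/reports/.
```

The command itself contains no data; it only names the structured sources, which is why it keeps working as the underlying files are updated each week.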
Whoever fills this role — whether it's a COO, a Chief of Staff, or a dedicated context creator — will quietly become one of the most leveraged people in the company.
Where we are now
We're two quarters into running EOS at Ascend. The L10 has run every Monday since October 2025. The most tangible proof that it's working is in how the Rocks have evolved from one quarter to the next.
The scorecard has become the team's shared definition of reality — 13 numbers reviewed every Monday, each with an owner and a target. When something goes off track, it surfaces in days rather than months.
The context compounds. Growth automations reference the ICP and brand guidelines from the V/TO. Guru references client preferences documented through the same system. The scorecard defines what "on track" means for every alert and report. Two quarters of structured decisions, priorities, and metrics now sit behind every AI tool we run.
If you're building with AI and looking for a framework to run your company, EOS is worth serious consideration. The accountability and focus are valuable on their own terms. But the structured documentation it produces — current, owned, and updated on a predictable cadence — is exactly what turns AI from an impressive demo into genuine operating leverage.