This blog post took about 90 minutes to write — not because the ideas are simple, but because Claude Code already had the context from every project described in it. The ICP research, the rebrand, the ad stack, the EOS rollout, the financial model, Guru. Each one produced structured artefacts that became context for the next. By the time I sat down to write this, Claude Code could reference all of it.
That's the thesis of this post, and of the way we've built Ascend over the past eight months.
I joined Ascend (formerly FlyFlat) as COO last August. The company had $20m in annual recurring revenue, 650+ clients including Google Ventures, Ramp, and Bessemer Venture Partners, and a 24/7 travel concierge that people genuinely loved. The brief was to build operational infrastructure and scalable growth — the kind of work that normally requires hiring several specialist teams.
Instead, we embedded Claude Code into the operating DNA of the company. Most companies bolt AI onto existing workflows: "use ChatGPT to draft emails faster." That's fine, but it captures maybe 5% of the value. The real difference is organisational infrastructure — structured documentation, readable codebases, codified operational knowledge — that gives Claude Code the context it needs to be genuinely useful.
There's a conversation forming around this idea. Andrew Chen at a16z, Lenny Rachitsky, and others have started using the term "Claude-native" to describe companies that structure themselves around AI from the ground up. The consensus forming is that the companies that win won't have the best models — they'll have the best context. We've been living that thesis at Ascend for eight months now, and the results speak for themselves: $27.6m ARR, 38% growth, profitable acquisition channels, automated operations, and internal tooling built by non-engineers.
The work falls into three pillars — Growth, Product, and Operations — with Claude Code as the connecting layer underneath.
The connecting thread across all three is compounding: each project produces structured artefacts that become context for the next one.
Growth: from word-of-mouth to a programmatic engine
When I joined, 95% of revenue came from word-of-mouth and community partnerships. Zero scalable channels. No paid ads, no email outbound, no programmatic acquisition of any kind. The kind of growth profile that works beautifully until it doesn't.
We built the growth engine in three stages, each one feeding directly into the next.
Stage 1: ICP research. We analysed 4,582 bookings to find where value was concentrated. Roughly 75% of revenue came from executive assistants at PE, VC, hedge fund, and family office firms. We enriched our top 500 customers using Firecrawl and built six targetable prospect segments from the data.
Stage 2: Brand repositioning. The ICP research exposed a gap — we were positioning as a discount flight service, but our customers were private equity partners and executives at Google Ventures. We pulled every sales call transcript from Fireflies and ran them through Jobs to Be Done analysis. Three distinct personas emerged, each with fundamentally different motivations. Their language became the brand voice. Their pain points became the creative angles.
Stage 3: Programmatic execution. With segments defined and messaging aligned, we built the full acquisition stack — Meta ads (creative-led, broad targeting), LinkedIn ads (identity-led, job title targeting), outbound via HeyReach and Instantly, and a HubSpot CRM rebuilt from scratch with near-100% source attribution.
Six months later: $27.6m ARR, 38% growth, and a ~5x return on ad spend within two months of launching paid channels.
The Claude Code angle is worth spelling out. There is no growth team. The entire operational playbook lives in an ~880-line Claude Code skill called ascend-ads. Recurring operations run as slash commands: /daily-ad-review pulls performance data and flags underperformers, and /weekly-growth-report generates the cross-platform reports used directly in investor updates.
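For readers unfamiliar with the mechanics: a Claude Code project slash command is just a markdown prompt file in the repo. Here is a hedged sketch of what a command like /daily-ad-review might contain. The file path follows Claude Code's convention for project commands; the metrics and thresholds are illustrative, not our actual playbook.

```markdown
<!-- .claude/commands/daily-ad-review.md (hypothetical sketch) -->
Pull yesterday's Meta and LinkedIn performance by campaign.
Compare spend, CPL, and ROAS against the targets in the growth Scorecard.
Flag any ad set that has missed target for two consecutive days,
and propose one concrete change per flagged ad set.
```

The command name comes from the filename, so the whole "playbook" is versioned alongside everything else Claude Code can read.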
The ICP analysis produced six targetable segments. Those segments became the targeting on LinkedIn and the creative themes on Meta. The rebrand extracted customer language from sales call transcripts. That language became the ad copy. Each layer was input to the next.
Product: from manual flight search to automated concierge tools
Guru
Guru is our automated flight search tool. Natural language in, ranked flight options out. It explores 110+ routing permutations across multiple providers simultaneously, running up to 90 concurrent browser sessions and returning results in about 4 minutes, compared with the 20–30 minutes an agent would spend searching manually. It delivers savings of up to 70% versus retail business-class fares.
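The core pattern is a bounded fan-out: generate every routing permutation, then run the searches with a cap on concurrent sessions. A minimal sketch in TypeScript, with invented route and provider names and a small cap standing in for Guru's 90 sessions — this is the shape of the idea, not Guru's actual code:

```typescript
// Illustrative fan-out with a concurrency cap. Names and numbers
// are hypothetical; the real tool drives browser sessions, not strings.

type Task<T> = () => Promise<T>;

// Execute tasks with at most `limit` running at once, preserving order.
async function runWithLimit<T>(tasks: Task<T>[], limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  const worker = async (): Promise<void> => {
    while (next < tasks.length) {
      const i = next++; // JS is single-threaded, so claiming an index is safe
      results[i] = await tasks[i]();
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, () => worker())
  );
  return results;
}

// One task per routing permutation (route x provider).
const routes = ["JFK-LHR", "JFK-CDG-LHR", "EWR-LHR"];
const providers = ["providerA", "providerB"];
const tasks: Task<string>[] = routes.flatMap((r) =>
  providers.map((p) => async () => `${p}:${r}`)
);

runWithLimit(tasks, 2).then((options) => console.log(options.length)); // 6
```

The cap is what makes 110+ permutations tractable: search time is bounded by permutations divided by concurrent sessions, not by the total count.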
The spectrum of sensitivity
Building Guru surfaced an insight that now governs how we think about product development: not all products need the same level of engineering rigour.
Ascend operates on a spectrum of sensitivity. At one end — internal tools like Guru, reporting dashboards, data enrichment scripts. These are low sensitivity and high iteration speed. A non-engineer can build them in Claude Code because the worst case is a wrong result that a human reviews before sending. The blast radius is small, the feedback loop is fast, and the cost of a mistake is measured in minutes, not dollars.
At the other end — payments, client data, authentication, the core booking system. These are high sensitivity and engineering-led. Built with proper review, testing pipelines, and staged deployment. The blast radius is large and the tolerance for error is near zero.
The guardrail isn't "don't let non-engineers build." It's "match the review process to the blast radius." Over time, as AI tooling matures, the boundary shifts: engineers focus on architecture and review while the construction layer opens up to anyone with domain knowledge and clear requirements.
Making a codebase AI-readable
What makes a codebase work well with Claude Code: TypeScript monorepo (Guru is 18 packages), clear package boundaries, descriptive naming. The structure itself becomes documentation. When Claude Code reads the codebase, it understands not just what the code does but how the system is organised — which makes it dramatically faster at building the next thing.
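For illustration, a layout in that spirit — the package names here are invented, not Guru's actual 18:

```
guru/
  packages/
    flight-search/    # routing permutations and provider fan-out
    providers/        # one adapter per supply source
    ranking/          # scoring and ordering of results
    shared-types/     # domain types used across packages
```

A model reading this tree can infer where search logic lives before opening a single file, which is the point.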
This feeds the prototyping culture. Claude Code makes it fast to build a working prototype, test it with real data, and decide whether to invest engineering time in hardening it. Most internal tools start this way: a rough version built in a day, validated against real use cases, then either discarded or promoted to production.
Guru was the first product built this way. The second was faster because the patterns were established. By the third, the codebase itself had become a knowledge base that Claude Code could reference.
Operations: EOS as the context layer
EOS
When I joined, Ascend had $20m ARR but virtually no operational structure. No accountability chart defining who owns what. No weekly metrics beyond top-line revenue. No decision logs, so the same conversations happened repeatedly. No shared priorities, so teams worked on different things without alignment.
We implemented EOS — the Entrepreneurial Operating System. A V/TO (Vision/Traction Organiser) for company identity. Rocks for quarterly priorities. A Scorecard for weekly metrics. L10 meetings for structured decision-making. The usual benefits followed — clarity, accountability, focus.
The unexpected benefit was what it did for AI. Every EOS document is structured, updated on a predictable cadence, and maintained as good management practice regardless of whether AI exists. A V/TO has defined fields. Rocks have owners, deadlines, and binary outcomes. The Scorecard has numbers with targets and owners. Decision logs have context, reasoning, and action items. This makes every EOS artefact perfect AI context without any extra work. You don't document things for AI — you document them because that's how you run a company well. The AI benefit is a free externality.
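To make "perfect AI context" concrete, here is a hypothetical sketch of one Scorecard row as structured data. The field names and the example metric are assumptions for illustration, not Ascend's actual schema:

```typescript
// Hypothetical schema for an EOS Scorecard metric: every field the
// post describes (owner, target, actual) is explicit and machine-readable.
interface ScorecardMetric {
  name: string;
  owner: string;
  target: number;
  actual: number;
  higherIsBetter: boolean; // e.g. revenue vs. churn
}

// Direction-aware check against target.
function onTrack(m: ScorecardMetric): boolean {
  return m.higherIsBetter ? m.actual >= m.target : m.actual <= m.target;
}

const weeklyBookings: ScorecardMetric = {
  name: "Weekly bookings",
  owner: "Growth",
  target: 120,
  actual: 134,
  higherIsBetter: true,
};
console.log(onTrack(weeklyBookings)); // true
```

Nothing here is AI-specific — it is just a Scorecard row written down precisely, which is exactly why it doubles as context for free.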
The financial model
We built a comprehensive financial and operating model — 30+ sheets in Excel covering P&L, Balance Sheet, Cash Flow Statement, three full scenarios (Downside, Base, Upside), and detailed assumptions for revenue, costs, team, and cash.
The revenue model breaks down transaction revenue (flights, hotels, ground transport) and subscription revenue (an individual tier at $300/month plus enterprise tiers), with supply-mix modelling across Consolidator, Broker/Points, and Direct channels and a take rate for each. The base case projects net revenue growing from $2.5M in FY2025 to $8.2M in FY2026, with full EBITDA and net income modelled at each level across all three scenarios.
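The shape of that calculation is simple enough to sketch. The $300/month figure is from the model; every other number below (channel volumes, take rates, subscriber count) is a made-up placeholder, not Ascend's data:

```typescript
// Illustrative net-revenue calculation: transaction revenue per supply
// channel times its take rate, plus subscription revenue. Placeholder numbers.
interface Channel { name: string; bookingVolume: number; takeRate: number }

function annualNetRevenue(
  channels: Channel[],
  subscribers: number,
  monthlyFee: number
): number {
  const transaction = channels.reduce(
    (sum, c) => sum + c.bookingVolume * c.takeRate, 0
  );
  const subscription = subscribers * monthlyFee * 12;
  return transaction + subscription;
}

const channels: Channel[] = [
  { name: "Consolidator", bookingVolume: 10_000_000, takeRate: 0.08 },
  { name: "Broker/Points", bookingVolume: 6_000_000, takeRate: 0.12 },
  { name: "Direct", bookingVolume: 4_000_000, takeRate: 0.05 },
];
// $300/month individual tier from the model; 500 subscribers is invented.
console.log(annualNetRevenue(channels, 500, 300)); // 3520000
```

The Excel version does the same arithmetic per month per scenario; the point is that the structure (channel, volume, take rate) is explicit enough for both a spreadsheet and an AI session to consume.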
Team assumptions are modelled at the individual level — named roles with start dates, team assignments across Leadership, Concierge, Engineering, Product, Growth, and Operations. The model is the single source of truth for headcount planning. Monthly and annual KPI dashboards, sensitivity analysis, and Rule of 40 tracking round it out.
One detail worth mentioning: the model includes a "Claude Cache" sheet with =CLAUDE formulas that Claude Code built directly into the workbook during construction. It's a small thing, but it's indicative of how deeply embedded the tooling is.
The financial model maps directly to the EOS Scorecard — when the L10 reviews metrics on Monday, the model provides the "what does this mean for the quarter" context. When running growth reports, Claude already knows the targets because they're in the model.
The automation stack
Beyond EOS and the financial model, we run a Fireflies → Notion → Asana → Slack pipeline that captures, structures, and distributes operational knowledge automatically. Every meeting is transcribed, summarised, and routed to the right project. Operations generates structured data as a byproduct of running well. That data feeds every AI session.
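The routing step in that pipeline is worth sketching. The real pipeline works through each tool's API (Fireflies webhooks in, Notion/Asana/Slack out); the tag names and keyword lists below are invented purely to show the shape of the decision:

```typescript
// Illustrative router: decide which project a meeting summary belongs to.
// Keywords and project names are hypothetical, not our actual config.
interface MeetingSummary { title: string; actionItems: string[] }

const projectKeywords: Record<string, string[]> = {
  growth: ["ads", "campaign", "cpl"],
  product: ["guru", "booking", "search"],
  operations: ["scorecard", "rock", "l10"],
};

function routeMeeting(summary: MeetingSummary): string {
  const text =
    (summary.title + " " + summary.actionItems.join(" ")).toLowerCase();
  for (const [project, words] of Object.entries(projectKeywords)) {
    if (words.some((w) => text.includes(w))) return project;
  }
  return "general"; // fallback channel for unmatched meetings
}

console.log(
  routeMeeting({ title: "L10 weekly", actionItems: ["review Scorecard"] })
); // operations
```

In practice the classification step is a better fit for the model itself than for keyword matching, but the contract is the same: structured summary in, destination out.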
The EOS documentation — V/TO, Scorecard, decision logs — is the connective tissue. Growth sessions reference the ICP definitions. Guru references client preferences. The financial model uses Scorecard targets as inputs. One documentation system feeds every AI tool in the company.
The compounding effect
This is where it all comes together.
Each project produced structured artefacts that became context for the next. The ICP research produced customer segments. Those segments informed the rebrand. The rebrand produced personas and brand voice. That voice became the ad copy. The EOS Scorecard defined the growth targets. Those targets became the benchmarks in the weekly growth reports. The financial model consumed the Scorecard numbers and projected them forward. The pattern repeats across every combination of these projects.
The speed tells the story. The ICP analysis took two weeks. The rebrand took three. The full ad stack was live within a month. Each one was faster because the context accumulated. By the time we built the ad stack, Claude Code already knew our customer segments, brand voice, creative angles, and growth targets — all from previous projects.
This doesn't work by accident. It works because three structural conditions are met. First, documentation is structured — EOS enforces consistent formats for goals, metrics, and decisions. Second, the codebase is readable — a TypeScript monorepo with clear package boundaries that Claude Code can navigate. Third, operational knowledge is codified — Claude Code skills hold the full playbooks for recurring processes. Remove any of those and the compounding breaks. You'd still get value from individual AI sessions, but you'd lose the flywheel where each one makes the next one better.
For other companies thinking about this: AI's value at a company level is a function of how well the company documents its own decisions, goals, and knowledge. The model doesn't matter as much as the context you give it. Companies that run well — that document decisions, maintain structured goals, keep their codebases readable — are already 80% of the way there. The last 20% is making that context accessible to the tools.
Which brings it full circle. This post took about 90 minutes to write because Claude Code had the context from every project described in it. The ICP segments, the brand voice, the EOS structure, the financial model, the Guru architecture — all of it was already there, structured and accessible, from the work itself. That's the compounding effect in practice.
Where we are now
Ascend is at $27.6m ARR with 38% growth. Paid acquisition channels are profitable and scaling. Operations run on EOS with structured documentation feeding every AI session. Internal tools are built by non-engineers using Claude Code. The financial model and Scorecard are the single source of truth for planning and reporting.
We're still early. The next phase is expanding the product suite — more concierge automation, more client-facing tools, and deeper integration between the operational layer and the AI layer. The thesis stays the same: structure your company well, document your decisions, and the AI gets better with every project.
If you want to chat about any of this, drop me an email at omarismailb@gmail.com.