Navigating AI Disruption: A Framework for Strategic Positioning

Let me walk you through how I’m thinking about AI disruption and what it means for software companies trying to figure out where to place their bets.

There’s a fascinating debate happening right now across enterprise software about what survives the shift to AI agents. Jamin Ball argues that systems of record aren’t dying—the bar they must meet is simply rising.

Jaya Gupta and Ashu Garg counter that the real prize isn’t the data itself, but the decision traces that explain how that data led to action—the "why" behind the "what."

Meanwhile, Databricks and Snowflake are positioning as "truth registries" while startups like Regie (which launched RegieOne as an AI-native orchestration platform) and PlayerZero (which instruments orchestration layers to capture decision traces) are betting on the orchestration and context capture layer.

Who’s right? Maybe everyone. And maybe no one.

That’s the uncomfortable reality of this moment: we’re in a phase transition where multiple futures are simultaneously plausible. The companies that thrive won’t be the ones who bet correctly on which future materializes. They’ll be the ones who position to profit regardless of which execution path wins.

Here’s a framework I’ve been developing for thinking about that. It builds on Clayton Christensen’s work on disruptive innovation, Michael Porter’s strategic positioning framework, and real options theory from finance—but applies them to the specific challenge of technological phase transitions.

The Mental Models Behind This Framework

I’m synthesizing three key approaches here:

  1. First Principles Thinking – Strip problems to fundamental truths rather than reasoning by analogy (the approach Elon Musk advocates)
  2. Real Options Theory – Maintain optionality under uncertainty by structuring investments as portfolios of strategic options
  3. Strategic Positioning – Porter’s work on sustainable competitive advantage through value chain analysis

The insight comes from combining these: in phase transitions, sustainable advantage comes not from predicting the future but from positioning to profit under multiple scenarios.

The Framework: Four Principles for Navigating Disruption

1. Understand the Domain

Before asking "what AI features should we build?" companies need to ask a more fundamental question: what is this domain actually for?

I’m using first principles thinking here—the approach that asks us to break down problems to fundamental truths rather than reasoning by analogy.

This sounds obvious, but it’s where most AI product strategy goes wrong. Teams start with "let’s add AI to X" when they should start with "what fundamental human or business need does X serve, and how might AI change how that need gets met?"

Take commerce as an example. The first-principles view might look like this:

Historical frame: People discovered products through catalogs and stores, compared options manually, transacted through cash or checks, and trusted brands because of scarcity of information.

Current frame: People discover products through search and feeds, compare through reviews and aggregators, transact through credit cards and digital wallets, and trust platforms because they intermediate reputation.

Emerging frame: Agents discover and compare products on behalf of people, transact through intent-matching protocols, and trust becomes a function of verifiable decision traces rather than brand reputation.

If that third frame is directionally correct, then a lot of what we currently build—product pages optimized for human browsing, checkout flows designed for human attention, marketing campaigns targeting human psychology—might be the wrong unit of investment.

Let’s dial this to an extreme to test the logic: What if 100% of commerce transactions tomorrow were agent-initiated? Zero human browsing of product pages. What breaks? What becomes essential? When I run that thought experiment, product catalogs still matter (agents need data), trust signals still matter (agents need to evaluate reliability), and transaction primitives still matter (money still needs to move). But UX design, marketing copy, and checkout flows? Those might be completely irrelevant investments.

The point isn’t to predict the exact future. It’s to understand the domain deeply enough to recognize which parts are durable and which parts are execution-path dependent.

What’s durable in commerce: Merchants have goods and services. Customers have needs. Some intermediary has to match them efficiently and build enough trust that transactions happen. Value gets exchanged.

What’s execution-path dependent: Whether that matching happens via human browsing, AI agents, or something else. Whether trust comes from brands, platforms, or cryptographic proofs. Whether transactions look like shopping carts or intent protocols.

2. Own the Primitives

If we can’t predict execution paths, we should invest in the primitives that any execution path will need.

Scott Belsky recently made an important observation: as data connectors between apps become ubiquitous, ordinary data stops being a moat. Every company is trying to sync everyone’s data, and doing so is becoming easier than ever. What becomes valuable instead are three things: graphs (who knows who, who works with whom, who has access to what), portable memory (persistent personalization that travels with the user), and real-time data (live signals that can’t be historically replicated).

This reframes what "owning the primitives" means. It’s not about hoarding data—it’s about owning the relationships and context that take years to build and can’t be easily copied.

This connects to Clayton Christensen’s work on modularity—when interfaces standardize, value migrates to components that can’t be easily replicated. In this case, those components are relationship graphs and context memory, not raw data.

In the systems of record debate, there’s a telling phrase from Ball’s piece: "The need for a contract that says ‘this is the truth, and here is how you are allowed to change it’ only increases." Whether agents interact with CRMs, warehouses, or some new orchestration layer—they all need canonical data about what exists, what’s been promised, and what’s allowed.

For commerce specifically, the primitives might include:

  • Products and inventory: What exists to sell, what’s available, what are its attributes. This is canonical regardless of how discovery happens.
  • Orders and transactions: What was promised, what was delivered, what was paid. This is the source of truth regardless of what agent or human initiated it.
  • Customer identity and history: Who bought what, what do they prefer, what’s their relationship to the merchant. This is the context layer that agents desperately need—and it’s the graph that takes years to build.
  • Merchant capabilities: What can this merchant actually do—same-day delivery, specific geographies, particular product categories. This is the capability registry that intent-matching systems need.
  • Relationship graphs: Who buys from whom, how often, through what channels, with what patterns. This is the relationship data that can’t be replicated by syncing databases.
  • Purchase context and memory: Not just transaction history, but why customers made decisions—the preferences, constraints, and patterns that enable real personalization.

Owning these primitives means more than just storing the data. It means being the authoritative source, having the canonical schema, and being the place that other systems—whether human-facing or agent-facing—have to integrate with.
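To make "canonical schema" concrete, here is a minimal sketch of what those commerce primitives might look like as typed records. All of the names and fields are hypothetical illustrations, not a prescribed data model—the point is that any execution path, human UI or agent, would read from and write to the same canonical records.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Product:
    sku: str            # canonical identifier for "what exists to sell"
    name: str
    attributes: dict    # structured attributes an agent can query
    available: int      # live inventory signal

@dataclass(frozen=True)
class Order:
    order_id: str
    sku: str
    buyer_id: str
    price_cents: int
    initiated_by: str   # "human" or "agent" -- same source of truth either way
    promised_at: datetime

@dataclass(frozen=True)
class MerchantCapability:
    merchant_id: str
    capability: str     # e.g. "same-day-delivery"
    regions: tuple      # geographies where the capability actually holds
```

Note that `Order.initiated_by` is just metadata: the record stays authoritative whether a person or an agent created it, which is exactly the execution-path flexibility the framework asks for.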

3. Be Ambivalent About Execution Path

This is the uncomfortable strategic discipline: companies need to invest in multiple execution paths without knowing which one wins.

Consider the current debate about where AI agents sit:

Scenario A: Agents sit on top of existing systems of record. CRMs, ERPs, and billing systems remain canonical. Agents just get better at reading from and writing to them. Value accrues to whoever has the best API surface and semantic layer.

Scenario B: Agents sit in a new orchestration layer. Cross-system workflows become the execution path. Decision traces become the new system of record. Value accrues to whoever captures the "why" of decisions, not just the "what" of outcomes.

Scenario C: Something weirder happens. Maybe intent protocols like Uniswap’s cross-chain model spread beyond crypto. Maybe AI-to-AI transactions make human-designed UX irrelevant. Maybe privacy-preserving computation changes what data can be centralized.

Here’s a thought experiment to test scenario C: Imagine we had an artificial constraint where companies could ONLY use open protocols, no proprietary platforms. What becomes possible? What breaks? If agents can freely transact across intent networks without platform intermediaries, does that fundamentally change where value accrues? My instinct is that even in this extreme case, someone still needs to be the canonical source for "what products exist" and "what was actually promised." The protocols might handle matching and settlement, but they can’t manufacture trust or canonical truth.

A company with conviction would bet heavily on one scenario. A company with wisdom would maintain optionality across all three while developing clear criteria for when to commit.

Companies might hedge across scenarios by:

  1. Scenario A hedges: Investing in API surfaces that agents can talk to. Making primitives easily accessible and well-documented. Ensuring whatever orchestration layer emerges has to route through them for canonical data.
  2. Scenario B hedges: Capturing decision traces. Understanding context—not just what users or merchants decide, but why they make decisions. If decision lineage becomes valuable, having built that capability early matters.
  3. Scenario C hedges: Supporting experiments at the edges. Intent-driven protocols. Agent-to-agent transaction systems. Privacy-preserving personalization. These might be too early—but if one turns out to be right, having learned rather than starting from scratch is valuable.

4. Profit Regardless of Execution Path

This is where the framework becomes concrete. For each major area of investment, companies should be able to articulate how they profit under different scenarios.

Take payments as an example:

  • If agents just use existing systems: Merchants still need payment processing. Transaction fees accrue regardless of how they’re initiated. Agent-originated transactions might even have higher volume and lower fraud.
  • If a new orchestration layer emerges: Payments are still the moment of truth—the irreversible action that commits resources. Payment processors remain in the execution path wherever that layer sits.
  • If intent protocols win: Payment becomes a settlement layer for matched intents. Payment processors become the on-ramp from commerce intents to actual money movement.

Three scenarios, three stories for why the business model works. That’s the kind of positioning to aim for.

What This Means in Practice

Principles are nice. But here’s how I’m thinking about this practically—what would I actually do if I were building in one of these spaces?

Invest in Primitives with Execution-Path Flexibility

Here’s the concrete play:

  • Canonical data: Be the source of truth for your domain’s core entities. Ensure your data model is clean enough that any orchestration layer—yours or someone else’s—can build on top of it.
  • API-first architecture: Human dashboards become one interface among many. The primitives need to be accessible programmatically.
  • Decision trace infrastructure: Start capturing the "why" of decisions, not just the outcomes. If the orchestration layer becomes the prize, this capability compounds over time.
  • Graph and memory infrastructure: Build systems that understand context and relationships. As Belsky notes, proprietary graphs (who knows who, who works with whom) and portable personalization become the new moats as data connectors proliferate. Whoever owns the relationship graphs and context memory has a compounding advantage.
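A decision trace, at its simplest, is a record that stores the rationale and inputs alongside the action. Here is one hypothetical shape for that record—the field names and the example decision are illustrative, not a real system:

```python
import time
from dataclasses import dataclass

@dataclass
class DecisionTrace:
    actor: str       # who (or which agent) decided
    action: str      # the "what" -- the part most systems already store
    rationale: str   # the "why" -- the part most systems never store
    inputs: dict     # context available at decision time
    timestamp: float

def record(trace_log: list, actor: str, action: str,
           rationale: str, inputs: dict) -> DecisionTrace:
    """Append a decision trace to the log and return it."""
    trace = DecisionTrace(actor, action, rationale, inputs, time.time())
    trace_log.append(trace)
    return trace

log: list = []
record(log, "pricing-agent", "discount_10pct",
       "inventory aging past 60 days; margin still above floor",
       {"sku": "A-100", "days_in_stock": 63, "margin": 0.31})
```

The design choice worth noticing: the rationale and inputs are captured at decision time, because they cannot be reconstructed later from the outcome alone.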

Maintain Optionality Through Staged Bets

  1. Stage 1 (now): Pragmatic API compatibility. Make it easy for agents and other systems to integrate. This pays off in all scenarios.
  2. Stage 2 (12-18 months): Orchestration layer experiments. Intent-driven prototypes. Agent-to-agent transaction pilots. These might not work, but they build capability and learning.
  3. Stage 3 (contingent): Based on what you learn, either commit hard to a new model or double down on being the best system of record for your domain’s primitives.

Create Explicit "If/Then" Decision Criteria

Companies should articulate in advance what signals would cause them to shift investment. Something like:

  • If agents start originating more than X% of transactions in your data, then shift investment from human-facing UX to agent-facing APIs.
  • If major platforms start building against a common API standard, then accelerate efforts to avoid ecosystem isolation.
  • If decision-trace-based services start commanding premium valuations, then accelerate from basic analytics into full context graphs.

The point isn’t to predict correctly. It’s to have a plan for how you respond to different evidence.
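Criteria like these can even be encoded explicitly, so the triggers are reviewable rather than implicit. A minimal sketch—the signal names and thresholds below are placeholders, not recommendations:

```python
# Each entry: (observable signal, threshold, action to trigger when crossed).
CRITERIA = [
    ("agent_originated_txn_share", 0.25,
     "shift investment from human-facing UX to agent-facing APIs"),
    ("platforms_on_common_api_standard", 2,
     "accelerate standard adoption to avoid ecosystem isolation"),
]

def triggered_actions(signals: dict) -> list:
    """Return the actions whose thresholds the observed signals have crossed."""
    return [action for name, threshold, action in CRITERIA
            if signals.get(name, 0) >= threshold]
```

Running `triggered_actions({"agent_originated_txn_share": 0.3})` would surface the first action; with no signals crossed, the list is empty and the current plan stands.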

Support the Uncomfortable Experiments

Some of what might matter most won’t look like responsible product investment today:

  • Intent protocols: What if transactions aren’t forms but declared outcomes? "I want this thing, here are my constraints, make it happen."
  • Agent-as-customer infrastructure: What do services look like when the "customer" is an AI agent acting on behalf of a human? Different discovery, different trust signals, different flows.
  • Privacy-preserving systems: Companies like Cloudflare are working on abstracting personal data into mathematical representations for LLMs. If this works, it changes the personalization/privacy tradeoff fundamentally.

These are long-shot bets. But strategic resilience comes from having people exploring the edges while the core business executes on more certain paths.
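The intent-protocol idea—"I want this thing, here are my constraints, make it happen"—can be made concrete as a data shape. This is a hypothetical sketch of what a declared intent and a matcher's minimal constraint check might look like, not any existing protocol:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    outcome: str          # what the buyer actually wants
    max_price_cents: int  # hard constraint
    deadline_days: int    # hard constraint
    preferences: tuple    # soft constraints, in priority order

def satisfies(intent: Intent, offer: dict) -> bool:
    """A matcher's minimal check: does an offer meet the hard constraints?"""
    return (offer["price_cents"] <= intent.max_price_cents
            and offer["delivery_days"] <= intent.deadline_days)

want = Intent("noise-cancelling headphones", 20000, 3, ("refurbished ok",))
```

Notice what's absent: no product page, no checkout flow, no UX at all. The intent declares the outcome and constraints; discovery, comparison, and settlement are left to whatever matching layer wins.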

Where I Might Be Wrong

This framework assumes a few things that could turn out to be incorrect:

I’m assuming that AI capability continues to improve at current rates and that economic incentives favor automation. If either assumption breaks down—if we hit a capability plateau or if regulatory barriers prove stronger than I’m modeling—then the timeline for this transition extends significantly, and companies have more time to adapt.

I’m assuming that the primitives I’ve identified (canonical data, relationship graphs, context memory) are actually durable. But Christensen’s disruption theory teaches us that what seems essential in one paradigm often becomes irrelevant in the next. Maybe there’s a way for agents to bootstrap trust and context that doesn’t require these centralized primitives at all.

I’m assuming companies have the resources to maintain optionality across multiple scenarios. Smaller companies might not have that luxury—they might need to make bigger, more concentrated bets. This framework might be most applicable to companies with sufficient scale and capital to hedge.

An alternative view would emphasize speed over optionality. Maybe the winners won’t be those who position carefully across scenarios, but those who move fastest toward whichever future emerges first. I’m betting on positioning and adaptability, but pure speed and conviction could prove more valuable.

I could be wrong about all of this. But if the pattern is real, the implications for how companies should think about AI investment are significant.

The Meta-Point

The debate about systems of record vs. orchestration layers vs. intent protocols is probably not the right frame. All of these might coexist, and the winners will be companies that:

  1. Understand their domain deeply enough to know what’s durable
  2. Own the primitives that any execution path needs
  3. Maintain optionality rather than betting on one future
  4. Structure investments so they profit under multiple scenarios

That’s not a strategy of hedging for fear. It’s a strategy of building capability and positioning so that when the fog clears, you’re able to move fast in whatever direction turns out to be right.

The pace of change is accelerating, and here’s what’s becoming clear to me: the ability to optimize resource allocation—time, energy, capital, attention—becomes the key differentiator. Those with clear domain models, effective coordination, and decision-making clarity will outperform those without.

Yes, we’re flying blind on which specific execution path wins. But that doesn’t mean we’re helpless. We can build capability, maintain optionality, and position ourselves to profit regardless of which future emerges. That’s not hedging out of fear—it’s strategic discipline in the face of genuine uncertainty.
