In structured finance, securitisation is the process of transforming an illiquid pool of assets (like a bank's mortgage book) into tradeable securities. The key innovation lies in decomposing a single bundled position into standardised 'tranches': discrete instruments with distinct risk and return profiles that can be independently evaluated, priced, and exchanged in secondary markets.
Rather than existing as one opaque monolith on a balance sheet, thousands of underlying loans are effectively “sliced” into differentiated claims tailored to different investor preferences and risk appetites. Importantly, the underlying mortgages themselves are unchanged by this operation. All that changes is the structure through which they are owned and traded. By unlocking the assets from the bundle and repackaging them into modular units, securitisation increases market optionality, liquidity, and price discovery.
This is structurally what happens when a monolithic software product is dissolved into its behavioural affordances, as in agentic commerce. The unit of economic value shifts from access to a product to reliable execution of a behaviour. In this analogy, the agent orchestration layer is the structuring desk that assembles those affordances into workflows: collateralised chains of capability with layered reliability profiles.
Making behaviours independently addressable has three effects. First, they (presumably) get priced on outcome rather than access: not per seat per month but per successful unit of execution. Second, they become composable by non-technical actors, because the integration layer moves from APIs to language; in essence, agents are generative UI. Third, they develop quality gradients independent of the parent product: affordances can be evaluated on their own terms (deliverability, personalisation quality, response rates, and so on) rather than as appendages of a monolith whose brand is taken as a proxy for component quality. How that plays out is beside the point; what matters is the nature of the shift itself. A further, obvious effect is that substitution becomes trivial, so the moat cannot be in the behaviour itself. It moves either to proprietary data that makes the behaviour better, or to orchestration: the meta-capability of knowing which behaviours to compose and in what order.
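As a concrete (and entirely hypothetical) illustration of outcome-based pricing and quality gradients, consider two vendors exposing the same behaviour. The names and numbers in this sketch are invented; the point is only that once a behaviour is independently addressable, its effective price becomes a function of its measured reliability:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Behaviour:
    """Hypothetical market-facing unit: one affordance, priced per call."""
    vendor: str
    price_per_call: float   # what the vendor charges per invocation
    success_rate: float     # observed reliability: the 'quality gradient'

def cost_per_success(b: Behaviour) -> float:
    """Effective price of one successful unit of execution."""
    return b.price_per_call / b.success_rate

# Substitution is trivial, so the comparison is purely on outcomes:
a = Behaviour("vendor_a", price_per_call=0.10, success_rate=0.95)
b = Behaviour("vendor_b", price_per_call=0.06, success_rate=0.55)
print(cost_per_success(a))  # ~0.105 per success
print(cost_per_success(b))  # ~0.109 per success: cheaper per call, not per outcome
```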
Klein and Wieczorek (The Headless Firm: How AI Reshapes Enterprise Boundaries, 2026) liken the architecture this decomposition produces to an 'hourglass': a thin commoditised protocol 'waist', with differentiation shifting up into personalised intent and down into deep vertical execution. This is a continuation of the idea that emerged around 2022 (e.g. ScaleAI) informally through the mantra of _"text as the universal interface"_ (a reappropriation of the Unix philosophy).
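A minimal sketch of what such a waist might look like, assuming a hypothetical invocation envelope (the names `Invocation` and `Result` are mine, not a real protocol): deliberately thin and universal, with all differentiation pushed above it into intent capture or below it into execution:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Invocation:
    """Hypothetical protocol 'waist': the entire shared vocabulary.
    No vendor-specific fields; differentiation lives above (intent)
    or below (vertical execution), never in the envelope itself."""
    capability: str                 # e.g. "draft_outreach@v2"
    inputs: dict[str, Any]          # fully specified, no shared context
    constraints: dict[str, Any] = field(default_factory=dict)  # budget, deadline

@dataclass
class Result:
    succeeded: bool
    outputs: dict[str, Any]
    cost: float                     # settles the per-execution price at the waist
```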
Not every boundary is a viable place to cut. Baldwin's work on modularity shows that decomposition works where the interface between components is simple, well-defined, and fully specifiable without shared context; these 'thin' interfaces are where new markets form. Decomposition fails where components are deeply interdependent and require continuous access to each other's internal state.
The protocol layer's value is thus in being thin and universal, not thick and proprietary: it is the secondary market that makes the tranches liquid, drawing the flow through the hourglass. The authors observe that in prior modular architectures the cost of coordination scaled with the topology of interactions between components, roughly quadratically in their number, whereas in protocol-mediated agentic systems it scales with throughput.
This therefore determines how finely you can tranche: when coordination overhead is combinatorial there is a floor below which decomposition costs more than it is worth, and when it is linear that floor drops.
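A back-of-the-envelope illustration of that floor, under my own simplifying assumption that point-to-point coordination cost tracks the number of pairwise integrations while protocol-mediated cost tracks message volume:

```python
def pairwise_links(n: int) -> int:
    """Bespoke point-to-point integrations among n components: n(n-1)/2."""
    return n * (n - 1) // 2

def protocol_cost(n: int, calls_per_component: int) -> int:
    """Protocol-mediated coordination: every component speaks one waist,
    so cost grows with throughput, i.e. linearly in n."""
    return n * calls_per_component

for n in (10, 100, 1000):
    print(n, pairwise_links(n), protocol_cost(n, calls_per_component=50))
# At n=1000: ~500,000 bespoke links vs 50,000 protocol messages. The quadratic
# term is what sets the floor on how finely a product can be tranched.
```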
Affordances can be separated where inputs and outputs are self-contained. They resist separation where a capability depends on accumulated context, tight feedback loops, or shared mutable state. The discipline here lies in knowing which is which.
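The distinction can be made concrete with two toy examples, both hypothetical. The first affordance is fully specified by its signature, so it can be tranched out and substituted; the second leaks accumulated context and shared mutable state through the interface, so it resists separation:

```python
# Separable: inputs and outputs are self-contained. The signature is the
# whole contract, so any vendor that satisfies it is substitutable.
def score_lead(headcount: int, industry: str) -> float:
    weights = {"fintech": 1.3, "retail": 0.8}
    return min(headcount / 1000, 1.0) * weights.get(industry, 1.0)

# Resists separation: the behaviour depends on accumulated context and
# mutates state the caller can neither see nor fully specify.
class NegotiationAgent:
    def __init__(self) -> None:
        self.history: list[str] = []           # accumulated context

    def respond(self, message: str) -> str:
        self.history.append(message)           # shared mutable state
        concession = 0.02 * len(self.history)  # output depends on the past
        return f"Offer adjusted by {concession:.0%} after {len(self.history)} turns"
```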
One structural divergence from financial securitisation sharpens rather than weakens the analogy. In finance, securitisation packages assets that exist independently of the structuring process; a mortgage is a mortgage whether or not it has been tranched. The structure is informationally complex but ontologically inert: it does not alter the behaviour of the underlying assets, only the distribution of claims against them.
In agentic software the orchestration layer partially constitutes the behaviour it orchestrates. A capability is not merely selected and routed; it is co-constituted by the system that invokes it. Reliability, latency, and output distribution shift depending on invocation context, tool availability, prompt history, and model state. The underlying "asset" is therefore not stable in advance, but instantiated at runtime as a contingent event.
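A toy model of that contingency, with every number and name invented for the sketch: the same nominal capability realises a different reliability depending on the context of invocation, so there is no stable "asset" to price in advance:

```python
import random

def invoke(capability: str, prompt_history: list[str],
           tools: set[str], rng: random.Random) -> bool:
    """Toy model: reliability is co-constituted at call time by the
    invocation context, not a fixed property of the capability."""
    reliability = 0.92
    if "web_search" not in tools:
        reliability -= 0.20                     # tool availability shifts the distribution
    reliability -= 0.01 * len(prompt_history)   # long contexts degrade output
    return rng.random() < max(reliability, 0.0)

rng = random.Random(0)
rich = sum(invoke("summarise_filing", [], {"web_search"}, rng) for _ in range(1000))
bare = sum(invoke("summarise_filing", ["..."] * 10, set(), rng) for _ in range(1000))
print(rich / 1000, bare / 1000)  # same capability, different realised reliability
```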
This introduces a stronger form of opacity than in traditional securitisation. Financial opacity is primarily epistemic: the assets exist, but their aggregation makes them difficult to inspect, correlate, or price. Agentic opacity is partly ontological: the thing being composed does not fully exist prior to its composition, and so cannot be exhaustively analysed independent of orchestration. The system is not just hard to read; it is only partially determinate before execution.
This is closer to synthetic CDOs, the layered derivative instruments central to the 2008 financial crisis, in which the underlying exposures were themselves derivatives rather than real loans, so the thing being tranched was already contingent on the system that priced it. The parallel to agent ecosystems, where the capabilities being composed are non-stationary and model-dependent, raises the same question about systemic risk: what happens when abstraction layers scale out faster than our ability to reason about their interactions?