I recently trained a graph attention network to route journeys on the London Underground: a model that takes an origin and destination, encodes the network's topology and temporal dynamics through learned attention over the graph, and produces a next-hop routing policy. At each station it selects the next stop most likely to yield the fastest journey, having learned line frequencies, interchange costs, and expected waiting times. Trained end-to-end on demonstrations from shortest-path algorithms, the model matches or closely approaches optimal routes. It works, but under the framework we've been discussing it is entirely unsecuritised: a single-purpose model, trained to do one thing well, with its learned representation of the network and its routing policy entangled in the same weights. An orchestration layer cannot call on its understanding of network topology separately from its routing objective, because the two are fused. To use a tool metaphor, it's a screwdriver with a fixed head, the opposite of the reconfigurable tools an agent wants.
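To make the fusion concrete, here is a minimal sketch, not the actual model, just an illustration of the shape: a toy PyTorch module with full attention standing in for graph attention restricted to the track and interchange adjacency.

```python
import torch
import torch.nn as nn

class MonolithicRouter(nn.Module):
    """Toy stand-in for the fused model: representation and policy in one set of weights."""

    def __init__(self, num_stations: int, dim: int = 64):
        super().__init__()
        self.stations = nn.Embedding(num_stations, dim)
        # Full attention as a stand-in for graph attention over the adjacency;
        # line frequencies and waiting times get absorbed into these same
        # weights during training.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.policy = nn.Linear(2 * dim, num_stations)  # next-hop logits

    def forward(self, current: torch.Tensor, destination: torch.Tensor) -> torch.Tensor:
        x = self.stations.weight.unsqueeze(0)   # (1, stations, dim)
        x, _ = self.attn(x, x, x)               # contextualised station states
        pair = torch.cat([x[0, current], x[0, destination]], dim=-1)
        return self.policy(pair)                # logits over next stops
```

The modules look separable on paper, but everything below `forward` is trained jointly against a single objective, so the "encoder" portion has no meaning independent of the routing head.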
This is not just figurative: a neural network's output components are literally called heads, and the move from a single fixed head to multiple swappable ones is a concrete design decision with direct consequences for how the model can be deployed, evaluated, and composed. There are other routes to this kind of modularity (mixture-of-experts, adapters), but here I'm mainly interested in heads.
This is a bit of an inversion, but I think it's a useful demonstration that the securitisation framing isn't just analogical: it's an operational idea that helps you reason about how to redesign existing data products for agents.
In the tubeulator model's case the limitation becomes clear once you ask what else an agent might want to configure that the network could support. When I tried this exercise (and asked others to), I had to specify that it couldn't include changing the scope of the problem: no introducing external services, because that's a matter of orchestration; only the ways a journey planner itself might be decomposed more finely.
An obvious example would be a policy that minimises transfers rather than time, with a configurable amount of walking at the origin and destination. I sometimes take a night bus home from Central London, and I hate waiting around for interchanges, essentially because they introduce a big element of uncertainty. I'd rather have a route that covers 90% of the journey in one leg than have to change.
Another aspect relevant day-to-day is how to make the model robust to service closures, whether of a whole line, part of one, or individual stations.
It's actually somewhat hard even to work out how to express the configurability you'd want from a journey planner, because existing offerings have been so fixed, and the configuration they do offer is frequently irrelevant to your goal, which you can't express within the app anyway.
Each would benefit from the same underlying representation, but that representation is entangled with the single routing objective it was trained on, and building any alternative means retraining from scratch. This is the unsecuritised mortgage book: value locked inside the bundle, inaccessible to anyone who wants a different instrument written on the same underlying assets.
What securitisation demands here is an architectural separation. The network representation, which encodes topology, connectivity, and temporal dynamics and is learned via self-supervised objectives over the graph and timetable data, becomes the shared foundation: expensive to produce, general-purpose, slowly-changing. The routing policy becomes one lightweight head among several that consume the same representation, each optimising a different objective. Fastest route, fewest transfers, resilience to disruption, accessibility: each head consumes the same learned embedding and produces a different judgement from it.
The interface between representation and any given policy head is a fixed-dimensional embedding, a tensor of known shape produced by the encoder and consumed by whichever head is active. This is a thin interface in Baldwin's sense: simple, well-defined, and self-contained. A policy head needs the output embedding and nothing more. Inside the encoder the situation is the opposite; the representation of any one station depends densely on every other, because lines share interchanges and passengers substitute between them, and decomposing there would sever exactly the interdependencies that make the representation valuable. The representation-policy boundary is where the interface is thin enough that separation costs less than fusion.
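Under the same toy assumptions as the earlier sketch, the decomposed version looks like this: the encoder produces the fixed-dimensional embedding that constitutes the thin interface, and each head is a lightweight module trained against its own objective.

```python
import torch
import torch.nn as nn

class NetworkEncoder(nn.Module):
    """Shared foundation: expensive, general-purpose, slowly-changing."""

    def __init__(self, num_stations: int, dim: int = 64):
        super().__init__()
        self.stations = nn.Embedding(num_stations, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self) -> torch.Tensor:
        x = self.stations.weight.unsqueeze(0)
        x, _ = self.attn(x, x, x)
        return x.squeeze(0)  # (stations, dim): the contract every head consumes

class PolicyHead(nn.Module):
    """One lightweight head among several; the objective lives in the training
    signal, not the architecture, so heads are interchangeable by construction."""

    def __init__(self, num_stations: int, dim: int = 64):
        super().__init__()
        self.out = nn.Linear(2 * dim, num_stations)

    def forward(self, z, current, destination):
        return self.out(torch.cat([z[current], z[destination]], dim=-1))

encoder = NetworkEncoder(num_stations=272)   # 272 Underground stations
fastest = PolicyHead(272)                    # trained on journey time
fewest_changes = PolicyHead(272)             # same interface, different objective
z = encoder()                                # one representation, many judgements
next_hop_logits = fewest_changes(z, torch.tensor([3]), torch.tensor([141]))
```

Notice that the interface really is just the tensor `z`: a head needs its shape and nothing else, which is what makes the boundary thin in Baldwin's sense.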
The cost of this should be stated plainly. End-to-end training lets representation and policy co-adapt, and the monolithic model may outperform the decomposed version on its original task because its encoding has been shaped by and for that specific objective. Securitisation trades some single-task performance for composability: multiple policies on the same foundation, independently evaluable and replaceable, open to heads you did not anticipate. Whether the trade is worthwhile depends on whether the option value of composability exceeds the performance cost of generality. What the agentic turn changes is that calculus; when capabilities must be independently addressable and composable into orchestrated workflows, the option value of decomposition rises, and architectures that were previously rational to keep monolithic come under new pressure to separate.
This is where a connection to generalisation becomes clear. The research into how to make models generalise (self-supervised pretraining, representation decoupling, multi-task transfer) is aligned with what would make a model coordinatable by agents. The architectural choices that produce generality are the same choices that produce composability; both are expressions of the same tradeoff, accepting a cost in single-task performance to gain flexibility at a thin interface. A model pretrained on self-supervised objectives and fine-tuned for a downstream task is already a securitised architecture in these terms: the pretrained representation is the asset pool, the fine-tuned head is a tranche, and the interface between them supports additional tranches on the same foundation. This convergence is not a coincidence. The empirical discovery that general representations transfer better than specialised ones, across a wide enough range of downstream tasks, reflects the same dynamic that the agentic turn exerts on software more broadly. When the ecosystem rewards composability, the architectures that survive are those that separate what is expensive and general from what is cheap and specific. In machine learning this principle has already been absorbed into standard practice. In software at large it is still arriving.
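In these terms the standard pretrain-then-fine-tune recipe is exactly this move. Continuing the toy sketch above: freeze the asset pool, write a new tranche against it.

```python
import torch
import torch.nn.functional as F

for p in encoder.parameters():        # the asset pool is fixed capital here
    p.requires_grad = False

disruption = PolicyHead(272)          # a tranche nobody planned for at pretraining time
opt = torch.optim.Adam(disruption.parameters(), lr=1e-3)

# Placeholder batch; real labels would come from demonstrations under closures.
current, destination = torch.tensor([12]), torch.tensor([87])
target_next_hop = torch.tensor([13])

z = encoder().detach()                # same foundation every other head consumes
loss = F.cross_entropy(disruption(z, current, destination), target_next_hop)
loss.backward()
opt.step()
```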
The value of the securitisation frame is that it gives you something actionable, not merely something to say (this tech wave does seem to invite a certain degree of cheap spectacle and brand positioning). Given a system, you can ask specific questions of it: what is unsecuritised, which capabilities are fused that might be worth separating? Where are the thin interfaces, at which boundaries could you decompose without destroying what makes things work? What contract would a consumer of this capability need, and can it be made thin enough to justify the overhead? Not everything benefits from decomposition, and not every interface is thin enough to support it.
The analytical exercise is itself clarifying, and parts of the industry seem to have performed it already (or otherwise arrived at the same conclusion). To me, it helps explain Pydantic's motivation to evolve from a data validation library into an agent framework: the recognition that its core affordance, the structured transformation of untyped input into typed output, is precisely the kind of capability that orchestration layers want to invoke independently of any particular application context.
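That affordance is small enough to show. A minimal Pydantic example, with the model and field names mine:

```python
from pydantic import BaseModel, ValidationError

class JourneyRequest(BaseModel):
    origin: str
    destination: str
    max_transfers: int = 2

# Untyped input (an agent's tool-call arguments, say) becomes typed output,
# or a structured error an orchestrator can act on.
try:
    req = JourneyRequest.model_validate(
        {"origin": "Walthamstow Central", "destination": "Brixton"}
    )
except ValidationError as err:
    print(err.errors())
```

That contract, untyped in, typed or structured-error out, is invocable on its own, with no application wrapped around it.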
Much remains in the unsecuritised state, and if agents are indeed here to stay then economic pressure to decompose will soon drive more software into this shape. Some questions I'm left with:
- Will some interfaces prove 'resistant' to the dissolution?
- Will we mourn the destruction of products, as many Photoshop users did?
- Will this be an empowering transition, and/or will it lead to demand destruction?
- Will we get a "ratings agency" layer, or does this decentralisation leave an insecure wild west of unregulated 'securities'?
- For ML models, does the value accrue to the representation (expensive, defensible) or the heads (cheap, fast, closest to the user)?
- Do users want this configurability enough to reshape markets (the price discovery aspect), or is this a transition that only developers notice?
- Can representations be public goods? If the expensive general-purpose layer is where investment goes, is there an argument for open-sourcing it and competing only on heads?
- How would you measure whether this is actually happening, as opposed to being a compelling narrative? What is the observable metric that distinguishes genuine securitisation of affordances from ordinary API proliferation?
- Is there a selection effect in which affordances get securitised? If only the easy-to-isolate capabilities get decomposed and the hard ones stay bundled, the resulting market may systematically overrepresent the trivial and underrepresent the valuable.
- If the protocol waist commoditises, does the hourglass collapse into a funnel? That is, does the orchestration layer eat the execution layer once it has enough data about which compositions work? This is of course what many suspect the frontier model developers will do.