Post-SaaS: how AI agents are transforming software
As a partner in data, AI & agentic at Converteo, Julien Ribourt helps organizations define and implement their strategy in the age of data and AI. An expert in agentic transformations and data ecosystems, he develops tailored approaches to make artificial intelligence a driver of performance and innovation.
Key takeaways
- In the agentic era, the traditional SaaS model (database, business logic, interface) is breaking down. Business logic is migrating to AI agents, commoditizing the application layer and making the structured data layer the company’s true strategic asset.
- This transformation requires a “queryable” data layer: not just a data lake, but a set of structured, governed data accessible via stable and versioned interface contracts (APIs, declared schemas) that agents can reliably exploit.
- The user interface is becoming ephemeral and generated on-demand by agents, leading to an “everything as code” paradigm. The cost is no longer in the per-user license (“seat”) but in the actual usage (“compute”), thus aligning spending with the value produced.
Is the application dead? Long live code!
Much has already been written about the massive correction in software valuations in early 2026 and the threat that AI agents pose to the SaaS model; that debate is well covered. What seems less explored to me is the architectural reading: not "what will disappear?" but "what is emerging in its place, and how is it built?"
The decomposition of the SaaS application
A classic SaaS application is a triptych: database, business logic, user interface. In May 2025, Satya Nadella described its decomposition: "SaaS applications are essentially CRUD databases with business logic. The business logic will migrate to agents." McKinsey formalized the idea in a "post-SaaS" archetype in which the agent queries data repositories directly via API, commoditizing the application layer.
But what neither of them details is what this requires from the data layer. Yet that is where the real game is played.
Queryable data: not a data lake, but an interface contract
When we say “agents query data via API,” it’s not about pointing an LLM at a raw data lake. An agent needs structured, semantically typed data, accessible via a stable interface contract: declared and versioned schemas (JSON Schema, Protobuf), documented APIs with an OpenAPI contract, metadata layers, and governance of freshness and permissions.
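A minimal sketch of what such a declared, versioned contract looks like in practice, using only the standard library (the schema, its version number, and all field names below are invented for illustration; a real system would use JSON Schema or Protobuf with generated validators):

```python
# Hypothetical versioned interface contract: the schema an agent relies on
# is declared explicitly, carries a version, and is enforced before any
# payload is trusted.
CUSTOMER_SCHEMA_V2 = {
    "version": "2.1.0",           # semantic version of the contract
    "required": ["customer_id", "email", "created_at"],
    "types": {
        "customer_id": str,
        "email": str,
        "created_at": str,        # ISO 8601; governance would also enforce freshness
        "lifetime_value": float,  # optional, semantically typed field
    },
}

def validate(record: dict, schema: dict) -> list[str]:
    """Return a list of contract violations (empty means the record conforms)."""
    errors = [f"missing field: {f}" for f in schema["required"] if f not in record]
    for field, expected in schema["types"].items():
        if field in record and not isinstance(record[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors
```

The point is not the validator itself but the discipline: an agent can only "reliably exploit" data whose shape is promised in advance and whose changes are versioned rather than silent.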
The Klarna case illustrates this logic. In September 2024, CEO Sebastian Siemiatkowski announced the shutdown of Salesforce and Workday and the consolidation of 1,200 SaaS applications. The media narrative speaks of "replacing SaaS with AI." The reality is more instructive.
Klarna extracted customer, transaction, and product data that lived in Salesforce to consolidate it into a Neo4j graph database, made this layer queryable by its internal AI, and then generated new interfaces on demand with Cursor. The result was the elimination of hundreds of SaaS licenses and reduced dependency on proprietary interfaces that, ultimately, were just graphical overlays on data Klarna already owned.
This logic, however, primarily targets SaaS whose value is reduced to an interface on data the company owns, not platforms with high architectural criticality (ERP, transactional systems, PLM) whose proposition is based on scalability, infrastructure reliability, and deeply integrated business logic. The challenge is not to replace everything, but to identify the tools whose actual end-user usage no longer justifies the license cost.
The important point is that this pattern does not depend on Neo4j or Cursor. It is reproducible as long as three conditions are met:
- data consolidated in a structured and governed layer (whether it’s a graph database, a data warehouse, or a set of microservices);
- a standardized access surface that agents can query (REST API, SQL, or an MCP protocol that standardizes the connection between LLMs and data sources);
- and the ability to generate interfaces on the fly rather than licensing them.
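These three conditions can be sketched end to end in a few lines of standard-library Python, with an in-memory SQLite table standing in for the governed data layer, a parameterized query function standing in for the standardized access surface, and a generated Markdown report standing in for the on-the-fly interface (the table name, columns, and sample rows are all invented):

```python
import sqlite3

# Condition 1: data consolidated in a structured, governed layer
# (here an in-memory SQLite table; in reality a warehouse, graph
# database, or set of microservices).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("o-1", "acme", 120.0), ("o-2", "acme", 80.0), ("o-3", "globex", 45.0)],
)

# Condition 2: a standardized access surface an agent can query.
# A real deployment would expose this as a REST endpoint or an MCP tool;
# here it is just a parameterized query function.
def query_orders(customer: str) -> list[tuple]:
    rows = conn.execute(
        "SELECT order_id, amount FROM orders WHERE customer = ? ORDER BY order_id",
        (customer,),
    )
    return rows.fetchall()

# Condition 3: an interface generated on the fly rather than licensed.
def render_report(customer: str) -> str:
    rows = query_orders(customer)
    total = sum(amount for _, amount in rows)
    lines = [f"# Orders for {customer}"]
    lines += [f"- {oid}: {amt:.2f}" for oid, amt in rows]
    lines.append(f"**Total: {total:.2f}**")
    return "\n".join(lines)
```

Swap SQLite for Neo4j and the query function for an MCP server and you have the shape of the Klarna pattern; none of the specific vendors is essential.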
The ephemeral interface and “everything as code”
If the agent can generate the interface, then the interface no longer needs to be a permanent product. Karpathy describes vibe-coding ephemeral applications just to track down a single bug. Code becomes cheap, ephemeral, and disposable after a single use: if the marginal cost of code tends to zero, the application becomes an artifact generated on the fly (a .jsx file, a Streamlit dashboard, an interactive Markdown document), executed and then discarded.
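The disposable-artifact idea can be made concrete with the standard library alone: generate a small HTML dashboard into a temporary file, use it once, and delete it (the function name, metrics, and titles below are invented for illustration; an agent would generate richer markup):

```python
import os
import tempfile

def ephemeral_dashboard(title: str, metrics: dict[str, float]) -> str:
    """Generate a throwaway HTML artifact and return its file path."""
    rows = "".join(
        f"<tr><td>{name}</td><td>{value:.1f}</td></tr>"
        for name, value in metrics.items()
    )
    html = f"<html><body><h1>{title}</h1><table>{rows}</table></body></html>"
    # delete=False so the file survives just long enough to be opened once...
    with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as f:
        f.write(html)
        return f.name

path = ephemeral_dashboard("Churn check", {"churn_rate": 4.2, "active_users": 1350.0})
# ...and is then discarded: no product, no license, no maintenance backlog.
os.remove(path)
```

The artifact here costs only the compute that produced it, which is the economic shift the next paragraph describes: spend follows usage, not seats.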
This movement converges with the previous one toward a principle I read as the extension of infrastructure as code to the entire application stack: a structured, governed data layer exposed via versioned APIs; an agent orchestration layer; and an ephemeral rendering layer generated on demand. Everything is declarative and versionable. The cost shifts from the "seat" to the "compute," which is more efficient and better aligned with the value produced.
What impact for you?
Consequently, the structured and governed data layer becomes the real asset. If your data is scattered across multiple SaaS with their proprietary schemas and flaky CSV exports, you are dependent on interfaces that agents will make superfluous. On the other hand, if your data is consolidated and accessible via stable contracts, you can connect any agent to it, today or tomorrow.
Of course, building this data layer remains a heavy task: agents are probabilistic, the deterministic layer remains essential, and Sinofsky is right to remind us that each tech wave has multiplied the demand for software. There will be more software than ever, but it will be more ephemeral, more modular, and anchored in more solid data foundations. Welcome to the era of “everything as code.”