Aravind Parthasarathy, Head of Technology at NewRocket, argues that enterprise software is entering a phase where traditional SaaS metrics no longer reflect real value creation. As AI systems move from assistive tools to autonomous agents, he says organizations must rethink how they measure success, structure work, and govern systems.
At the center of his argument is a shift in assumption: software is no longer just used by humans—it increasingly performs work independently.
Why traditional SaaS metrics break down
Parthasarathy challenges the foundation of conventional software measurement:
“SaaS metrics were built on a simple assumption: value scales with human usage. In other words, more seats, more logins, more feature clicks were treated as proxies for increased value.”
This logic holds when humans are the primary operators. More usage typically means more productivity. But that model depends on software acting as a tool, not an autonomous system.
That assumption changes with agentic AI.
“Agentic AI breaks that assumption because the goal is not higher user-engagement. We are actually driving towards less human involvement.”
In this context, high engagement can even signal inefficiency. If humans are constantly interacting with the system, it may not be operating autonomously.
“With agentic systems, the agent is the operator. You’re delegating work to software, not using software to do the work yourself.”
This redefines the human role from execution to oversight. As a result, traditional metrics like usage and adoption lose relevance.
“The core metric becomes autonomous outcome completion: how often does the agent deliver the business result end-to-end, how quickly, and with what escalation rate and quality?”
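The quoted metric can be made concrete. A minimal sketch, assuming a log of per-task records (all field and function names here are illustrative, not from the source), that summarizes the four signals the quote names: end-to-end completion, speed, escalation rate, and quality:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    completed_end_to_end: bool   # agent delivered the business result itself
    escalated: bool              # agent handed the case to a human
    duration_minutes: float
    quality_score: float         # 0.0-1.0, e.g. from post-hoc review

def outcome_metrics(records: list[TaskRecord]) -> dict[str, float]:
    """Summarize autonomous outcome completion over a batch of tasks."""
    n = len(records)
    done = [r for r in records if r.completed_end_to_end]
    return {
        "completion_rate": len(done) / n,
        "escalation_rate": sum(r.escalated for r in records) / n,
        "avg_minutes": sum(r.duration_minutes for r in records) / n,
        "avg_quality": sum(r.quality_score for r in done) / len(done),
    }
```

Note that usage-era metrics (logins, clicks) never enter the calculation; the unit of measurement is the delegated outcome itself.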
From products to “Minimum Viable Functions”
To operationalize this shift, Parthasarathy introduces the idea of the Minimum Viable Function (MVF).
“A Minimum Viable Function (MVF) is the smallest business outcome you can delegate to an agent end-to-end, from trigger to result, with the right guardrails.”
Unlike traditional product thinking, which automates parts of a workflow, an MVF assigns responsibility for the entire outcome.
“It’s not ‘automating a step’; it’s assigning a mandate.”
In practice, this means redesigning processes so that agents handle full business flows. For example, in billing resolution, an agent can diagnose issues, apply corrections, communicate outcomes, and escalate only exceptions.
The result is not incremental efficiency but structural change, reducing multi-step human workflows into autonomous execution chains.
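The billing example above can be sketched as a single function that owns the flow from trigger to result. This is an illustrative toy (the fix table, field names, and diagnostic stub are assumptions, not NewRocket's implementation); the structural point is that escalation happens only for cases outside the mandate:

```python
# Hypothetical billing-resolution MVF: diagnose -> correct -> notify,
# with humans receiving only the exceptions.

KNOWN_FIXES = {"duplicate_charge": "refund", "wrong_rate": "rebill"}

def diagnose(ticket: dict) -> str:
    # Placeholder for the agent's diagnostic step (e.g. a model call).
    return ticket["issue_type"]

def resolve_billing_issue(ticket: dict) -> str:
    """Run the full flow end-to-end; return 'resolved' or 'escalated'."""
    issue = diagnose(ticket)
    if issue not in KNOWN_FIXES:              # exception: hand off to a human
        return "escalated"
    ticket["correction"] = KNOWN_FIXES[issue]  # apply the fix
    ticket["customer_notified"] = True         # communicate the outcome
    return "resolved"
```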
Why AI pilots fail to scale
Despite rapid experimentation, many organizations struggle to move AI beyond pilots.
“Most pilots don’t stall because the model can’t reason—they stall because the organization doesn’t change around the agent.”
The limitation is not intelligence, but integration into real systems with real constraints.
“The sandbox-to-production gap: The agent can be capable, but the surrounding system becomes the bottleneck.”
Production environments introduce dependencies, permissions, compliance requirements, and fragmented systems that slow or block autonomy. Without addressing these, agents plateau regardless of capability.
Governance shifts from review cycles to live systems
Parthasarathy argues that agentic systems require a fundamentally different governance model.
“Traditional governance is event-driven: quarterly committees debating ROI after the fact.”
Instead, autonomous systems require continuous operational monitoring:
“Autonomous outcome completion creates weekly operational visibility across three signals: Completion rate, Learning velocity, Exception patterns.”
This makes governance more diagnostic and responsive, focusing on why systems fail rather than whether they are “working” in aggregate.
“Risk shifts from ‘avoid failure’ to ‘detect, contain, and correct failure faster than it propagates.’”
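The three weekly signals could be computed along these lines. A hedged sketch, assuming each task record carries a completion flag and an optional exception label (both names are illustrative): learning velocity is taken here as the week-over-week change in completion rate, and exception patterns as the most frequent failure modes.

```python
from collections import Counter

def weekly_signals(this_week: list[dict], last_week: list[dict]) -> dict:
    """Completion rate, learning velocity, and exception patterns
    over two consecutive weeks of task records."""
    def rate(week: list[dict]) -> float:
        return sum(t["completed"] for t in week) / len(week)

    exceptions = Counter(t["exception"] for t in this_week if t["exception"])
    return {
        "completion_rate": rate(this_week),
        "learning_velocity": rate(this_week) - rate(last_week),  # weekly gain
        "exception_patterns": exceptions.most_common(3),         # top failures
    }
```

Reviewing this output weekly, rather than debating aggregate ROI quarterly, is what makes the governance diagnostic: the exception patterns say *why* the system fails, not just whether it is working.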
Rethinking organizational design
If agents perform execution, organizations must be structured around enabling and supervising them.
“Leaders need to treat agents like a new kind of workforce: you don’t just buy software, you stand up a function with accountability, guardrails, and continuous improvement.”
This introduces new roles:
“Agentic Mentor: Owns autonomous outcome for one or more MVFs in the same domain.”
“Capability Engineer: Manages APIs, data access, and tool integrations agents need.”
“Governance Lead: Sets and audits guardrails, validates agent decisions weekly.”
Accountability shifts from headcount to system performance:
“Accountability is to the function’s autonomous capacity and the learning system’s velocity, not tied to headcount.”
The broader shift
Across Parthasarathy’s framework, the transformation is consistent: enterprise value is moving away from human activity and toward system autonomy.
Software success is no longer measured by how much people use it, but by how independently it completes work and how quickly it improves while doing so.
