
Why the real competitive advantage in AI is shifting from execution to judgment
Building has become easier. Deciding what to build, how to structure it, and how to make it reliable is where the real work now sits.
In this piece, Hitesh Talwar, Global Head of R&D at Factor, reflects on what that shift looks like from inside the work of building and deploying AI systems in legal.
The discourse around AI is often too blunt to be useful. People tend to fall into one of two camps: either they dismiss the whole thing as brittle slop, or they talk as if you can point a model at a problem and get a real product on the other side. What is happening is more nuanced, and much more interesting.
LLMs make it easier to build software and to produce many forms of knowledge work. This creates real asymmetries. Small teams can move faster than large ones, while incumbents shaped by older ways of working can suddenly find themselves at a disadvantage. In legal tech, entire categories of products that once felt substantial are now easy to build: a Word add-in, an LLM loop behind an email address, a basic redlining workflow.
Many people still treat the model as the product. That is a mistake. The model is a component. The product is the system around it: the harness, the boundaries, the validations, the interfaces, the audit trail, and the decisions about what must remain deterministic. The advantage is not simply access to models; it is the concentration of taste, judgment, and decision-making. Knowing what to build with AI, and how to build it properly, is not evenly distributed.
The same is true of data. Existing data is not inherently valuable just because it is proprietary or abundant. Its value depends on whether your systems can actually put it to work. If your agent design is weak, your assumptions shallow, or your product unable to reliably turn raw information into useful outcomes, the data itself does not help. Data can be an advantage, but only when paired with the judgment and system design needed to make it usable.
Incumbents and large organizations are often structurally bad at preserving that kind of judgment. As they grow, coordination costs rise. Decisions get safer. Products get flatter. Everything drifts toward consensus and away from sharpness. Over time, you start building the obvious thing, or the requested thing, rather than the thing that should exist.
That was less of a problem when scale itself conferred outsized advantage. But the equation is shifting, because the scarce layer now is not raw production capacity. It is agency, taste, and judgment, and those do not come automatically with scale. The hard part has moved.
Drafting, synthesis, analysis, formatting, and transformation are becoming widely available. More raw execution will be handled by models and agents. What remains scarce is knowing what should be built, what should be checked, what should be constrained, and what standards the result must meet. The differentiator is no longer just producing the thing. It is shaping the right outcome.
That shift has an uncomfortable implication. People are attached to the things they produce themselves. In software, that attachment is often to authorship of the code. In knowledge work, it is often to sole authorship of the draft, the memo, the analysis, the deliverable. Understandably, we want the value of our work to reside in the fact that we personally made it — but that is becoming a weaker claim on value than it used to be.
Expertise is no longer defined by having written every line or every paragraph yourself. It lies in the judgment to shape the right result: deciding what should exist, what form it should take, what standards it must meet, and how to steer the process toward that outcome. That is where agency starts to matter more, not less. Agency is not just speed, nor mere willingness to act. It is the capacity to see what should be done, make the necessary trade-offs, and use tools to drive toward an outcome rather than passively accept the status quo.
That is also where taste comes in — and not in any narrow aesthetic sense. Taste is not ornament or polish. It lives in foundational choices: how you frame the problem, how you structure a system, which trade-offs you accept, and what quality bar you are willing to enforce. It determines what stays open-ended, what becomes deterministic, what gets simplified, and what gets left out. In short, what good looks like.
Across Factor over the last two to three years, this has meant a willingness to move faster without becoming sloppier, and to keep rethinking the shape of the work as the tools improve. It has meant revisiting architectures when better model capability allows for better system design, replacing things that once worked well enough when they no longer meet the bar, and refusing to let old decisions harden into doctrine simply because they once made sense.
The models will continue to improve — faster than we think, and more slowly than we would like. But the benefits will not map neatly to size, prestige, or historical strength. They will map to clarity of thought, speed of adaptation, and willingness to let go. Most people will have access to the tools. The edge will come from rethinking the work itself around what the tools make possible.
That requires humility. It requires admitting that the old ways may no longer be the right ones. It requires being willing to discard what you built yesterday if a better structure becomes possible today. It requires taking pride in the outcome more than in your authorship.
All of this starts with AI fluency as a baseline, but it also requires updating priors and shedding old ways of working. That is difficult for both people and organizations. But it is a reality worth grappling with.
If these questions are live in your team right now, get in touch.