There’s no shortage of hype around legal AI. Depending on who you ask, it’s either going to revolutionize legal work or it’s a glorified chatbot. The reality, as always, is more complicated.
Here’s the paradox we live every day at Factor: the technology is powerful, getting more reliable, and capable of genuinely astonishing things. And yet, across the industry, most deployments still feel stuck in first gear. Pilots stall. Adoption lags. Results vary.
So what’s really going on?
It turns out the blockers are not where most people think.
Let’s be clear: the models are capable. When designed and implemented well, AI systems can handle highly nuanced legal tasks—from cross-referencing contracts and playbooks to reasoning through complex clause interdependencies. As our R&D team has shown, these systems can achieve 90%+ accuracy and process tens of thousands of agreements in days, not months.
But raw capability doesn’t guarantee usable outcomes. Even the best AI requires orchestration—a combination of legal context, workflow design, UX, and continuous feedback. Without that, it’s just potential, not performance.
At Factor, our mission is to transform legal contracting at scale. That’s why we’re embedding AI into our workflows—automating contract reviews, spotting deviations from preferred positions, and recommending redlines. We’re currently running 17 live pilots across multiple clients.
This isn’t just a story of tools or internal learning. It’s a story about how we’re helping clients safely and effectively integrate generative AI into their legal workflows — learning what works and what doesn’t, so we can drive smarter, faster, and more reliable legal outcomes.
We are making rapid progress. But pilots can still stall when stakeholder mindsets or governance structures have not yet caught up. That is where the friction lies.
Here’s the hard truth: AI pilots in legal are messy.
• They’re political: one strong opinion can derail momentum.
• They’re imperfect: outputs need review and context.
• They’re uncertain: not all success is measurable right away.
We’ve had to unlearn the idea that AI adoption is clean, linear, or perfectly planned. It isn’t.
Legal professionals are trained to reduce ambiguity. AI workflows introduce it. They evolve, iterate, and improve over time. That requires not just different tools, but different thinking.
We’ve seen pilots stalled because:
• Senior stakeholders judged a prototype on aesthetics over substance
• Reviewers expected omniscience without providing context
• Compliance policies were written for a pre-AI world
In every case, the challenge wasn’t capability. It was mindset, governance, and trust.
Success isn’t just about AI performance. It’s about changing how people work and think. That’s why every pilot is a chance to transform while doing.
Where we’ve seen real traction, it comes down to four things: legal context, workflow design, UX, and continuous feedback—the same elements that make up orchestration.
Of course, building for orchestration isn’t simple. It requires sophisticated infrastructure, engineering, and evaluation capability—something most legal departments don’t have on their own. That’s why Factor’s R&D team, working hand-in-hand with our legal delivery experts, is central to making these systems reliable, explainable, and production-ready.
The biggest differentiator? We don’t run AI pilots in isolation. We bring together lawyers, delivery leads, AI specialists, and client stakeholders—coordinating change from multiple angles. This orchestration is what allows pilots to become real, scalable solutions.
This is where Factor excels. Our proprietary suite of AI tools, known collectively as FactorAI, powers solutions like Mosaic, our contract data extraction tool, and Negotiator, an AI co-worker that supports heavy lifting in contract preparation and analysis.
These tools are deeply embedded into our managed contracting ecosystem and shaped through real-world legal work, continuous user feedback, and collaborative design across legal, operational, and technical teams. It’s what we mean when we talk about legal-first AI—technology designed not just to assist lawyers, but to fit the way legal work actually happens.
Legal AI adoption is not being held back by technology. It’s being held back by legacy expectations that tools should be perfect, polished, and linear.
But innovation rarely follows a straight line.
We’re building the plane while flying it. And that’s OK.
When we get it right—when the models are orchestrated, the mindset is flexible, and the work is collaborative—the results speak for themselves. Faster contracting. Better risk visibility and portfolio-level intelligence. Trusted outputs at scale.
For example, after embedding AI in one client’s contracting process, we cut contract review time by 40% while improving compliance accuracy by 25%. These are the kinds of results that show what AI can deliver when properly orchestrated.
Every pilot is a building block in a bigger mission: to reshape how legal work gets done. These aren’t side experiments—they’re foundational steps toward a modern legal department where AI, people, and process work in concert to deliver business value.
That’s what we’re building. One evolving pilot at a time.