Interview: From "Pointed Solution" to Process: The Real Barrier to Scaling AI
Your company may be one of the many that excel at identifying AI use cases and running POCs. But what comes next? A huge gap exists between a successful POC and a solution that is successfully scaled across the enterprise. The early promise of small-scale pilot projects often dies on the vine, failing to integrate into the larger operational workflow.
As a leader in the highly regulated energy industry, Shyam Perugupalli, Chief Information Officer at Strata Clean Energy, has navigated this exact challenge. His focus is on the practical, structural, and governance-based steps required to make AI sustainable, so that it can move beyond the initial "wish list" to create real business value. Isobel Singh asked him about the hurdles AI must clear to become part of the business process journey.
Isobel Singh, Event Director: What are the most persistent barriers you’ve faced when scaling AI from proof of concept to enterprise-wide deployment?
Shyam Perugupalli: The biggest issue I’ve seen is integrating the AI solution into the broader business. POCs are often "pointed solutions" designed to solve one specific problem. But to scale, you need a conduit to plug the solution into the overall business process, and that forces you to step back and do real business process re-engineering. You must manage the change to the process itself, rather than treating AI as a standalone solution. That is what makes scaling hard.
Isobel Singh, Event Director: You mentioned the need to "manage the change to the process." Can you give an example?
Shyam Perugupalli: We are implementing an AI solution that uses drone footage to check if a construction site is on schedule and adhering to design standards. The tool gives us the "on the ground reality," but that's not the end. We need a closed-loop solution to integrate that feedback right into our planning processes, adjust construction workflows, and identify root causes. It’s not just about getting the output from the model; it’s about making sure that feedback is integrated with the rest of your operational processes.
Isobel Singh, Event Director: How do you decide which AI initiatives are worth scaling?
Shyam Perugupalli: We have an open process where anyone in the organization can propose an AI use case. After a successful pilot, our governance process uses two main criteria to decide on scaling:
First, feasibility: do we have the right data and infrastructure? And, just as importantly, is the business willing and able to maintain and sustain the solution going forward?
Second is business benefit: we are not looking to eliminate resources. That was never our objective. But we are asking whether the solution will meaningfully improve efficiency, strengthen our operations, or create opportunities for innovation and customer value.
Isobel Singh, Event Director: What does a resilient AI governance model look like in your organization?
Shyam Perugupalli: We consciously decided to place all AI investments under a single umbrella, in our case, the CIO's office. Our governance model focuses on evaluation, development standards, and robust vendor assessment. First, we evaluate, using the feasibility and business benefit criteria to decide what moves forward.
Development standards provide clear guidelines for in-house builds, including which models and libraries can be used and what security requirements must be met.
Finally, the AI ecosystem has thousands of tools, many of them from startups. Given that we're in a highly regulated industry, we must apply robust vendor assessment and be extremely careful with data privacy and security.
Isobel Singh, Event Director: How do your vendor partnerships accelerate or hinder scalability?
Shyam Perugupalli: We have a clear "build vs. buy" strategy. If it's a core business process where we offer differentiation, we build it in-house (we're a Microsoft shop). If it's a commodity activity, we procure it from a third party. The risk is that many vendors are small and might not be viable long-term.
To mitigate this, we might use a small, low-cost vendor during the POC stage. But once the use case is validated for scaling, we will look for a mainstream vendor or build it ourselves to ensure long-term stability and support.
Isobel Singh, Event Director: What metrics do you rely on to measure success and Total Cost of Ownership (TCO)?
Shyam Perugupalli: For most new initiatives that we scale, we are not actively measuring the returns just yet; we prefer to let them run for at least a year so we can observe meaningful, tangible benefits. The exception is Copilot, which we’ve rolled out across the organization.
It provides out-of-the-box metrics like "assisted hours," which is a good proxy for cost efficiencies and productivity. For other scaled pilots, we will eventually measure them against the initial ROI metrics we used for approval, but we’re giving them time to prove their value.
Join the Conversation in Houston!
Shyam’s insights on governance and business process re-engineering are just the starting point for the operator-led discussions you'll find at the AI in Energy Summit.
Join a curated group of 250+ senior operations, digital, data, and AI leaders from utilities, oil and gas, power generation and renewables to learn what’s actually working in GenAI, ML Ops, and digital transformation. Hear directly from those who’ve taken AI from proof of concept to full deployment.
Register your place today and ensure your AI strategy is built for the future.