Interview: Overcoming Barriers to Accelerate AI in Energy with Murphy Oil
The energy sector is aggressively pursuing AI, with countless “science projects” and proofs of concept (POCs) running in parallel. Yet a staggering number of these pilots fail to scale into valuable, enterprise-wide solutions. The hype is colliding with the hard realities of data uniformity, risk management, and a technology that evolves so fast that internal builds can become obsolete before they even launch.
Between the twin threats of falling behind and failed AI implementations, is there room for oil and gas companies to succeed?
We spoke with Adam Pryor, Manager - Strategic Analytics at Murphy Oil, to understand the persistent barriers to scaling AI and how to navigate the new, fast-paced landscape of technical debt and vendor partnerships.
Isobel Singh, Event Director: What are the most persistent barriers you’ve faced when scaling AI from a proof of concept to enterprise-wide deployment?
Adam Pryor: I see two major aspects. The first is data availability and uniformity. You might pick a data-rich POC, but as you scale across the organization, you discover differences in process and data collection that create challenges. At the end of the day, it's garbage in, garbage out.
The second barrier is the pace of change in the technology itself. The speed at which third-party tools develop is often much faster than you can build internally. We’ve had instances where we develop and deploy something, and there’s already a better tool on the market. You have to ask, "What is the development cycle?" If it's longer than eight weeks, you might want to wait and see if the technology catches up. You don't want to develop that technical debt.
Isobel Singh, Event Director: How do you decide which AI initiatives are worth scaling, and how do you ensure they stay aligned with evolving business priorities?
Adam Pryor: This is the age-old question for any technology. There are a lot of people who are technology-first, not problem-first. The real question is: is AI providing a differentiated value that justifies its use? During your MVP or pilot, you must be very clear on the business objective and the quantifiable value you expect. At the end of that pilot, you have to honestly assess whether it met that goal. If the value is there, it's no different from any other technology: you scale it once you've seen that value delivered.
Isobel Singh, Event Director: What does a resilient AI governance model look like, and how do you balance innovation with risk management at scale?
Adam Pryor: I think this is the question everyone is asking. Right now, it's an argument between cybersecurity and innovation, and frankly, cybersecurity is winning. The fear and risk associated with that side tend to be larger than the value I can demonstrate with my AI use cases so far. We're moving out of the “science project” phase and into agent frameworks using tools like LangChain or CrewAI. As the toolset matures and isn't just custom-built, it becomes easier for InfoSec to assess and understand the risks. As that starts to settle out, I think you're going to see a much healthier relationship between the two sides.
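(Editor's note: for readers less familiar with the agent frameworks Adam mentions, the sketch below shows roughly what a minimal CrewAI agent looks like. It assumes the crewai package and a configured LLM API key; the role and task are hypothetical illustrations, not Murphy Oil's actual setup.)

```python
# Minimal sketch, assuming the crewai package is installed and an LLM API key
# (e.g. OPENAI_API_KEY) is configured. The role, goal, and task below are
# hypothetical illustrations, not Murphy Oil's actual implementation.
from crewai import Agent, Task, Crew

analyst = Agent(
    role="Production Data Analyst",
    goal="Flag anomalies in daily well production summaries",
    backstory="Supports the strategic analytics team with routine data review.",
)

review = Task(
    description=(
        "Review the latest daily production summary and list any wells "
        "with unusual declines that need follow-up."
    ),
    expected_output="A short bulleted list of wells and why each was flagged.",
    agent=analyst,
)

# In practice the agent would be given tools for data access; omitted here for brevity.
crew = Crew(agents=[analyst], tasks=[review])

if __name__ == "__main__":
    # kickoff() runs the task through the configured LLM and returns the result.
    print(crew.kickoff())
```

Because the framework, rather than custom glue code, defines the agents, tasks, and orchestration, it gives InfoSec a documented, consistent surface to assess.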
Isobel Singh, Event Director: How do your vendor and ecosystem partnerships accelerate or hinder AI scalability in the energy space?
Adam Pryor: I'm of the mind that very few of us, short of the supermajors, have the internal capacity to develop this in-house. The partners you choose will make or break your AI program. This is tough because many of the most innovative players are smaller startups, and historically, the energy space likes big, established names on the invoices. It's a matter of choosing how fast you want to move.
The underrated partnership, I would say, is with the technology practice of your audit firms. They need to be able to understand and interpret the solutions you're building, because auditability is going to be the next big question.
Isobel Singh, Event Director: Finally, what metrics do you use to measure success and Total Cost of Ownership (TCO) when scaling AI?
Adam Pryor: For the measure of success, it’s no different than any other IT project: What value can I provide and what hard dollar takeout can I get?
Total Cost of Ownership is the hard one. I’ve asked this at many conferences, and the answer I get is: you're not going to be able to balance your technical debt right now. Historically, tech debt was a portion of a system. In the AI world, your tech debt could be the whole thing you just implemented that you have to rip out and replace next year.
The technology is moving at such a pace that your TCO could be in months versus years. I don't have a good answer on TCO, because you just have to be willing to spend the money and play the game.
Join the conversation in Houston, 2026!
Hear from a curated group of 250+ senior operations, digital, data, and AI leaders from utilities, oil and gas, power generation, and renewables, and learn lessons from the cutting edge of successful GenAI, MLOps, and digital transformation initiatives.
Register your place at the AI in Energy Summit today and ensure your AI strategy is built for the future.