AI in Practice: What Works, What Doesn’t & What’s Next?

“The biggest mistake is believing that AI is a silver bullet or magic.”
AI promises revolutionary change, but are businesses ready for the reality? Companies are rushing to deploy new AI tools in pursuit of fast, consistent, error-free operations, often before fully grasping the inherent challenges. Leaders are hearing about exponential leaps in automation, but do they need a reality check on what's truly achievable right now?
Ahead of the upcoming Operational Excellence in Oil & Gas Summit, Jonathan Alexander, Global AI and Analytics Lead for Manufacturing Excellence at Albemarle, spoke to Isobel Singh, Event Director, about the profound impact of artificial intelligence on industry, common pitfalls in implementation, and his vision for the future of AI in manufacturing.
Isobel Singh, Event Director: Jonathan, could you tell us a bit more about your role?
Jonathan Alexander: My role involves leading our global AI and analytics efforts within Manufacturing Excellence. Our core job is to leverage modern technology, like AI and advanced analytics, to empower our operators and engineers who run our chemical plants. We aim to help them improve yields, enhance quality, boost equipment reliability, and increase overall productivity at our manufacturing sites. Essentially, our global team provides internal consulting, enabling them with the necessary technology, tools, training, coaching, and advice.
Isobel Singh, Event Director: When it comes to scaling AI projects, is there a standardized framework, or does it depend on the type of AI being implemented?
Jonathan Alexander: If anyone claims there's one universal way to implement AI, they're misleading you. AI is an incredibly broad category. When we consider applying AI, there are three main categories of machine learning: unsupervised, supervised, and reinforcement learning. Generative AI, like ChatGPT or Grok, is just one small, albeit currently prominent, subset of the overall AI landscape.
Unsupervised machine learning excels at finding patterns, outliers, and anomalies in large datasets without any prior context (it doesn't know if the data is "good" or "bad"). This is very useful for anomaly detection in manufacturing. Supervised machine learning is used for making predictions, where the model is trained on labeled data (e.g., classifying images as cats or dogs). Reinforcement learning involves an AI learning through trial and error within a defined rule set, much like an AI playing chess or Go. These are all part of traditional machine learning, and Generative AI, which creates new information, is a distinct category.
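To make the unsupervised case concrete, here is a minimal sketch of the kind of anomaly detection Jonathan describes, written in Python using scikit-learn's IsolationForest. The sensor columns, simulated values, and 1% contamination rate are illustrative assumptions, not details from the interview.

```python
# Minimal sketch of unsupervised anomaly detection on plant sensor data.
# Assumptions: scikit-learn is installed; the sensor columns, simulated
# values, and 1% expected anomaly rate are illustrative, not from Albemarle.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated readings: temperature (deg C) and pressure (bar) from one unit,
# with a handful of off-normal readings mixed in.
normal = rng.normal(loc=[350.0, 12.0], scale=[5.0, 0.5], size=(1000, 2))
faults = rng.normal(loc=[390.0, 9.0], scale=[5.0, 0.5], size=(10, 2))
readings = np.vstack([normal, faults])

# No labels are given: the model only learns what "typical" looks like
# and flags points that don't fit that pattern.
model = IsolationForest(contamination=0.01, random_state=42)
flags = model.fit_predict(readings)  # -1 = anomaly, 1 = normal

print(f"Flagged {int((flags == -1).sum())} of {len(readings)} readings as anomalous")
```

Swapping in labeled failure records and a classifier would turn this into the supervised case; the point of the unsupervised approach is that it needs no such labels.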
Isobel Singh, Event Director: What would you say are the main mistakes operators make when scaling AI initiatives enterprise-wide?
Jonathan Alexander: The biggest mistake is believing that AI is a silver bullet or magic. It simply doesn't work that way. A primary pitfall is data quality – the old adage "garbage in, garbage out" is very true in this space. While modern Generative AI can sometimes produce interesting results even with imperfect data, truly incredible outcomes require high-quality data. Businesses need to start treating their data as a strategic asset, not merely a byproduct. Better data quality, contextualization, and governance will unlock opportunities for years to come.
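As one concrete way to act on "garbage in, garbage out", a pipeline can refuse to train or score a model until a batch of data passes basic checks. The sketch below is an illustration, not Albemarle's tooling; the column names, ranges, and completeness threshold are assumptions.

```python
# Minimal sketch of "garbage in, garbage out" guardrails before model training.
# Assumptions: pandas is installed; the column names, valid ranges, and the
# completeness threshold are illustrative, not taken from the interview.
import pandas as pd

EXPECTED_RANGES = {"temperature_c": (0.0, 600.0), "pressure_bar": (0.0, 50.0)}
MIN_COMPLETENESS = 0.95  # require at least 95% non-null values per column

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of data quality problems; an empty list means the batch is usable."""
    problems = []
    for col, (lo, hi) in EXPECTED_RANGES.items():
        if col not in df.columns:
            problems.append(f"missing column: {col}")
            continue
        completeness = df[col].notna().mean()
        if completeness < MIN_COMPLETENESS:
            problems.append(f"{col}: only {completeness:.0%} complete")
        out_of_range = int((~df[col].dropna().between(lo, hi)).sum())
        if out_of_range:
            problems.append(f"{col}: {out_of_range} readings outside [{lo}, {hi}]")
    return problems

batch = pd.DataFrame({"temperature_c": [350.0, None, 9999.0],
                      "pressure_bar": [12.1, 11.8, 12.0]})
issues = validate(batch)
print(issues or "batch passed basic quality checks")
```

Failing fast like this turns data quality from a vague aspiration into a gate the AI initiative can't silently bypass.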
Another common mistake is misapplying AI technology. You need to understand what an AI methodology or algorithm is genuinely good at, and what it's not. For example, trying to use a large language model like ChatGPT for highly advanced machine learning tasks, such as neural network development, isn't its optimal use and will likely lead to disappointment. Having a clear line of sight on the right way to apply AI is crucial.
Isobel Singh, Event Director: You touched on applying AI to the right use cases. Do you believe AI, regardless of its type, will always need a holistic human viewpoint when implemented?
Jonathan Alexander: I firmly believe the right approach is to always have humans in front of AI, not behind it. While the distant future is uncertain, we're not at a point where robots are omnipresent, nor do I foresee that happening anytime soon.
AI tools should primarily support and augment human capabilities, automating manual tasks and accelerating information ingestion. Think about how much faster research is today compared to when we had to rely solely on libraries and physical scanning. While AI can handle basic decisions, complex decision-making requires human capabilities like reading contextual clues, body language, and emotional intelligence, which AI simply doesn't possess yet. The core benefit of AI lies in enabling people to reduce manual tasks, automate information processing, and even assist with content generation, such as making writing easier for someone who struggles with it.
Isobel Singh, Event Director: You mentioned "garbage in, garbage out" regarding data. How important are data governance and cybersecurity, especially with the rapid implementation of AI tools?
Jonathan Alexander: Data quality has been preached for decades. The challenge is that a standalone data quality project can be a tough sell. What we're seeing now is that companies often start an AI initiative, and then, as they progress, they quickly realize they have significant data quality issues. They're forced to fix that "garbage" before they can get meaningful results. So, while data quality and contextualization are hugely important, sometimes you have to just get started and learn along the way. However, going in with the right expectations about data's foundational role is critical.
Isobel Singh, Event Director: AI tools are rapidly maturing. What role do you foresee them playing in shaping the future of major industries like manufacturing?
Jonathan Alexander: I believe we're in the largest industrial revolution since the internet, and before that, the computer, electricity, and even the steam engine. What's happening now is truly massive.
However, regarding the future of AI in manufacturing, I don't foresee fully automated, 100% human-free plants for the next 100 years. Consider the complexity of an oil and gas refinery – it's arguably 10,000 to 100,000 times more complex than an automated vehicle, and we're not even all driving fully automated vehicles yet. The manufacturing sector has vast areas without sensors for measurement, relying on visual or auditory human input. Automating everything across massive units would require immense investment in new instrumentation and automation layers. Unless there's an astronomical increase in profit margins – for instance, oil prices increasing tenfold – it's simply not economically viable to replace every single piece of existing equipment globally to achieve full automation.
Instead, you'll see a wealth of AI-enabled opportunities. Anything digital, already on a computer, is fair game for AI enhancement. But for many physical tasks, while the automation technology may exist, the economics of adding such layers to existing massive assets rarely provide a reasonable return on investment. Complete, AI-driven overhauls of our global asset base are a century away and would require fundamental economic shifts to justify the investment.
Join us on November 5 in Houston at the Operational Excellence in Oil & Gas Summit as Jonathan delivers a case study on "Breaking Free from Pilot Purgatory: Realizing Economies of Scale with AI, ML, & Operational Excellence." Download the Event Guide to learn more.