Ensuring big data doesn’t mean big confusion in the asset swamp
The fact is, most companies just don’t know what to do with the majority of the data they’re collecting. So it just sits there.
Businesses need to step back and rethink whether all this data needs to be collected in the first place. It is simply a waste of resources to gather and store data that serves no purpose.
In the not-so-distant past, most enterprise applications were based on a range of data heavily constrained by what was collectable.
It was hard to acquire data and ensure it was of high quality – and it still is for some types of data. Now, with the Internet of Things and the ability to collect data automatically from a wide variety of sources, you can easily end up with overwhelming amounts of it.
Big data can often result in big confusion.
Apportioning worth to data
Some of the data has a natural home, such as asset registers and technical records. Some can be distilled, analysed and converted into useful management information.
That said, a large amount of it falls into the category of “it might be useful one day” – a sprawling, often unstructured mix of activity records, asset performance and condition attributes, sometimes having localised or temporary usage but often collected simply because it is now easily collectable.
Therefore, the real challenges are to understand what data is worth collecting in the first place, and why. Then we have to put it into organised repositories that are more like a library and less like a swamp.
Here are some ideas on how to ensure you store more relevant data, with a clearer understanding of why it is needed, without it becoming a messy liability that is neither used nor trusted.
The demand-driven supply chain
Think of data as part of a demand-driven supply chain in which justification for collection, retention and usage has to be made from the business risk or cost of not having it to the appropriate standard at the right time.
The apparently low cost of acquiring data, and the vague motive that “it might prove useful”, are not enough to justify collection and retention. This bucks the trend of treating data provision as an availability-driven process that then triggers a search for uses.
Demand-driven thinking requires a greater understanding of how the data will be used, which selective extractions will be made from it, and what business value its use achieves.
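To make the demand-driven test concrete, it can be expressed as a simple decision rule: a data item earns its place only if it has a stated use and the business risk or cost of not having it outweighs the cost of collecting and storing it. The sketch below is purely illustrative – the names, fields and figures are hypothetical assumptions, not part of any SALVO specification:

```python
from dataclasses import dataclass

@dataclass
class DataRequest:
    """Hypothetical record describing a proposed data collection activity."""
    name: str
    annual_collection_cost: float  # cost of gathering and storing the data
    risk_cost_without: float       # business risk/cost of NOT having it when needed
    stated_use: str = ""           # the decision or process the data supports

def justify_collection(req: DataRequest) -> bool:
    """Demand-driven test: collect only if a concrete use exists and the
    cost of not having the data exceeds the cost of collecting it."""
    if not req.stated_use:  # "it might be useful one day" fails the test
        return False
    return req.risk_cost_without > req.annual_collection_cost

# A cheap feed with no stated use is rejected; a costly survey that
# supports a real decision is justified.
speculative = DataRequest("vibration feed", 500.0, 0.0)
justified = DataRequest("bridge condition survey", 20_000.0, 150_000.0,
                        stated_use="renewal timing decision")
print(justify_collection(speculative))  # False
print(justify_collection(justified))   # True
```

In practice the "risk cost" side would come from the kind of risk and criticality analysis an asset management decision process produces, rather than a single number, but the shape of the justification is the same.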
The SALVO Project, a multi-industry R&D programme to develop innovative approaches to asset management decision-making, has yielded good examples of this approach.
Four of the necessary six steps in the SALVO decision-making process illuminate the demand-driven data specification. In our next article, we will go through those steps in detail.