
Master data management the easy way

  • Posted on November 21, 2016

Data Management

I met with a client that needed to deal with one of the most common and complex problems that organizations have – data management.  This oil/gas company had data coming out of its ears – production, seismic, HSE, down-hole sensor, log, financial… you get the picture.  Each department was its own silo; together they formed a sprawling archipelago of disconnected data islands.  Moving data between departments was painful and clunky, like throwing stuff on a boat and hoping whoever receives it finds something useful.

I used two or three analogies to explain the problem; my apologies.  Data management isn’t really analogous to anything, given its size, scope, and complexity.  Data management is like data management, and that’s it.  The problem isn’t data management as much as it is data integration: how to tie disparate pieces of data together in order to construct insights.  That’s the real problem – the language barrier between silos.

Some companies have the discipline to institute top-down master data management (MDM) programs that distill the data into its essential elements and define a standard format for data to be shared across the enterprise.  Others use tools like ETL (for tight coupling) or mediated schemas (for loose coupling) to overcome this problem, but then you are stuck with rigid one-to-one database relationships and/or orphaned data.  Still others try to do it the “quick and easy but really isn’t” way with Informatica, Tibco, or InfoSphere.  Sure, you get the enterprise bus, a data catalogue, and promises of universal connectivity… but your corporate data governance, internal data models and processes, and organizational fluidity fall to pieces.  Oh, and setting it up sucks.
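To make the coupling distinction concrete, here is a minimal Python sketch (all field and function names are hypothetical, not any vendor’s API): with tight coupling you write a bespoke mapping for every source/target pair, whereas with a mediated schema each source maps once to a shared format that any consumer can read.

```python
# Minimal sketch of tight vs. loose coupling; all field names are invented.

# Tight coupling: a bespoke ETL mapping for every source/target pair (N x M mappings).
def production_to_finance(row: dict) -> dict:
    return {"well": row["WELL_ID"], "volume_bbl": row["OIL_VOL"]}

# Loose coupling: each source maps once to a shared, mediated schema.
MEDIATED_FIELDS = {"well_id", "measured_at", "value", "unit"}

def production_to_mediated(row: dict) -> dict:
    return {"well_id": row["WELL_ID"], "measured_at": row["DATE"],
            "value": row["OIL_VOL"], "unit": "bbl"}

def sensor_to_mediated(row: dict) -> dict:
    return {"well_id": row["well"], "measured_at": row["ts"],
            "value": row["pressure_psi"], "unit": "psi"}

# Any consumer reads the mediated form, regardless of which silo produced it.
record = production_to_mediated({"WELL_ID": "W-042", "DATE": "2016-11-21", "OIL_VOL": 512})
assert set(record) == MEDIATED_FIELDS
```

The trade-off is visible even at this scale: the point-to-point mapping is quicker to write, but every new source or target multiplies the number of mappings, while the mediated schema grows only linearly with the number of sources.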

Every company has its own way of managing data, but you commonly see multiple data warehouses for the same system, each built for a different set of end users.  I realized that overcoming organizational inertia, political realities, departmental lines, and resistance to change in order to create and implement a top-down master data model would be impossible.  Ultimately, understanding the relevance of data specific to arcane technical disciplines was too complex an undertaking.  Understanding the systems, applications, transmittal and ingestion technology, and various data management reference architectures would take years.  Even if we could aggregate and organize this data in a master data warehouse, by the time we were done it would have already changed.

So we pitched a modified master data management strategy:

  • Bottom-Up Data Models: The analysts who create, transform, or ingest the information on a regular basis know the data best. They will know more about the characteristics, value, velocity, accuracy, and nuances of the data than any enterprise architect brought in to define a data model. The goal is a universal mediated schema that allows any-to-one data integration.
  • Enterprise Service Bus: This is where workflow and business processes are instantiated. It also serves as a master communication hub between the various nodes connected to it. An ESB can carry event-driven messages capable of kicking off real-time updates to both data and applications.
  • SOA-enabled Data Connectors: Connectors are universal and have only two functions: ingesting and sending. The adapters transform the data to adhere to a hierarchically defined data model – a master data model that everyone has to map to, and that grows more complex and detailed as you go down the food chain (see the sketch after this list).
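As a rough illustration of how those three pieces fit together, here is a short Python sketch (class, field, and event names are all hypothetical, not AMAP or any ESB product’s API): each connector exposes only ingest and send, maps incoming records to the mediated schema, and publishes an event-driven message to the bus so downstream systems update in near real time.

```python
from queue import Queue
from typing import Callable

class ServiceBus:
    """Stand-in for an ESB: routes event messages to subscribed handlers."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, event: dict) -> None:
        for handler in self._subscribers:
            handler(event)

class Connector:
    """Universal connector: only two functions, ingest and send."""
    def __init__(self, bus: ServiceBus, to_mediated: Callable[[dict], dict]):
        self.bus = bus
        self.to_mediated = to_mediated   # mapping into the shared, mediated schema
        self.outbox = Queue()            # records handed off to the target system

    def ingest(self, raw: dict) -> None:
        record = self.to_mediated(raw)   # map to the bottom-up data model
        self.bus.publish({"type": "record.updated", "payload": record})

    def send(self, record: dict) -> None:
        self.outbox.put(record)

# Wiring: the finance connector reacts whenever production publishes an update.
bus = ServiceBus()
production = Connector(bus, lambda r: {"well_id": r["WELL_ID"], "value": r["OIL_VOL"]})
finance = Connector(bus, lambda r: r)
bus.subscribe(lambda event: finance.send(event["payload"]))
production.ingest({"WELL_ID": "W-042", "OIL_VOL": 512})
```

In a real deployment the workflow and routing logic would live in the bus product itself; the point of the sketch is only that connectors stay simple and universal while the mediated model and the event messages carry the integration burden.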

It’s important to note that most companies need master data management.  It is the only way to ensure data consistency and it simplifies business analytics.  There are many ways to make this happen.  A path forward is predicated upon the structure, complexity, and volume of your current data sets, and success is not some imagined goal post but rather the slow and orderly integration of data and applications.  Ultimately, what you want is for your company to leverage that data in order to be better informed for decision support, which will lead to higher profitability.  Right?

There is an easier way.

Avanade has put together an end-to-end analytics platform that manages everything from ingestion to AI.  AMAP (Avanade Modern Analytics Platform) is a true platform-based approach to big data; Avanade has taken core open-source components of Azure and strung them together as a platform that can accommodate 80% of complex workloads, with intervention required only for data cleansing, ingestion, and potentially custom algorithmic development.  Yes, data normalization and cleansing are still relevant, but at least you’re not worried about the enterprise service bus, SOA adapters, or even stringing disparate pieces of the big data ecosystem together.  The locus of investment shifts from technology design and deployment to garnering insights from the data.

The future of analytics and decision support in the E&P space is a challenging one; real-time analysis of data for decision support can have consequences that run into the millions of dollars a day.  Solutions like Avanade AMAP look at the data problem from a client perspective – how to drive value and decrease NPT on a subscription basis, rather than investing large sums of time and money in building out big data infrastructure.  Ultimately, I see a world where big data is an integral part of every operation; AMAP serves not just as a bridge between today’s practice and tomorrow’s promise, but as a guide for those visionaries with the mettle to bind analytics and economics together.

