
The importance of analytics in the data governance process

When it comes to data, one of the areas that really interests me is the connection between data analytics and data governance. There is obviously a lot of material available on how data governance can help organisations achieve better data analytics outcomes; however, it was a paragraph focusing on analytics-enabled data governance in the Precisely.com article ‘The Intersection of Data Analytics and Data Governance’ that piqued my curiosity.

The article begins with something many of us can identify with – data exhaustion. This notion resonated with me, especially at a time when we are battling to get organisations to properly embark on a data-centred journey. The Precisely piece examines the idea of data management symbiosis, where concepts found in nature are applied to the data ecosystem, and the author highlights the connection between data analytics and data governance. What I like about this is that it points out how the connection is symbiotic – in other words, a two-way relationship.

Of course, properly implemented data governance plays a significant role in improving and guaranteeing good and useful analytical outcomes for an organisation. As part of this, and as the article mentions, I am a huge advocate for the likes of a business glossary, data dictionaries and lineage, and metadata management. With these elements in place, as the author writes, users are informed about ‘the source, use, relationships, and definitions related to data – including business terms, attributes, and dependencies.’

More importantly, such approaches ensure that responsibility is assigned to the organisation's data. In turn, this makes the data more accessible and encourages a comprehensive approach to data use inside the organisation, while also improving the quality of the data.

However, for me, the crux of the article is summarised in these sentences: ‘With analytics-enabled data governance, machine learning algorithms can monitor and improve data quality across an enterprise, self-learning as issues are resolved. Improved data quality increases user trust in data reliability, and therefore increases data utilisation for analysis.’

By adopting this approach, organisations can improve their regulatory stance, as the machine learning algorithms can continually (and automatically) monitor for potential non-compliance issues. And by injecting the analytical component as an integrated facet throughout this process, the environment becomes more dynamic and helps decision-makers proactively identify areas where potential violations can occur. This automated intelligence is an important business enabler when dealing with the complexities of the likes of the General Data Protection Regulation (GDPR) in Europe, the Protection of Personal Information Act (POPIA) in South Africa, and the Privacy Act in Australia.
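To make this a little more concrete, here is one very simplified way such automated monitoring could work: scanning incoming records for values that look like personal data in columns that have not been registered as such in a metadata catalogue. The column names, patterns and catalogue below are purely my own illustrative assumptions – they are not prescribed by the article or by any of the regulations mentioned above.

```python
import re

# Hypothetical metadata catalogue: columns already registered as holding personal data.
REGISTERED_PERSONAL_DATA = {"customer_email", "id_number"}

# Simple patterns that suggest personal data (illustrative only, not exhaustive).
PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "phone": re.compile(r"\+?\d{9,13}$"),
}


def flag_potential_violations(rows: list[dict]) -> set[str]:
    """Return columns that appear to contain personal data but are not registered as such."""
    flagged = set()
    for row in rows:
        for column, value in row.items():
            if column in REGISTERED_PERSONAL_DATA or not isinstance(value, str):
                continue
            if any(pattern.search(value) for pattern in PATTERNS.values()):
                flagged.add(column)
    return flagged


if __name__ == "__main__":
    sample = [
        {"customer_email": "a@example.com", "notes": "call back on +27821234567"},
        {"customer_email": "b@example.com", "notes": "prefers email"},
    ]
    print(flag_potential_violations(sample))  # {'notes'}
```

In a real environment this kind of check would feed exceptions back into the governance workflow, so that a data steward can decide whether the column needs to be catalogued, masked, or cleaned.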

Several years ago, a colleague of mine used prediction algorithms and trend analyses to monitor the call data files received from a mobile network. These are massive files containing all the details about calls and messages travelling across a mobile network. Often, there are multiple files running into millions of records per file, per base station on the network. Using the predictive algorithms, she could monitor the call data files and raise exceptions for investigation if a received file's characteristics fell outside of what was predicted. This is an excellent example of using quite advanced data analytics to improve data quality management and to inform data governance initiatives.
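Her actual implementation was far more sophisticated, but the rough idea can be sketched as follows: keep a history of each base station's file characteristics (just record counts in this toy version), predict an acceptable band from that history, and raise an exception when a newly received file falls outside it. All the names and numbers below are made up purely for illustration.

```python
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class CdrFileStats:
    """Summary characteristics of one received call data file."""
    base_station: str
    record_count: int
    size_mb: float


def expected_range(history: list[int], sigmas: float = 3.0) -> tuple[float, float]:
    """Predict an acceptable band from recent history (simple mean +/- k * stdev)."""
    mu = mean(history)
    sd = stdev(history) if len(history) > 1 else 0.0
    return mu - sigmas * sd, mu + sigmas * sd


def check_file(stats: CdrFileStats, history: list[int]) -> str | None:
    """Return an exception message if the new file falls outside the predicted band."""
    low, high = expected_range(history)
    if not (low <= stats.record_count <= high):
        return (f"Exception: {stats.base_station} delivered {stats.record_count:,} records; "
                f"expected roughly {low:,.0f}-{high:,.0f}")
    return None


if __name__ == "__main__":
    # Illustrative record counts for one base station over the last few files.
    history = [2_010_000, 1_985_000, 2_040_000, 1_990_000, 2_025_000]
    new_file = CdrFileStats("BTS-0417", record_count=1_200_000, size_mb=310.5)
    issue = check_file(new_file, history)
    print(issue or "File within expected range")
```

The point is not the statistics – a production system might use seasonal forecasting or a learned model – but that the prediction turns raw file deliveries into governed, explainable exceptions.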

There are many well-documented sets of data quality metrics out there, many of which can be augmented or analysed through more advanced data analytics applications. I may even dive deeper into this topic in future posts, so watch this space!
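As a small taste of what I mean, here is a minimal sketch of two of the most common metrics – completeness and uniqueness – computed over a handful of hypothetical customer records. A real implementation would of course run against a proper data platform rather than a list of dictionaries.

```python
# Hypothetical customer records with some deliberate quality issues.
records = [
    {"customer_id": "C001", "email": "jane@example.com", "country": "ZA"},
    {"customer_id": "C002", "email": None,               "country": "ZA"},
    {"customer_id": "C002", "email": "ben@example.com",  "country": None},
]


def completeness(rows: list[dict], column: str) -> float:
    """Share of rows where the column is populated."""
    return sum(1 for r in rows if r.get(column) not in (None, "")) / len(rows)


def uniqueness(rows: list[dict], column: str) -> float:
    """Share of distinct values relative to row count (1.0 means no duplicates)."""
    values = [r.get(column) for r in rows]
    return len(set(values)) / len(values)


for col in ("customer_id", "email", "country"):
    print(f"{col}: completeness={completeness(records, col):.2f}, "
          f"uniqueness={uniqueness(records, col):.2f}")
```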
