Martin’s Insights
Insights in business analytics, information management, business intelligence and data warehousing

Unlock new business insights through data fabrics (16 March 2022)


Seeing as I’m currently working for a large, federated organisation with significantly different and siloed business streams that are managed through a plethora of different systems – ranging from 30-year-old mainframes to modern in-cloud platforms – the topic of data fabrics is very interesting to me. Even more so given my background in databases, data governance, integration, business intelligence, and insights.

It should hardly come as a surprise that I got quite excited when I was pointed to this article on ITProPortal discussing data fabrics. The opening sentence was like music to my ears: ‘The central notion of data fabrics has arisen due to the distributed nature of modern data architectures and the increased pressure for data-driven insights.’

In a related article, data fabric is defined as ‘architecture that includes all forms of analytical data for any type of analysis that can be accessed and shared seamlessly across the entire enterprise.’

Its purpose, and again I quote, is to ‘provide a better way to handle enterprise data, giving controlled access to data and separating it from the applications that create it. This is designed to give data owners greater control and make it easier to share data with collaborators.’

According to Gartner, a data fabric architecture must support these four principles:

  • The data fabric must collect and analyse all forms of metadata.
  • It must convert passive metadata to active metadata.
  • It must create and curate knowledge graphs.
  • It must have a robust data integration backbone that supports all types of data users.

Discussions around active metadata-driven architecture make me especially excited. A colleague and I began our consulting careers by developing a metadata-driven database configuration and replication system and deploying it at two customers.

Understanding the data catalogue

Insights from the data fabric are centred on an active data catalogue. Users must be able to quickly and easily find and access the data they need to obtain business insights. The data catalogue provides a repository for all technical metadata, a business glossary, a data dictionary, and governance attributes.
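
To make the idea concrete, here is a minimal Python sketch of what an active catalogue entry might hold. The field names and the naive search are my own invention for illustration; real catalogues are far richer.

```python
from dataclasses import dataclass, field

# Hypothetical catalogue entry: the field names are my own invention,
# not any product's schema.
@dataclass
class CatalogueEntry:
    name: str                     # technical name of the dataset
    glossary_term: str            # business glossary term it maps to
    data_dictionary: dict         # column -> business description
    owner: str                    # governance: accountable data owner
    classification: str           # governance: e.g. "confidential"
    lineage: list = field(default_factory=list)  # upstream sources

catalogue = [
    CatalogueEntry(
        name="crm.customer",
        glossary_term="Customer",
        data_dictionary={"cust_id": "surrogate key", "segment": "marketing segment"},
        owner="Head of Sales",
        classification="confidential",
        lineage=["mainframe.CUSTMAST"],
    ),
]

def find(term: str) -> list:
    """Naive findability: match on technical name or glossary term."""
    return [e for e in catalogue if term.lower() in (e.name + e.glossary_term).lower()]

print([e.name for e in find("customer")])
```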

However, the catalogue not only documents the data resource; it also provides users with an interface to view what data is available and what analytical assets exist. Once users have access to the data and insights, they can re-use these to make decisions. Alternatively, they can create their own analytical assets using the data or by adapting existing assets to their needs. These, in turn, can be shared via the catalogue.

The catalogue is collaborative in the sense that if the data required is not catalogued, users can submit a request to get that data into the environment. This means that the technicians who build and maintain the data fabric must update the catalogue and notify the users of any additions, edits, or changes made to the data fabric, its data, or analytical assets. Data lineage and usage must be continuously monitored. This requires a level of modelling, cataloguing, and governance that must be continually applied.

Building blocks

There are several key components that contribute to the data fabric: the enterprise data warehouse (EDW), an investigative computing platform (ICP), and a real-time analysis engine. Data integration is key, including the extraction of data from sources and its transformation into a single version of the truth that is loaded into the EDW. The ETL – or, more likely, ELT – processes must create a trusted data source in the EDW, which is then used for producing reports and analytics. We’re obviously talking about a modern EDW here, catering for structured and unstructured data.
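
As a rough illustration of the ELT pattern, the sketch below loads raw staged rows first and only then transforms them inside the database engine into a trusted table. The table and column names are invented.

```python
import sqlite3

# A rough ELT sketch (table and column names invented): raw rows are
# loaded into a staging table first, then transformed *inside* the
# warehouse engine into a trusted, conformed table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stg_sales (sale_date TEXT, amount TEXT)")
con.executemany(
    "INSERT INTO stg_sales VALUES (?, ?)",
    [("2022-03-01", "100.50"), ("2022-03-01", "  99.50"), ("2022-03-02", "42.00")],
)

# The transform step runs as SQL after loading: trim, cast, and conform.
con.execute("""
    CREATE TABLE fact_sales AS
    SELECT sale_date, CAST(TRIM(amount) AS REAL) AS amount
    FROM stg_sales
""")
print(con.execute(
    "SELECT sale_date, SUM(amount) FROM fact_sales GROUP BY sale_date"
).fetchall())
```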

For the ICP (or data lake), raw data is extracted from sources and reformatted, integrated, and loaded into the repository for exploration or experimentation. This repository is used for data exploration, data mining, analytical modelling, and other ad hoc forms of investigating data.

In the past, the data warehouse and the investigative area were separated because they used incompatible technologies. But now that data storage has been separated from computing, the data warehouse and the ICP can be deployed on the same storage technology. The result is a layered enterprise data hub.

Data virtualisation is often used in the data fabric to access data because it removes the need to physically move data from one source to another. This is commonly referred to as ‘data democratisation.’
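
Here is a minimal sketch of the virtualisation idea using SQLite’s ATTACH: a view federates two physically separate databases without copying any rows. The database and table names are, of course, made up.

```python
import os
import sqlite3
import tempfile

# A minimal data virtualisation sketch: a view spans two separate
# databases without moving rows. Names are invented for illustration.
folder = tempfile.mkdtemp()
for name in ("policies", "claims"):
    db = sqlite3.connect(os.path.join(folder, f"{name}.db"))
    db.execute(f"CREATE TABLE {name} (customer_id INTEGER)")
    db.execute(f"INSERT INTO {name} VALUES (1)")
    db.commit()
    db.close()

con = sqlite3.connect(os.path.join(folder, "policies.db"))
con.execute("ATTACH DATABASE ? AS claims_db", (os.path.join(folder, "claims.db"),))
con.execute("""
    CREATE TEMP VIEW customer_360 AS
    SELECT customer_id, 'policy' AS source FROM policies
    UNION ALL
    SELECT customer_id, 'claim' FROM claims_db.claims
""")
print(con.execute("SELECT * FROM customer_360").fetchall())
```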

Real-time analysis is a new area of analytics focused on analysing data streaming into the company before it is stored. It is a significant addition to the range of analytical components in the data fabric. Usage patterns often determine where, and in what format, data is stored and analysed.
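
A toy sketch of the idea: analyse events as they stream in, before anything is stored. The window size and spike rule are arbitrary choices for illustration.

```python
from collections import deque

# A toy real-time check: analyse events as they arrive, before storage.
# The window size and spike rule are arbitrary choices for illustration.
def rolling_alerts(stream, window=5, threshold=2.0):
    recent = deque(maxlen=window)
    for value in stream:
        if len(recent) == window and value > threshold * (sum(recent) / window):
            yield value, sum(recent) / window   # flag spikes vs the recent mean
        recent.append(value)

events = [10, 11, 9, 10, 12, 55, 10]            # 55 should trigger an alert
for value, mean in rolling_alerts(events):
    print(f"spike {value} against rolling mean {mean:.1f}")
```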

Of course, all the databases in the organisation form part of the data fabric environment. Look out for a subsequent post where I will discuss the advantages of this approach and technology.

Telling powerful stories with data visualisation (10 February 2022)


Long-time readers of this blog know I’m a big advocate for using data visualisations to better narrate the ‘stories’ hiding inside organisational data. My interest was therefore suitably piqued when I came across this article on the Bulletin Expert Contributor Network. And while it doesn’t make any wild discoveries, I enjoy the way it presents seven ways to improve data visualisations.

I fully agree with the author’s statement that we often spend too much time and effort on improving our data manipulation and analytics skills while neglecting the visualisation side of things. Having good visualisation skills is crucial as it enables us to convey insights to non-technical, business-oriented decision-makers in a user-friendly and memorable format.

Keep it simple

While I hate the ‘stupid’ added to this common acronym, I do like the way the author uses the data-to-ink ratio as the measure to drive this point home. Way back when I did my data visualisation course with Stephen Few, we spent a lot of time focused on this and it is still very relevant today.
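
For those who want to experiment, here is a small matplotlib sketch of raising the data-to-ink ratio: the same invented data, with the non-data ink (frame and tick marks) stripped away.

```python
import matplotlib.pyplot as plt

# A minimal sketch of raising the data-to-ink ratio: same data, less
# non-data ink. The figures are invented.
values = [23, 17, 35, 29]
labels = ["North", "South", "East", "West"]

fig, ax = plt.subplots()
ax.bar(labels, values, color="#7da7c4")
ax.spines["top"].set_visible(False)    # drop the frame around the plot
ax.spines["right"].set_visible(False)
ax.tick_params(length=0)               # remove tick marks, keep the labels
ax.set_title("Sales by region")
fig.savefig("sales_by_region.png")
```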

Choose the right chart

You must use the appropriate chart to get the right message across. The article shows which chart types work best in several practical scenarios. Although it mentions line graphs under continuous numerical data, I would explicitly call out the use of line charts for time-related data.

While the article rightly mentions that you can use pie charts for small amounts of categorical data and for comparisons, we were instructed never to do this, as the eye is poor at comparing shapes that aren’t horizontally or vertically aligned. We were also instructed never to use 3D charts, for the same reason.
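
As a quick illustration of the alternative we were taught, a sorted horizontal bar chart puts the categories on one aligned axis, which the eye judges far more reliably than slices. The data is invented.

```python
import matplotlib.pyplot as plt

# Instead of a pie chart: a sorted horizontal bar chart aligns the values
# on a common axis, which the eye compares far more reliably than slices.
shares = {"Product A": 34, "Product B": 29, "Product C": 22, "Product D": 15}
pairs = sorted(shares.items(), key=lambda kv: kv[1])

fig, ax = plt.subplots()
ax.barh([name for name, _ in pairs], [value for _, value in pairs])
ax.set_xlabel("Market share (%)")
fig.savefig("market_share.png")
```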

Visualise one aspect per chart

If you try to display too many aspects on a single chart, the messages can get mixed up. Having said that, some of the better visualisation tools let you easily encode three aspects of a single variable on one chart. For example, you can use a bar graph to represent the counts of a particular variable, the widths of the bars to denote another aspect, and the colour of the bars for a third. You can even add average and median lines to give an indication of spread, so long as it is all clear and easily understood.
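
A small matplotlib sketch of this idea, with invented retail data: bar height carries the count, bar width a second aspect, colour a third, plus a reference line.

```python
import matplotlib.pyplot as plt
import numpy as np

# Three aspects of one variable on a single chart, as described above:
# bar height = count, bar width = a second aspect (scaled), colour = a
# third aspect. All figures are invented.
counts = np.array([120, 90, 150])        # aspect 1: height
basket = np.array([1.0, 2.0, 1.5])       # aspect 2: width
returns = np.array([0.05, 0.20, 0.10])   # aspect 3: colour intensity

x = np.arange(len(counts)) * 3.0         # space bars so widths never overlap
colors = plt.cm.Blues(0.3 + (returns / returns.max()) * 0.6)

fig, ax = plt.subplots()
ax.bar(x, counts, width=basket, color=colors)
ax.axhline(counts.mean(), linestyle="--", label="average")  # spread indicator
ax.set_xticks(x)
ax.set_xticklabels(["Store 1", "Store 2", "Store 3"])
ax.legend()
fig.savefig("three_aspects.png")
```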

Spice up your axis range

This part of the article is well presented, with illustrations of how the appropriate axis scale can be used to emphasise key differences, especially when those are relatively small in comparison to a larger measure, such as totals.

The article refers to this as spicing up the axis. I think that’s taking it a bit far; this is simply about using the appropriate scale.
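
A minimal sketch of the point: the same invented data plotted from zero, and again with a zoomed axis range that makes the small movement visible. Label the zoomed axis clearly when you do this.

```python
import matplotlib.pyplot as plt

# The same data on two axis ranges: zooming the y-axis makes a small but
# real movement visible next to large totals.
totals = [1005, 1010, 1002, 1025]

fig, (full, zoom) = plt.subplots(1, 2)
full.plot(totals)
full.set_ylim(0, 1100)
full.set_title("From zero")
zoom.plot(totals)
zoom.set_ylim(995, 1030)
zoom.set_title("Zoomed range")
fig.savefig("axis_ranges.png")
```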

Use transformations to emphasise change

Again, there are nice, concise explanations, with illustrations of how data transformations, such as logarithmic scales, can be used to emphasise small differences.
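
For example, a logarithmic y-axis keeps small values readable next to very large ones, emphasising relative rather than absolute change:

```python
import matplotlib.pyplot as plt

# A logarithmic y-axis keeps small values readable next to huge ones,
# emphasising relative (percentage) change rather than absolute change.
values = [10, 100, 1_000, 10_000]

fig, ax = plt.subplots()
ax.plot(values, marker="o")
ax.set_yscale("log")
fig.savefig("log_scale.png")
```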

Scatter your points in your scatter plot

Another catchy heading to indicate how you can use different icon types and opacity when you have a lot of points scattered closely together. In more advanced visualisation tools, you can also use icon type, icon size, and colour density to convey related meaningful information about the variable being plotted.
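
A short sketch of the idea with random data: opacity reveals the overplotting, while marker size and colour carry an extra attribute of each point.

```python
import matplotlib.pyplot as plt
import numpy as np

# Sketch for dense scatter plots: opacity (alpha) reveals overplotting,
# while marker size and colour carry an extra attribute of each point.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + rng.normal(scale=0.5, size=5000)
weight = rng.uniform(10, 60, size=5000)

fig, ax = plt.subplots()
ax.scatter(x, y, s=weight, c=weight, alpha=0.2, cmap="viridis")
fig.savefig("scatter.png")
```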

Pick your palette wisely

This plays directly into the data-to-ink ratio, as well as into the attractiveness of the visual and the ease with which it can be used. When we first start designing visualisations, we tend to use bright, contrasting colours. That might be fine for a once-off proof of concept, but most people will be put off by overly bright colours, especially when using the graph multiple times per day. A pastel-type palette with enough contrast is easier on the eye and more comfortable to work with in the long run.
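
A tiny sketch of the difference, using matplotlib’s built-in Pastel1 palette in place of fully saturated defaults:

```python
import matplotlib.pyplot as plt

# A tiny palette sketch: matplotlib's built-in Pastel1 colours instead of
# fully saturated primaries; softer colours wear better under daily use.
values = [23, 17, 35, 29]
labels = ["Q1", "Q2", "Q3", "Q4"]

fig, ax = plt.subplots()
ax.bar(labels, values, color=plt.cm.Pastel1.colors[: len(values)])
fig.savefig("pastel_palette.png")
```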

Then there is the type of device the visualisation is targeted at, which must also be considered. Something that displays well on a 19-inch screen might not work so well on a smartphone, and vice versa.

My two cents

Overall, this is a nice and sensible article which highlights the key aspects to consider in visualisation. It does so succinctly, arranged under some catchy headings. I’m always glad when key information gets refreshed and re-presented to keep on reinforcing the points, as well as to present them to data scientists who are new to the field.

Demonstrating value as a Chief Data Officer (31 January 2022)


The year has started with the proverbial bang. But even though the pace has been furious, there are many constants in which we can find comfort. One of these is how critical data is to every organisation regardless of industry vertical.

In my reading travels, I have come across this article published on Upside in June last year. In it, the author Hannah Smalltree examines how Chief Data Officers can help accelerate success at an organisation.

One of the things I like about it is that she positions the article from the point of view of making an impact through quick wins. This is especially the case when you are new in the CDO, CDAO, or CAO role. We all know it is much easier to achieve success in an established role, or when you transition from a CIO position in which you already have credibility. But when you are new, or the role is new, this credibility must still be established. What makes this even more difficult is that you need to prove yourself against a backdrop of scepticism.

Embrace the cloud to shave months off modernisation

This is an interesting point. Using a modernised stack in the cloud for analytics, if done correctly, can be a massive time and cost saver. But as a new CDO, and especially a new CDAO/CAO, would this really fall under your remit? Would you have a say on infrastructure and systems from day one? In some larger organisations, this responsibility could fall under IT, the CIO, or a technology manager. In such a scenario, you can still request the services, illustrate the cost savings, and deliver results quicker, but it may just not be directly under your control.

Get the right people

I am a big advocate of this. Yes, you need the right people especially when it comes to the roles, skills, and cultural fit required in the organisation. However, you do not need to have a large (and costly) team in place. It is possible to get meaningful results faster and more cost-effectively by using a small, agile, and appropriately skilled team. Essential is the ability to communicate with the business; understand its requirements; understand, analyse, and move the data; and develop and productionise the analytical insights your team produces. Fortunately, this can be done with as few as three people if you jump in and get your hands dirty as well.

Introduce self-service analytics and data democratisation

Long-time readers of my blog will know that I have my reservations about self-service anything. It makes sense to empower data scientists and analytical modellers. But they should be in your team! My concern is about enabling business users to do their reporting and their own analytics especially if the insights are not validated and governed.

Deliver more use cases, faster

This is key to establishing your role and credibility quickly. But you must balance delivering successfully against taking on too many projects at the same time. It is therefore important to align with the company’s crucial business priorities and ensure that the insights you deliver serve the needs of those initiatives.

An amazing (in terms of analytical wow-ness) but ‘useless’ (in terms of business benefit and value realisation) analytical insight is not going to do much to establish your credibility, especially in the initial stages of a new role.

Demonstrate value quickly

For any analytics initiative, it is especially important to always analyse and report on value realisation. Analytical projects often get questioned about cost-effectiveness and whether the money was well spent. Proactively demonstrating the realisation of value not only eliminates those questions but also establishes your credibility.

Unpacking Gartner’s latest tech trends (12 December 2021)


It is that time of year again when I like to #trendspot for the forthcoming year. And I’m always interested to read what Gartner predicts, so it was with great interest that I read their article titled Gartner Identifies the Top Strategic Technology Trends for 2022. My initial thought was, “Wow, that is a lot of deep stuff for an organisation to think about going into 2022, especially with everything else going on!” But then my eye fell on the line in the conclusion which reads: “This year’s top strategic technology trends highlight those trends that will drive significant disruption and opportunity over the next five to 10 years.” That’s more like it! So, with that context in mind, below I’ve shared my views on the trends identified by Gartner.

Artificial intelligence

Gartner identified two artificial intelligence (AI)-related topics – generative AI and AI engineering. The former centres on machine learning methods that learn about content or objects from data and use this to generate new, original, realistic artefacts. For its part, AI engineering is an integrated approach for operationalising AI models – effectively putting AI solutions into production to realise the value they were developed for.

Even though generative AI is interesting, the challenge is that it can potentially be misused for scams, fraud, political disinformation, and forged identities.

It is AI engineering that really excites me. It can be applied to a wider range of solutions, including advanced analytics. Too often we see amazing analytical and AI solutions developed and evaluated, with potentially impressive results. However, businesses then fail to put these solutions into production by integrating them into their operational and business processes.

Data fabric

Out of all the trends, this is the one I am most enthusiastic about. According to Gartner, data fabric is about the flexible, resilient integration of data across platforms and business users. It has emerged to simplify an organisation’s data integration infrastructure and create a scalable architecture. This reduces the technical debt seen in most data and analytics teams due to rising integration challenges.

We all know how complex data management, data governance, and data integration can become over vastly differing technologies. This is even more so the case when these are managed by different vendors across siloed business lines. For me, data fabric must be on top of the priority list for any business.

Autonomic systems; Composable applications; Hyperautomation; and Total Experience

Several of the technologies listed by Gartner are focused on putting more adaptable solutions in place in a much shorter timeframe using a variety of automation techniques. This highlights how businesses can no longer wait for solutions delivered through year-long analysis, development, testing, and implementation cycles.

Of course, it might be relatively easy to reduce the technical time; the stakeholder time perhaps less so. Getting business users across a large, siloed organisation to agree on priorities, requirements, data standards, governance, security, and privacy can be time-consuming. Even just getting the right people around a virtual conference table is challenging. Now add Total Experience to the mix, where we want to improve the experience across customers, employees, business managers, providers, and other stakeholders, and it becomes clear that the most significant obstacle remains on the people side of things.

Decision intelligence

The way Gartner describes decision intelligence does not make it seem like a new technology. Rather, it is a rewording of existing approaches supported by technology. It makes me think back to scorecards, dashboards, alerts, early-warning predictions, data visualisation, and the other approaches and technologies used in this field.

I feel that technology can certainly help in the decision-making process, but it only addresses one side of the coin. The other aspect that needs attention is the psychology of decision-making. Different people have vastly different approaches to, and styles of, decision-making. Add to that the group dynamic often found in boardrooms, and you have a dream project for any business psychologist. It will be some time before technology is intelligent enough to assist in that arena.

That’s a wrap

I did not cover all the topics mentioned by Gartner, such as privacy and security technologies, as my focus is more on the data side.

It will be interesting to watch the developments over the next few years to see how these technologies evolve and become adopted by organisations. Of course, as the famous saying goes, nothing is constant but change. We will no doubt see this list change and evolve over the years.

The 4 big steps to become data-driven (25 November 2021)


Readers of this blog are no strangers to me discussing the importance of becoming a data-driven organisation or the steps required to become one. However, a Forbes article really brings this home by discussing the four big aspects this entails.

It all starts with a data culture. This is the collective behaviour and beliefs of people in how they use – or do not use – data for decision-making.

“To make sense of data culture, we need to understand how it fits into the overall corporate culture.”

Given the opportunity to turn information into a competitive advantage, businesses are no longer content being just data aware. Instead, they aspire to become data leading. But bringing about this change at a cultural level is difficult and time-consuming. After all, data is not just something that is purely technical but a component that is integral to the success of any modern organisation.

The four big elements highlighted in the Forbes article encompass the following:

#1 Executive leadership owns and drives the use of data

I have seen so many organisations where the data stewards and data-lovers try to push the data culture and data governance agenda from the bottom up. But until someone at board level grasps it and runs with it, those are almost guerrilla-like efforts in the trenches. While good, they are simply not as effective as spreading the message from the top. The article highlights a key point: the executive does not merely sponsor the data initiatives but takes ownership of them.

#2 Data champions break silos between teams and promote collaboration

While we do talk informally about data champions, I have yet to come across a business that uses the term. That aside, the point is that these data stewards are crucial to spreading the message, and to doing it well. Unfortunately, most of them have ‘day jobs’ as product owners, line managers, and so on. If they are to gain the impetus to grow the data message, the business must formalise those aspects of their jobs and allocate resources accordingly.

#3 Data is trusted, easily accessible, and freely shared

This is one of those points we could write books on. At a fundamental level, data trust is built by reporting and then managing data quality issues. I can also throw in my old hobbyhorse here: data that is properly catalogued is generally easier to find, access, and share.

#4 Data literacy is considered a critical skill for every role

This is of course a point I elaborated on recently. Writing the use of data into job descriptions and allocating appropriate KPIs is key to getting this done. The Forbes article puts it well:

“Data literacy is not relegated to just data and analytics teams. With a common language for data, people across business and technology teams can freely exchange ideas in a manner that is enabling rather than inhibiting.”

We need more businesspeople to understand the data points that are used to manage their part of the business and appreciate and contribute to the value it offers.

To summarise

The Forbes article concludes with the following:

“Often, organisations get intimidated by the magnitude of change needed and the subtle behavioural aspects that must shift. The key thing is to start small, secure easy wins, and continue building momentum over time.”

Overall, this is a great article to introduce the topic to the executive. So, if you are unsure of how to approach transforming your company into a data-driven one, pass it around.

Making data FAIR (24 October 2021)


Last month, I examined the importance of improving data literacy and briefly discussed five strategies organisations can employ to help achieve this. For my October piece, I want to build on these concepts and turn my attention to what makes data FAIR (findable, accessible, interoperable, reusable).

I receive a significant number of unsolicited emails advertising everything from developers and testers (which I have never had any need of) to all kinds of weird and wonderful products and consulting services. One particular message, calling for making data FAIR, piqued my interest. As we know, life is not fair, so I simply had to click on the link.

This particular post was based on a 2016 article published in Scientific Data titled ‘The FAIR Guiding Principles for scientific data management and stewardship’. At the time, this was considered a call-to-action and roadmap for better scientific data management. Fast forward to the present day, and we now have the FAIR principles that are equally applicable to all the myriad kinds of data we must deal with daily.

The concept of FAIR data stems from trying to address the challenges identified in the scientific research community on how best to build on top of existing knowledge and securely collaborate on research data.

So, what does FAIR entail? You can read the linked post to understand the positioning of the author, and below I’ve provided some insights based on my own experiences.

Findable

Those who regularly read my blogs know that I have often posted about the importance of metadata, catalogues, and dictionaries. The FAIR post also highlights how important these concepts are. In practice, this is not difficult to implement. For instance, the company I currently work for has a search function that links to some of its catalogued systems. This makes it easy and convenient to find the data you are looking for.

Accessible

The FAIR post goes into quite a bit of technical detail on matters concerning access protocols and so on. Of course, these are important. But from a business reader’s perspective, these hold little value. Accessibility talks to users having well-documented, easy-to-use, intuitive tools to access the data and report and investigate it in a natural way. I have often blogged about the importance of data visualisation tools. Additionally, a company can consider adding an abstraction/business layer to make the data even more accessible to non-technical business users.

Interoperable

This is a very important part of the process. There are very few companies who only have a single system of record. So, the requirement is that data must be able to move between systems that often have different physical representations of that same data – hence the need for interoperability. What I like about the FAIR post is that the author also indicates that metadata must be equally interoperable. In other words, the metadata describing the data must just as easily be able to flow with the data, wherever it goes.
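
As a simple sketch of what that could look like, the payload below carries its own schema and provenance so that the description travels with the data. The field names follow no particular standard; they are purely illustrative.

```python
import json

# A sketch of interoperable metadata: the payload carries its own schema
# and provenance, so the description travels with the data between
# systems. The field names follow no particular standard.
package = {
    "metadata": {
        "schema": {"policy_id": "string", "premium": "decimal(10,2)"},
        "source_system": "mainframe.POLMAST",
        "extracted_at": "2021-10-01T06:00:00Z",
        "steward": "Group Data Office",
    },
    "data": [
        {"policy_id": "P-001", "premium": "1250.00"},
    ],
}

wire = json.dumps(package)           # what actually moves between systems
received = json.loads(wire)
print(received["metadata"]["source_system"], "->", len(received["data"]), "row(s)")
```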

Reusable

Data is most valuable when it is fit for purpose. This implies that it must be reusable. A data value that is only used in a single system is of value only to a handful of users of that system. However, if the data flows on to reports, dashboards, and analytics, its value increases significantly. The more the data is used, the more valuable it becomes. The FAIR post also mentions metadata as part of the reusable characteristic. Good metadata not only makes good data re-use possible, but also gives a much better payback on the costs and efforts of metadata management.

While all these components might seem straightforward, ensuring all of them are present throughout the data process can prove challenging. But having an awareness of them can certainly help keep them top of mind.

Improving data literacy remains important (28 September 2021)


We all know the importance of accessing ‘clean’ data and the hidden insights it contains. But despite this, in my experience, many companies are still not comfortable with the data literacy skills of their employees. Often, this comes down to not understanding where to begin when it comes to boosting data literacy across all the required levels of the organisation.

I recently came across this insightful article on Enterprise Talk where the author examines five strategies to enhance data literacy at a business:

  • Empower those on the edge
  • Make a commitment from the top down
  • Provide hard and soft skills training to all employees
  • Reward employees
  • Encourage IT collaboration

My sense is that the topic of data literacy is such an important one, yet it doesn’t always get the attention it deserves. These identified strategies can be very useful to a company that wants to boost data literacy – and so I couldn’t pass up the opportunity to express my additional views.

Personally, I am not fond of the term ‘on the edge’. Rather, I feel that it should be viewed as creating something that is fit for purpose. Employees must be empowered to get access to the information they need to perform their jobs, in a format they can understand and relate to. It comes down to making data useable for them.

But data literacy and the drive to become a data-driven organisation must filter down from the top. Yes, data stewards can start sowing the necessary seeds at all levels of the organisation. However, to get it actioned throughout the business – and to be taken more seriously – the decision must come from the leadership.

Training will always be important. Even if data literacy skills are at an acceptable standard, they must continually evolve as technologies change. The only way to spread the message around training is for people to start using and viewing data as an asset. Training must also incorporate data visualisation, data-based storytelling, the interpretation of data and graphs, and approaches to data quality management.

From a reward perspective, there are many KPIs related to data quality management that can be used to gauge success. Employees should be rewarded for achieving those set KPIs, and then the bar can be raised again.

IT collaboration must be encouraged. But there is also a responsibility on IT, BI, and analytics teams to ensure they understand the data, information, and insight requirements of the business. In turn, they must align their activities and projects to those requirements and the related business priorities. In other words, IT and data science must collaborate with business. It is not a one-way initiative.

A Centre of Excellence (CoE) for data literacy skills development plays a useful role if implemented and managed correctly. However, other forums can also be established, managed, and encouraged. One that comes to mind is a Data Governance Council, which steers the work and involvement of the data stewards throughout the business. In some instances, a CoE can be too inwardly focused. This is where a council can provide the outward focus required to spread the message and the management of data throughout the business.

These five very useful strategies should not be considered once-off initiatives. Instead, a company can embrace all of them, continually reviewing and adapting them to suit the current needs of the organisation and to keep building data literacy.

Debunking data analysis myths – Part 2 (16 August 2021)


There are many myths surrounding data analysis, especially when it comes to identifying the best ways to drive business growth and competitive advantage. Last month, I mentioned this Forbes article that examines 14 myths in this regard, with my discussion covering six of them. In my blog article this month, I will examine the remaining eight. Let me jump right in and start with myth number 7 on the list.

7. Survey responses are fully reliable data

Survey data can certainly provide better insights into our customers’ perceptions of our business, the support provided, available products, and so on. However, this can only happen if the survey has been designed correctly. There is a science to designing it in such a way that the submitted data is processable and analysable. So, even though opinion and free-text fields are nice for capturing customer views, they can be a nightmare to extract quantifiable information from.

8. All data is good data

9. Data must be 100% accurate to be useful

In my view, these two myths are linked. All data is not necessarily good data, and what one department considers good might not be applicable to another department in the business. There are a myriad of ways to measure the quality of data, including accuracy, completeness, timeliness, and whether it is fit for purpose. The adage that ‘you cannot manage what you do not measure’ applies to data as well.
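
To make the measurement point concrete, here is a toy sketch that scores two of those dimensions, completeness and timeliness, over a handful of invented records; the rules and thresholds are arbitrary.

```python
from datetime import date

# Toy scores for two of the quality dimensions above; the rules, fields,
# and thresholds are invented purely for illustration.
records = [
    {"id": 1, "email": "a@x.com", "updated": date(2021, 8, 1)},
    {"id": 2, "email": None,      "updated": date(2020, 1, 5)},
    {"id": 3, "email": "c@x.com", "updated": date(2021, 7, 20)},
]
as_of = date(2021, 8, 16)

completeness = sum(r["email"] is not None for r in records) / len(records)
timeliness = sum((as_of - r["updated"]).days <= 90 for r in records) / len(records)
print(f"completeness {completeness:.0%}, timeliness {timeliness:.0%}")
```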

10. Once you’ve set up the model, you’re done

This is a myth that has been busted millions of times over. Analytical models are developed to improve business processes and decision-making. These improvements will affect the values of the variables used in the model. Over time, the insights of the model will diminish in value as the variables get affected by changing business processes. The models must therefore be recalibrated to point out new variables to focus on.
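
One simple way to spot that moment, sketched below with synthetic numbers: compare a variable’s recent distribution against the one the model was trained on. The shift measure and threshold are my own arbitrary choices, not a prescribed method.

```python
import numpy as np

# Toy drift check: compare a variable's recent distribution against the
# one the model was trained on; a large shift signals recalibration is
# due. The 0.5 standard-deviation threshold is arbitrary.
rng = np.random.default_rng(1)
train = rng.normal(loc=100, scale=10, size=1000)    # variable at training time
recent = rng.normal(loc=115, scale=10, size=1000)   # the same variable today

shift = abs(recent.mean() - train.mean()) / train.std()
if shift > 0.5:
    print(f"drift detected (shift of {shift:.2f} sd) - recalibrate the model")
```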

11. The questions to be answered must be settled up front

I believe that there are two aspects to this. Firstly, the question must be phrased correctly to get the analytical model or reporting component to answer it effectively. If the question is vaguely defined, then the insights will be vague as well. Secondly, discovery can also be done through exploratory analytics and visualisations. As the name suggests, a business must have a more open mindset to the data in this instance and let it show the ‘stories’ without applying preconceived notions to it.

12. Data analytics is a destination

Even though an organisation may have a specific question that needs answering, once that is done and the changes implemented, there will likely be more questions that must be addressed. Additionally, many companies do not begin with a destination in mind. It is more a case of starting and seeing where they end up. In this case, my view is that data exploration truly becomes a journey.

13. Knowing ‘the numbers’ is enough

While a data culture and mindset are important, interpretation and understanding are also required. Approaches like data visualisation become the ‘language of data’, putting it in a form the company understands.

14. Our data is unique

While I agree with the author that the business should focus on the insights that would differentiate it, there are organisations (admittedly in the minority) that are different to those ‘standard’ companies. For instance, at the company I work for, we were at one stage trying to map standard insurance data models to businesses in both South Africa and Australia. However, the business models were so different that the data models did not fit. The same applies to analytical models. Sometimes they fit, but sometimes the data of an organisation is truly unique. It is therefore important to detect the difference and not force a square peg into a round hole.

As more businesses embark on digital transformation journeys, data analysis will continue to show its importance in leveraging insights that can be used by a business to create a sustainable competitive advantage. Understanding the role of data analysis and debunking the myths that exist become important to ensuring its success.

Debunking data analytics myths – Part 1 (22 July 2021)


Experts in the data space know the importance of data and its analysis in helping drive business growth and competitive advantage. Of course, data analytics is not something that can just be switched on and happen overnight. It does come with challenges, and overcoming those challenges is instrumental to a project’s success.

Further complicating matters are the many myths that surround this strategic function. I recently came across this Forbes article that highlights 14 of them. I found these to be interesting and the insights valuable, and so wanted to share my further views. I will examine the first six in this article and finish up next month with the remaining eight.

1. The data will confirm what I already know

On the one hand, this is a good thing: the data may confirm what the business already thinks it knows, if it needs validation of that ‘gut feel’. However, when it comes to data discovery, predictive analytics, and other forms of analytics that generate new insights, the point is gaining access to insights that were not previously known and that could improve business processes and strategy.

2. We can’t do this without a data scientist

The article rightly notes that there are plenty of tools and solutions available to perform data analysis without requiring a specialist – and this is certainly valid. Yet, I do believe that there comes a tipping point where a company cannot get any more insights from off-the-shelf, pre-configured analytics, visualisations and reporting solutions. In my experience, this is especially the case if data is spread across multiple systems where it is not always possible to integrate it without the expertise of a data scientist. Ultimately, to achieve a higher level of data insight a business does require someone with those data scientist skills.

3. Following where the data leads is scary

If analytics generates ‘scary’ or impossible-to-achieve insights, then my view is that it has not been performed correctly. ‘Good’ analytics specifies that analytical insights should be actionable and related to the business strategy – and that should not be scary. While some analytical insights do affect the business strategy, if one demands too significant a change, it simply will not be considered actionable. An absurd example: analytics showing a retailer that it will make more profit per unit if it starts selling motor vehicles or real estate instead of food and other fast-moving consumer goods.

4. Getting insights from data is simple

What a business gets out of its data analytics equates to what it puts in. Simple reporting and visualisation will yield simple results. Advanced analytics on complex data on the other hand can be difficult and will require skilled resources to get it right and to extract the most value for the business.

5. The more data, the better

As the article states, more data is not always better – something I strongly agree with. There seems to be a massive drive in the industry – maybe fuelled by the cloud storage vendors – to collect as much data as possible these days. However, this does not necessarily contribute to good insights. Prioritisation, relevance, and the quality of data are key.

6. Data equals knowledge

I believe that there is a big gap from data to information to insights to knowledge. Additionally, there is the matter of data/information/insight maturity in the organisation to consider. A company that does not have a good grip on managing its data can hardly be considered able to deal with advanced insights and the knowledge of how best to apply them.

Join me in part 2 next month when I will examine the rest of these identified myths associated with data analytics.

How to deliver high quality data (25 June 2021)


As many of you are aware by now, data quality is something I am passionate about. After all, without ensuring the quality of its data, no company can achieve the level of decision-making needed to improve operations, enhance the customer experience, and drive business growth.

Recently, I came across an interesting industry article that examines what companies can do to consistently deliver high quality data. The piece, 7 Steps for Consistently Delivering High-Quality Data, provides a sound and very beneficial approach for companies to follow. The steps also sparked my thinking on a few other details that can be considered, based on my experience, and so I wanted to share my views on each of these.  

1. Get the whole company involved

While whole-company involvement is absolutely crucial, in my experience it is also one of the most problematic steps. I have seen clients grapple with the dilemma of how to establish a sound data governance function across a complex business structure, and then still empower it to make crucial decisions that can be implemented and enforced by the data stewards across the various business lines. This is where ‘data champions’ (as the author suggests) become vital to help drive this complex, often politically charged piece of work inside the company.

2. Determine which data is necessary

This makes perfect sense. In a complex business environment, companies simply cannot tackle all their data at the same time. Prioritisation is essential. What it comes down to is having an honest look at the business benefit versus the cost of bad data.

3. Make an honest assessment of your database

While the author refers to the database in the singular, I believe this assessment must cover ALL databases. It means companies must perform data profiling across every database in the organisation. Let’s be honest, very few companies have only one database in place. Most have the same data duplicated in some form across multiple environments, all with varying degrees of consistency and accuracy.
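
As a starting point, profiling does not need heavy tooling. The sketch below runs a few basic checks with pandas; in practice you would run the same checks per table, per database, and compare the results across environments. The sample data is invented.

```python
import pandas as pd

# A minimal profiling sketch: run the same checks per table, per
# database, and compare results across environments. Data is invented.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "country":     ["ZA", "ZA", "AU", "au"],   # inconsistent casing to catch
})

profile = pd.DataFrame({
    "nulls": df.isna().sum(),
    "distinct": df.nunique(),
})
print(profile)
print("duplicate customer_id values:", int(df["customer_id"].duplicated().sum()))
```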

6. Develop a data security model

Within the broader approach of ensuring data quality, I would recommend that this step (listed as 6) be moved higher up in the process. Understanding the security, especially around the privacy requirements in a very regulated world (think GDPR and POPIA), is just as important as conducting data profiling. A business must draw up a future state, as well as a remediation plan to reach that future state.

4. Create a data backup plan

Of course, this is a crucial step, though it is not a singular process. Data remediation is often a drawn-out process that must dovetail with development and test releases going into production. And for each of these steps, a data backup is required in order to roll back in the event of a problem.

5. Clean up your data

In reality, I believe that there are two parts to this step. Firstly, a company must clean up the mess that already exists – in other words, data remediation. However, it must also make the required changes to applications, interfaces, user behaviour, and the like to ensure those bad data practices do not continue to propagate more bad data into the environment. This is a never-ending process and cannot simply be ticked off the list.

7. Monitor and maintain data

Yes, this is an ongoing process, especially as the big priority items get addressed. Companies must keep moving down the list of priorities.

While data quality can absolutely be achieved, the process around it never really stops. Companies that follow such steps and place data quality at the centre of their data journeys will reap rewards well into the future.  
