Making data FAIR


Last month, I examined the importance of improving data literacy and briefly discussed five strategies organisations can employ to help achieve this. For my October piece, I want to build on these concepts and turn my attention to what makes data FAIR (findable, accessible, interoperable, reusable).

I receive a significant number of unsolicited emails advertising everything from developers and testers (which I have never had any need of) to all kinds of weird and wonderful products and consulting services. One message that piqued my interest called for making data FAIR. As we know, life is not fair, so I simply had to click the link.

This particular post was based on a 2016 article published in Scientific Data titled ‘The FAIR Guiding Principles for scientific data management and stewardship’. At the time, it was considered a call to action and a roadmap for better scientific data management. Fast forward to the present day, and the FAIR principles are equally applicable to all the myriad kinds of data we must deal with daily.

The concept of FAIR data stems from trying to address the challenges identified in the scientific research community on how best to build on top of existing knowledge and securely collaborate on research data.

So, what does FAIR entail? You can read the linked post to understand the positioning of the author, and below I’ve provided some insights based on my own experiences.


Findable

Those who regularly read my blogs know that I have often posted about the importance of metadata, catalogues, and dictionaries. The FAIR post also highlights how important these concepts are. In practice, this is not difficult to implement. For instance, the company I currently work for has a search function that links to some of its catalogued systems, making it easy and convenient to find the data you are looking for.
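To make the idea concrete, here is a minimal sketch of what such a catalogue search could look like. This is not the search function described above; it is an illustrative toy, and all dataset names, fields, and tags are invented.

```python
# A toy data catalogue: each entry is metadata describing a dataset.
# Names, owners, and tags below are hypothetical examples.
catalogue = [
    {"name": "customer_master",
     "description": "Golden record of customer details",
     "owner": "CRM team",
     "tags": ["customer", "master data"]},
    {"name": "sales_2023",
     "description": "Daily sales transactions for 2023",
     "owner": "Finance",
     "tags": ["sales", "transactions"]},
]

def find_datasets(keyword: str) -> list[str]:
    """Return names of datasets whose metadata mentions the keyword."""
    keyword = keyword.lower()
    return [
        entry["name"]
        for entry in catalogue
        if keyword in entry["description"].lower()
        or any(keyword in tag for tag in entry["tags"])
    ]

print(find_datasets("customer"))  # ['customer_master']
```

The point is simply that findability is a metadata problem: the search never touches the data itself, only the catalogued descriptions of it.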


Accessible

The FAIR post goes into quite a bit of technical detail on matters such as access protocols. These are important, of course, but they hold little value for a business reader. Accessibility means users have well-documented, easy-to-use, intuitive tools to access the data and to report on and investigate it in a natural way. I have often blogged about the importance of data visualisation tools. Additionally, a company can consider adding an abstraction/business layer to make the data even more accessible to non-technical business users.
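The abstraction/business layer idea can be sketched very simply: a mapping from business-friendly field names to the physical column names, so that non-technical users never see the raw schema. The column names and mapping below are purely illustrative assumptions, not any real system's schema.

```python
# Hypothetical business layer: friendly labels mapped to physical columns.
BUSINESS_VIEW = {
    "Customer Name": "cust_nm_txt",
    "Order Date": "ord_dt",
    "Order Value": "ord_amt",
}

def to_business_view(raw_row: dict) -> dict:
    """Re-key a raw record using business-friendly field names."""
    return {label: raw_row[col] for label, col in BUSINESS_VIEW.items()}

raw = {"cust_nm_txt": "Acme Ltd", "ord_dt": "2023-10-01", "ord_amt": 1250.0}
print(to_business_view(raw))
# {'Customer Name': 'Acme Ltd', 'Order Date': '2023-10-01', 'Order Value': 1250.0}
```

In practice this layer usually lives in the BI or semantic-modelling tool rather than in application code, but the principle is the same: the translation happens once, centrally, instead of in every report.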


Interoperable

This is a very important part of the process. Very few companies have only a single system of record, so data must be able to move between systems that often hold different physical representations of that same data – hence the need for interoperability. What I like about the FAIR post is that the author also indicates that metadata must be equally interoperable: the metadata describing the data must flow with the data just as easily, wherever it goes.
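One simple way to picture metadata travelling with data is to bundle the two into a single portable document, such as JSON, before handing it to the next system. The sketch below assumes invented field names and sources; it is an illustration of the principle, not a real integration pattern from the post.

```python
import json

def package_record(data: dict, metadata: dict) -> str:
    """Bundle a record and its describing metadata into one JSON document,
    so the description travels with the data between systems."""
    return json.dumps({"metadata": metadata, "data": data})

packaged = package_record(
    data={"customer_id": 42, "balance": 100.0},
    metadata={"source": "billing_system",
              "balance_currency": "ZAR",
              "extracted": "2023-10-01"},
)

# The receiving system unpacks both the data and its context together.
received = json.loads(packaged)
print(received["metadata"]["balance_currency"])  # ZAR
```

Without the metadata half of the bundle, the receiving system would have a balance of 100.0 with no idea of its currency or vintage – which is exactly the failure mode interoperable metadata prevents.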


Reusable

Data is most valuable when it is fit for purpose, and that implies it must be reusable. A data value used in only a single system is of value to just a handful of that system's users. However, if the data flows on to reports, dashboards, and analytics, its value increases significantly. The more the data is used, the more valuable it becomes. The FAIR post also mentions metadata as part of the reusable characteristic: good metadata not only makes good data reuse possible, it also gives a much better payback on the costs and effort of metadata management.

While all these components might seem straightforward, ensuring they are all present throughout the data process can prove challenging. But an awareness of them can certainly help keep them top of mind.
