Understanding the benefits and challenges of self-service BI


The concept of ‘self-service business intelligence (BI)’ started gaining momentum in the early 2000s. More than two decades later, a survey by Yellowfin has found that the majority of respondents (61%) say that less than 20% of their business users have access to self-service BI tools. Perhaps more concerning, 58% of those surveyed said that less than 20% of the people who do have access to self-service BI actually use the tools. In this blog, the first of a two-part series on the topic of self-service BI, I take a closer look at this interesting and challenging area of the wider data field and share my views.

Read the rest of this entry »

Data and Analytics in Healthcare – conference review


I recently had the privilege to attend the Data and Analytics in Healthcare conference hosted by Corinium Intelligence in Australia. Following this, I wanted to use my blog piece this month to discuss some of the key lessons and insights shared during the event. I’ll jump straight in.

Data governance and ethics

Of course, it is easy to get carried away by the hype around generative AI, machine learning (ML), large language models (LLMs), and so on. Even so, it was sobering to see several presentations and panel discussions still focusing on data governance and ethics related to these topics. But given that we are talking about healthcare data, this should not come as too much of a surprise.

A useful framework in this area is the Australian Digital Health Capability Framework and Quality in Data. This looks to align with existing industry-specific frameworks, ensuring that all health and care workers are empowered with digital capabilities. Concern was expressed that as much as 60% of AI and ML tools, especially cloud-based ones, share healthcare data with third parties without consent.

Additionally, delegates heard that data governance and adherence to ethics do slow the adoption of insights. There were examples where research results were not implemented for years due to the number of frameworks, data governance, privacy, and other controls that had to be followed. It was mentioned that legislation is often a step behind ethics, resulting in the need to reword policies down the line.

On the positive side, the sharing of ‘de-identified’ health data for research and outcome improvement was unanimously supported. One of the presenters compared this to sharing an organ for transplant. Why would anyone not want their de-identified data to be shared if it can improve the health outcomes of others facing similar circumstances?

Data management

Another area that was covered, which is close to my heart, was that of data sourcing and data management. It was stressed that to obtain advanced insights from data, it must be the right data, of high quality, and available in a processable format. The amount of unstructured data that is hard to mine and interpret in healthcare is staggering. In short, you need a solid data foundation if you want to use AI and ML effectively. This comes down to having data that is scalable, understandable, accessible, and fit-for-purpose.

One of the biggest challenges in healthcare data remains data linkage: joining the dots between related records in different datasets, originating from different systems that are often managed by different organisations. An interesting observation was made about bias in healthcare data: we mostly collect data about people who are sick or ill, and hardly any data is collected about healthy people in this context. This makes it difficult to determine what target populations for treatments, or comparative control groups, should look like. Making this more difficult is the fact that a lot of data is collected and stored but never used to its full potential.
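To make the linkage challenge concrete, below is a minimal sketch of deterministic record linkage, assuming two hypothetical de-identified extracts (hospital admissions and GP visits). The dataset names, fields, and matching rules are illustrative only, not a description of any system discussed at the event.

```python
import pandas as pd

# Hypothetical de-identified extracts from two different source systems.
admissions = pd.DataFrame({
    "patient_hash": ["a1f3", "b7c9", "c2d8"],  # salted hash standing in for a real identifier
    "birth_year": [1956, 1972, 1988],
    "postcode": ["3000", "4000", "5000"],
    "admission_date": ["2024-02-01", "2024-03-15", "2024-04-20"],
})

gp_visits = pd.DataFrame({
    "patient_hash": ["a1f3", "b7c9", "e4f1"],
    "birth_year": [1956, 1972, 1990],
    "postcode": ["3000", "4000", "6000"],
    "visit_date": ["2024-01-10", "2024-02-28", "2024-03-05"],
})

# Deterministic linkage: exact match on the shared hashed identifier plus
# corroborating attributes, to reduce the chance of false links.
linked = admissions.merge(
    gp_visits,
    on=["patient_hash", "birth_year", "postcode"],
    how="inner",
)
print(linked)
```

In practice, no reliable shared identifier exists across organisations in many cases, so probabilistic linkage techniques are needed instead, which is exactly why this remains such a hard problem.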

Process and methodology

There were several good sessions centred on the processes and methodology to follow when adopting analytics, especially AI, ML, and LLMs, in healthcare. While these were too detailed to cover here, the general sentiment was that one needs to be more careful and thorough about the design, evaluation, and interpretation of results. Especially for rare cases and conditions, do we have training data of sufficient volume and quality for advanced models? Is the technology mature enough? And do we have proper processes for ongoing monitoring and improvement?

In other industries, people may get annoyed or even switch providers when an ill-targeted marketing campaign is fired off at them or an inappropriate product is recommended. In healthcare, these ‘mistakes’ can have far more serious implications, even life-threatening ones.

Building capacity and capability

Other sessions had interesting discussions on developments around capability and capacity building. My impression was that healthcare organisations in other countries are also scrambling for resources and funding. Key approaches to help overcome this include partnering, collaboration, and innovation across organisations and teams. The adage of starting small and building on demonstrated ROI came through as well.

Additionally, culture is key. It was mentioned that literacy and education take as much as 70% of the effort of adopting new technology and insights. The mindset shift that must happen at decision-making levels was also covered: insights and data have to have a seat at the table.

An interesting study showed that both text analytics and AI did only an okay job of coding and classifying electronic medical records, with no huge difference between the two. So, while it takes expert coders hours to apply coding and classification to cases and diagnoses, there is a massive risk in replacing that expertise, insight, and interpretation with an automated process. You simply cannot automate the acquisition of health knowledge and its interpretation.

The general message that came across was that AI and ML are effective in reducing the administrative burden on clinicians and allied health staff. But despite some amazing (isolated) research outcomes, it is still too risky and unethical to have technology make or influence diagnosis and treatment. However, healthcare is overloaded with administrative processes and redundant data capture that can be automated to free up clinicians and allied health staff to focus on what they are trained for and do best.

Conclusion

In closing, I didn’t review the specific AI, ML, or LLM case studies. They were very interesting, relevant, and well presented, with valuable lessons learnt, but there is simply too much detail to cover in this post.

It was another great and relevant event put on by the Corinium team. I walked away with many notes and some key aspects to incorporate in my strategic and operational plans going forward. I learnt about a few new concepts and made a few new connections too. I hope you find the above brief insights shared of value.

Of course, a nice venue, proper barista-made coffee and wholesome food, and networking drinks afterwards rounded it off to make it an enjoyable experience. All in all, it was a great and insightful day!

The importance of data lineage


Do you know where your food comes from? Did the farmer use pesticides? Did the transport company spray preservative chemicals over your food? Did they keep it appropriately refrigerated? Would you eat food from sources you don’t trust? The same applies to data. Do you know what the lifecycle of your data entails? Was it manually entered? What validations were applied? Through how many transactional systems did it go, and was it transformed along the way? Would you make decisions based on data you don’t trust? This is where data lineage comes in.

Read the rest of this entry »

Data quality a priority for 2024


Despite the hype surrounding generative Artificial Intelligence (GenAI), I am finding in my reading that many industry analysts predict that data quality (one of my favourite topics) will remain a key priority this year, especially when it comes to data management and governance.

Read the rest of this entry »

Crystal-ball gazing for 2024


We have reached that time of year when it’s always interesting to delve into what the analysts see in their crystal balls: the trends and technologies to keep an eye on for the new year. I have reviewed several industry articles on the subject, and let’s just say that if all these predictions come true, we will be in for quite a ride in 2024!

Below are just some of the ones related to BI and analytics that I found quite interesting and wanted to share with you.

Generative AI

This is a term that everyone is very familiar with by now. In a recent Forbes piece, Bernard Marr writes that Generative AI is going to make a huge impact by taking care of most of our menial work. This includes ‘obtaining information, scheduling, managing compliance, organising ideas, structuring projects.’ Of course, he acknowledges that challenges around ethics and regulation must still be solved.

In a Gartner review, Ava McCartney reckons that ‘by 2026, generative AI will significantly alter 70% of the design and development effort for new web applications and mobile apps.’ While certainly plausible, I’d like to see what the figure is for BI and analytics. In the data sourcing and data engineering space, we are still doing a lot of manual labour that could be automated.

Imagine you could just say: “Get me the data from the CRM and the billing systems and integrate them on Customer ID!” and, voilà, you have data from 60 tables integrated and ready for analysis and model development. “Now tell me which customers are about to churn and recommend a campaign that will entice them to stay.” Ah, we can dream. Gartner places Generative AI, together with Platform Engineering, AI-Augmented Development, Industry Cloud Platforms, Intelligent Applications, and Sustainable Technology, under a banner called ‘Rise of the builders’. McCartney believes these technologies will boost the creativity of the communities involved in this type of work.
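As a rough illustration of the manual work behind that integration dream, here is a minimal sketch assuming hypothetical CRM and billing extracts keyed on a Customer ID; the file and column names are made up for the example.

```python
import pandas as pd

# Hypothetical extracts from the CRM and billing systems (names are illustrative).
crm = pd.read_csv("crm_customers.csv")         # customer_id, segment, join_date, ...
billing = pd.read_csv("billing_invoices.csv")  # customer_id, invoice_date, amount, ...

# One of the many joins a data engineer would hand-craft across the ~60 tables
# mentioned above: integrate the two extracts on the shared Customer ID.
customer_view = crm.merge(billing, on="customer_id", how="left")

# A simple per-customer aggregate that a downstream churn model might consume.
total_spend = (
    customer_view.groupby("customer_id")["amount"]
    .sum()
    .rename("total_spend")
)
print(total_spend.head())
```

Multiply that by dozens of tables, each with its own keys, quality quirks, and transformations, and the appeal of having generative AI draft this plumbing becomes obvious.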

Developer experience (DevX)

In a sister Gartner paper, Lori Perry writes that ‘the suite of technologies under this theme focuses on attracting and retaining top engineering talent by supporting interactions between developers and the tools, platforms, processes, and people they work with.’ I am all for technologies that will make our data engineers’ work more pleasant. But while powerful, I wouldn’t call the user experience of data pipeline technologies enjoyable and highly productive yet. Perry cites the Value Stream Management Platform (VSMP) as an example of DevX technology that seeks to optimise end-to-end product delivery and improve business outcomes.

She also explores technologies like AI-augmented software engineering that can help software engineers create, deliver, and maintain applications. Furthermore, API-centric SaaS services could potentially be used as the primary method to access these technologies. There is also GitOps, a closed-loop control system for cloud-native applications, as well as internal developer portals that enable self-service discovery; both will increasingly come into the spotlight.

I’m looking forward to seeing these technologies in action to increase productivity and reduce human error in data management.

Responsible AI

In a review of the Gartner Data & Analytics Summit held in Sydney at the end of July, responsible AI emerged as a trend to watch. I like this positive spin on AI and Machine Learning as it covers many aspects of making positive business decisions and ethical choices when adopting AI. These include adding to business and societal value, reducing risk, and increasing trust, transparency, and accountability. Unfortunately, there are way too many case studies where AI and ML models have come up with ethically unsavoury or unusable insights.

Gartner predicts the concentration of pre-trained AI models among 1% of AI vendors by 2025 will make responsible AI a societal concern. The firm further recommends that ‘organisations adopt a risk-proportional approach to deliver AI value and take caution when applying solutions and models. Seek assurances from vendors to ensure they are managing their risk and compliance obligations, protecting organisations from potential financial loss, legal action and reputational damage.’

Data-centric AI

Another interesting topic in the same review is data-centric AI. This is more data-focussed than traditional AI development, which is mostly based on models, algorithms, and code. Gartner refers to data managed specifically for AI solutions. This includes data synthesis and data labelling, which are employed to solve data-related challenges such as accessibility, volume, privacy, security, complexity, and scope. In my mind, this is not necessarily new technology, but rather a realisation by AI practitioners that aspects related to data governance are equally important in the AI and ML fields.

What will be interesting is to see how the technology and practices are being adapted to function efficiently and effectively in more fast-moving and fast-changing environments. These environments are reliant on working with large volumes of data, and even data that was not sourced from within the organisation. There are some useful data governance and cataloguing platforms out there, but the challenge has always been to make them work productively at scale. I think it will be crucial for data governance systems to apply AI and ML themselves to function more effectively.

Of course, many technologies and trends also focus on privacy and security. While important, they have not been the focus of this post as I wanted to explore data-specific trends.

The uniqueness of modern data quality management


Last month I addressed how data quality is perceived by different specialists inside the organisation. This month, I turn the spotlight onto what makes modern data quality management different from traditional approaches. Edwin Walker’s Data Science Central article ‘Difference between modern and traditional data quality’ provides an excellent starting point.

Read the rest of this entry »

Data quality is in the eye of the beholder


The quality of the data we work with has a significant impact on the quality of the insights we can extrapolate for the business. Following on from my recent mini-series on the evolving data-related roles, it was interesting to come across Edwin Walker’s article on Data Science Central titled: ‘How do different personas in an organisation see data quality?’ Walker has also written on the topic of ‘modern data quality’ in another article available on the same site.

Read the rest of this entry »

The evolving role of the data scientist


This month, I am wrapping up my three-part series on evolving data-related roles by focusing on one of the fastest-changing and most popular roles under discussion in the industry today: the data scientist.

Read the rest of this entry »

More evolving data-related roles


Last month, I discussed the evolving roles of the Chief Data Officer (CDO), Chief Analytics Officer (CAO), and the Data Engineer. In my blog post this month, I will delve deeper into several of the other roles related to data engineering, as well as the evolution of the role of the Business Analyst.

Read the rest of this entry »

Examining the continually evolving data-related roles


In my quest to continuously learn more about data-related roles, I recently came across two very interesting articles suggesting that data-related roles will keep changing throughout the remainder of the year. These pieces share some great insight, and I couldn’t resist the opportunity to add my own views. So, in this first of a two-part blog series, I will examine the Chief Data Officer (CDO) role, the Chief Analytics Officer (CAO) role, and the evolving roles of data engineers.

Read the rest of this entry »
