Why AI’s Success Depends on Making It More Explainable and Conversational

Written by Ciro Donalek

The rise of generative AI has opened new avenues for human-machine collaboration and groundbreaking discoveries. By creating new content from a prompt, whether images, text, audio, video, or other types of data, generative AI has the potential to revolutionize data analysis across many fields.

However, despite being lauded as one of the most beneficial technological advancements in history, AI still hasn’t been widely adopted by organizations. There are three main reasons for this:

Resources: One of the primary barriers to AI adoption is the scarcity of expert users. Data scientists are increasingly in demand as organizations seek to pull meaningful, actionable insights from the vast volumes of data they produce each day. However, the data science talent pool is small, and even when data scientists are hired, they are often juggling too many competing projects to focus on AI adoption.

Trust: While some still fear the doomsday potential of AI, what actually hurts trust in AI is a lack of confidence in the results it generates. Adding to that distrust is a lack of knowledge, even at a high level, of the proposed solutions and their implications. Hallucinations, for example, occur when generative AI bots return made-up information; this issue often stems from an incomplete query or an inaccurate dataset. Without the right data science talent to train the model and encourage adoption, it is challenging to build trust in the technology.

Usability: Many existing AI tools are challenging to use, typically requiring specialized knowledge and expertise in data science. Increased adoption requires user-friendly tools that analysts and other non-data science experts can leverage to gain a genuine understanding of complex, multidimensional, and heterogeneous data.

Democratizing AI with Intelligent Data Exploration

It’s clear that the barriers to widespread AI adoption ultimately stem from the scarcity of data science experts. Companies that wait to hire for these skills risk being left behind in the race to analyze enterprise data for business-changing insights. Fortunately, these problems can be addressed through Intelligent Exploration, Explainable AI (XAI), and Large Language Models (LLMs).

An intelligent data exploration platform, such as the one patented by Virtualitics, leverages XAI, Generative AI, and rich visualizations to guide users through the analysis of complex datasets. Low-code or no-code environments that allow users to log in and immediately begin exploring their data for insights are crucial to the adoption of AI inside the organization. Generative AI technology can be used to:

  • Use embedded AI routines to generate multidimensional visualizations based on available data and contextual information.
  • Deliver key insights in natural language augmented with compelling, AI-generated visualizations.
  • Use LLMs to suggest the next steps in the analysis based on user prompts. These prompts can be specific (“I want to understand what drives sales in summer”) or more open-ended (“Tell me something interesting about my data”).
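To make the last point concrete, here is a deliberately simplified sketch of how a prompt-driven exploration loop might route a user's natural-language request to a suggested analysis step. The keyword rules and suggestion names are hypothetical illustrations, not the Virtualitics implementation; a real platform would delegate this interpretation to an LLM.

```python
# Illustrative sketch only: a toy prompt router for an intelligent
# exploration workflow. A production system would use an LLM to
# interpret the request; these keyword rules are hypothetical.

def suggest_next_step(prompt: str) -> str:
    """Map a natural-language prompt to a suggested analysis action."""
    p = prompt.lower()
    if "drive" in p or "explain" in p:
        # Specific question: look for what influences an outcome.
        return "run feature-importance analysis on the target column"
    if "interesting" in p or "anomal" in p:
        # Open-ended question: surface anomalies and clusters.
        return "run anomaly detection and cluster visualization"
    # Default: start with an overview visualization.
    return "generate a multidimensional overview visualization"

print(suggest_next_step("I want to understand what drives sales in summer"))
print(suggest_next_step("Tell me something interesting about my data"))
```

The point of the sketch is the interaction pattern, not the routing logic: the user states a goal in plain language, and the platform answers with a concrete next analysis step.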

Intelligent exploration leads teams to findings that can be clearly understood, prioritized, and acted upon.

Enabling a Conversational Approach to Data Analysis

Two of the keys to democratizing complex data analysis with Intelligent Exploration are the use of XAI and LLMs.

To be effective, XAI has to strike the right balance between model interpretability and accuracy. Explanations should be context-aware, tailored to the specific analysis being conducted, without compromising the accuracy of the underlying model.
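One common way to explain an accurate model without simplifying it is a model-agnostic technique such as permutation importance: shuffle one feature's values and measure how much the model's error grows. The sketch below uses synthetic data and a stand-in model purely for illustration; it is one possible XAI technique, not the article's platform.

```python
# Hedged sketch: permutation importance, a model-agnostic explanation.
# The dataset and "model" below are synthetic stand-ins.
import random

random.seed(0)

# Synthetic data: the target depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [3.0 * row[0] + 0.5 * row[1] for row in X]

def model(row):
    """Stand-in for any trained black-box model."""
    return 3.0 * row[0] + 0.5 * row[1]

def mse(X, y):
    """Mean squared error of the model on a dataset."""
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Error increase when one feature's values are shuffled."""
    base = mse(X, y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return mse(X_perm, y) - base

for f in range(3):
    print(f"feature {f}: importance {permutation_importance(X, y, f):.3f}")
```

Because the model itself is never altered, accuracy is untouched; the explanation is derived purely from how the model behaves, which is the balance the paragraph above describes.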

Additionally, XAI systems must generate explanations suited to different audiences, ones that don’t require a data expert to interpret. A few years ago, advancements in Natural Language Processing (NLP) introduced the ability to pose queries in natural language, but those systems were limited by restricted vocabularies and ad hoc syntax.

LLMs overcome these limitations by presenting results in the form of a narrative, featuring simple language and relevant charts that are generated automatically.
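At its simplest, the narrative idea means turning computed statistics into plain-language text. The minimal sketch below uses a fixed template as a hypothetical stand-in; in practice an LLM would write the narrative and a chart would be generated alongside it.

```python
# Minimal sketch: turn computed statistics into a plain-language
# narrative. A real system would ask an LLM to write this text;
# the template here is a hypothetical stand-in.

def summarize(name, values):
    """Build a short natural-language summary of a numeric column."""
    lo, hi = min(values), max(values)
    mean = sum(values) / len(values)
    return (f"{name} ranges from {lo} to {hi} with an average of "
            f"{mean:.1f} across {len(values)} records.")

print(summarize("Monthly sales", [120, 95, 143, 160, 102]))
# Prints: Monthly sales ranges from 95 to 160 with an average of
# 124.0 across 5 records.
```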

The AI-Friendly Future

Intelligent data exploration not only enables organizations to take advantage of their data without waiting to hire more data scientists, but also plays a critical role in building trust in AI solutions, fostering more widespread, fair, and ethical use of AI. This increased trust will contribute to more responsible deployment and wider acceptance of AI solutions across many domains, positively impacting society as a whole.

By improving resource allocation, providing transparent and interpretable AI solutions, and offering user-friendly tools, intelligent exploration platforms like Virtualitics pave the way for wider AI adoption, allowing all organizations to reap the benefits of these transformative technologies. By leveraging XAI, Generative AI, and rich visualizations to guide users through the analysis of complex datasets, Virtualitics is working toward a future where AI-guided data analysis is not just attainable but a powerful ally.

About the author

Ciro Donalek is a leading expert in Artificial Intelligence and data visualization. As a Computational Staff Scientist at Caltech, he successfully applied Machine Learning techniques across many scientific fields, co-authoring over a hundred publications in major journals (e.g., Nature, Neural Networks, IEEE Big Data, Bioinformatics). He holds several patents in AI and 3D Data Visualization, including those that define the Virtualitics AI platform.

Dr. Donalek is passionate about teaching and public outreach and has given many invited talks around the world on Machine Learning, Immersive technologies and Ethical, Interpretable and Explainable AI. He holds a Ph.D. in Computational Sciences / Artificial Intelligence (University Federico II of Naples, Italy) and an MS in Computer Science (University of Salerno, Italy). He is married with two children.