From Hype to Help: Making GenAI Useful for Enterprise Reporting and Data Analytics


A data scientist makes a plea for intelligent, incremental use of generative AI.

[Image: a hand on a laptop keyboard with an overlay of multiple graphs. Credit: Deemerwha studio / Shutterstock.com © 2024]

The advent of generative artificial intelligence (GenAI) technologies has dominated the tech world and much of popular culture for the last few years. Very serious people and companies say much about the potential of AI to fundamentally change the world—from multiplying worker productivity to altering the nature of human consciousness. Like many other office workers, I have started integrating GenAI into my regular workflows for coding, researching, and making silly images to send to my coworkers in Slack. But these tools are not yet a revolution in workplace productivity. So, the question is this: Will GenAI tools ever live up to their hype? And if so, how will their impact be felt?

I am not an expert in general workplace productivity, the future direction of human evolution, or most other areas where AI is supposed to make its mark. However, I have deep experience in enterprise analytics and reporting, especially within higher education. As a practicing data scientist who was trained as an astrophysicist, I have a solid theoretical understanding of how GenAI algorithms work. Based on this background and my hands-on testing of GenAI tools to date, I believe these tools have the potential to improve enterprise analytics applications in the immediate future, but it will be on an evolutionary, rather than a revolutionary, trajectory.

Anatomy of a Non-Revolution

First, let's be clear about the revolution that some are promising, many are hoping for, and I think is implausible. The dream is to obviate the need for teams of people to prepare data sets and build pre-designed reports and dashboards for users to consume. Instead, the goal is to feed all the data into an AI engine and let it make sense of the data in real time as users ask the questions. One outcome is saving the time and expense needed for data engineers to prepare data sets. Perhaps more importantly, however, the tyranny of the pre-designed report or dashboard could end. Say goodbye to clunky user interfaces with hundreds of reports hidden in a nest of folders, dozens of confusing dropdown parameters, and never the right data in the right format at the right time. Users could ask the AI engine any question about a data set and not be limited to questions someone else decided to include when building a report.

This utopian vision is not coming any time soon. Below, I outline the many ways that GenAI tools will struggle to even approach this capability. I also point out some of the larger pitfalls and why some aspects of the dream scenario might have unintended consequences that make them less desirable than they initially appear. However, GenAI can still be a powerful tool for improving the user experience and back-end efficiency of an enterprise data analytics platform if it is employed judiciously and with the right supporting structures.

An Unpopular Opinion About Enterprise Data Analytics

Enterprise data analytics is hard. Enterprise data analytics should be hard and will always be hard. Anyone who says otherwise either doesn't understand the practice of reporting or is selling a bill of goods.

I realize that this conclusion might sound self-serving. After all, I have made a career in data analytics, and I have a vested interest in making what I do sound irreplaceable. Or perhaps I am naive and can't see the big picture because I am stuck in the weeds. But hear me out.

Enterprise data analytics is hard because higher education institutions are large, complex businesses, and their enterprise data assets must match this scale and complexity. My former institution had a multi-billion-dollar annual budget; over fifty thousand people might set foot on campus on a given day—making it large but not huge by higher education standards. Think of the enormous number of business processes and data transactions required to make that system work. Think of all the subtle nuances in the data, all the workarounds employed because the standard processes don't quite fit, and all the edge cases where decisions must be made to categorize data for reporting. Good enterprise reporting needs to account for all of this to provide reliable and accurate data. Moreover, data reporting is extremely context-dependent. There are good reasons why the compliance office, registrar's office, and budget office all have different numbers for how many students were enrolled last year, even if this results in no single definitive answer to last year's enrollment count.

GenAI, in anything like its current incarnation, cannot be brought in to suddenly make enterprise reporting easy. The scale of the business complexity is far beyond the scope of any currently available AI tool. However, as GenAI advances over the next few years, significant progress in this area is foreseeable. It's even likely that back-end systems will be adapted to account for the ever-growing ability of GenAI tools to handle complex cases. What I see as a more intractable issue in the reporting space is that GenAI is fundamentally not designed to have highly reliable and repeatable accuracy. Think about the "hallucination" issues that have gotten so much attention with chatbots and search engines and how disruptive that would be in a data analytics context.

GenAI Strengths and Weaknesses

Even if AI can't fundamentally alter the dynamics of enterprise data analytics in the foreseeable future, there is a path forward for leveraging GenAI tools in this space. The key will be to implement a robust underlying data infrastructure, which provides a structure that the AI can use to interpret the data while creating guardrails against misuse and overreach. This infrastructure is called the semantic layer because it is where all the data structures, business rules, and other foundational information that GenAI tools need to use the data appropriately are documented. Fortunately, the semantic layer required for GenAI tools dovetails quite nicely with what have long been best practices for human users.

The core GenAI technology that is likely to be employed for enterprise analytics applications is the large language model (LLM), the same type of algorithm that underpins ChatGPT, Google Gemini, and other popular GenAI chat and search tools. The basic concept is called "text-to-query," which means that the AI tool translates a user's question into computer code and then runs that code to generate the answer. Many variants of the theme exist, but the premise is to improve the user experience by enabling users to ask questions in natural language, thereby avoiding complex reporting interfaces. The principle is solid, and current GenAI tools have demonstrated the ability to handle simple use cases.
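To make the text-to-query idea concrete, here is a minimal sketch of the flow in Python. The table schema, the stub translator, and its single hard-coded pattern are all illustrative assumptions; in a real system, the translation step would be an LLM call with the schema embedded in the prompt.

```python
# Minimal text-to-query sketch: translate a natural-language question
# into SQL, then run it. translate_to_sql() is a hypothetical stand-in
# for an LLM call, with one hard-coded pattern so the flow is runnable.
import sqlite3

SCHEMA = "tuition(student_id INTEGER, fiscal_year INTEGER, amount REAL)"

def translate_to_sql(question: str) -> str:
    """Stand-in for the LLM: map a natural-language question to SQL."""
    if "tuition" in question.lower() and "2023" in question:
        return "SELECT SUM(amount) FROM tuition WHERE fiscal_year = 2023"
    raise ValueError("question not understood")

def answer(question: str, conn: sqlite3.Connection):
    sql = translate_to_sql(question)          # step 1: text -> query
    result = conn.execute(sql).fetchone()[0]  # step 2: run the query
    return sql, result  # return the SQL too, so users can audit it

conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE {SCHEMA}")
conn.executemany("INSERT INTO tuition VALUES (?, ?, ?)",
                 [(1, 2023, 5000.0), (2, 2023, 7500.0), (3, 2022, 4800.0)])

sql, total = answer("What was our tuition revenue in 2023?", conn)
print(sql)
print(total)  # 12500.0
```

Note that the function returns the generated SQL alongside the result; surfacing the query is one small step toward the transparency concerns discussed below.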

However, I am deeply skeptical of whether this approach can scale to more complex and realistic data sets, at least not without a significant semantic layer set up by data engineering teams to enable the GenAI engine. There are at least four interrelated but distinct areas, each critical to sound enterprise data reporting, where GenAI tools will struggle without the help of a robust semantic layer. Let's examine each of these through the seemingly simple question: "What was our tuition revenue last year?"

1. Context

Data reporting is highly context-dependent, and human language is often imprecise. Do we mean tuition assessments or receipts? Last fiscal, academic, or calendar year? Tuition only or tuition and fees? All are valid choices, and every combination is potentially useful in a certain context but simply wrong in another. For instance, the bursar probably needs to slice and dice the data a hundred different ways, whereas there is exactly one number that matches the audited financial statements.

The average user, however, will almost certainly not fully understand these nuances. So, they will inevitably ask the question in imprecise language that does not specify all the context. How can the GenAI know which variant to choose? For each variant, how does it know the precise business rules to apply? A semantic layer can help to alleviate these issues by specifically encoding each of these different tuition revenue variants. Moreover, it should include a data dictionary component that explains each variant to the end users in at least enough detail that they can make sensible decisions about which one meets their particular use case.
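One way to picture this encoding is as a machine-readable data dictionary. The sketch below is purely illustrative: the variant names, definitions, and attributes are assumptions invented for this example, not any real institution's semantic layer.

```python
# Illustrative sketch of a semantic layer encoding tuition-revenue
# variants. Every name and definition here is a hypothetical example.
SEMANTIC_LAYER = {
    "tuition_assessed_fiscal": {
        "definition": "Tuition charges assessed, summed over the fiscal year.",
        "includes_fees": False,
        "basis": "assessments",
        "period": "fiscal_year",
    },
    "tuition_received_fiscal": {
        "definition": "Tuition payments actually received during the fiscal year.",
        "includes_fees": False,
        "basis": "receipts",
        "period": "fiscal_year",
    },
    "tuition_and_fees_academic": {
        "definition": "Tuition plus mandatory fees over the academic year.",
        "includes_fees": True,
        "basis": "assessments",
        "period": "academic_year",
    },
}

def describe_variants() -> str:
    """Render the data dictionary so end users (or an LLM prompt) can see
    exactly which variants exist and how they differ."""
    return "\n".join(f"- {name}: {meta['definition']}"
                     for name, meta in SEMANTIC_LAYER.items())

print(describe_variants())
```

The same structure serves two audiences at once: it can be injected into the GenAI tool's prompt to constrain its interpretation, and it can be rendered as human-readable documentation for end users.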

2. Transparency

In a well-governed data-reporting system, any results can be traced back to the underlying data they are based on, and the rules used to generate the results are clearly stated. Once the GenAI tool provides an answer, how does the user know what business rules were applied to generate it? Can staff drill down to see the underlying records? Perhaps most difficult of all, how are results validated?

The technical issues of code visibility and drill-down are trivial to solve. However, the human end users who get the results may not be able to read the generated computer code, and they almost certainly don't have the time, inclination, or skill set to do a row-by-row validation of every new query they run. What they need instead is access to a validated and governed data set where the complete set of business rules has been documented and verified by subject matter experts. Building a semantic layer that incorporates this information, and giving GenAI reporting tools access to it, is critical to enabling transparency and the trust it engenders.

3. Consistency

Generating reliable and consistent results is the sine qua non of enterprise reporting applications. When two users try to ask the same question, do they get the same result? The answer too often is "no" in traditional reporting systems, but GenAI only exacerbates the problem. Even if the AI parses context correctly and provides full transparency into its machinations, it must also apply business logic the same way every time. This requirement is especially demanding given the vagaries of natural language and the many subtle differences in phrasing that might be used to ask essentially the same question.

The key to providing consistent results is ensuring that the GenAI tools adhere to the data encodings and definitions provided in the semantic layer. Being able to ask completely novel questions and having the AI generate business rules on the fly may sound enticing, but that means never being sure the results can be replicated. Consistency requires choosing from predefined business rule variations, along with various dimensions for slicing and splitting the data, all of which must be encoded in the semantic layer.
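In code terms, that means the model's job shrinks from "write the business rules" to "pick one of the governed rules." The toy keyword matcher below stands in for the LLM's classification step; the metric names and keywords are illustrative assumptions.

```python
# Consistency sketch: the model may only choose among predefined,
# governed metric variants rather than inventing rules on the fly.
# A toy substring matcher stands in for the LLM's classification step.
ALLOWED_METRICS = {
    "tuition_assessed_fiscal": ["assess", "charg", "bill"],
    "tuition_received_fiscal": ["receiv", "collect", "paid"],
}

def resolve_metric(question: str) -> str:
    """Map differently phrased questions to one canonical, governed metric."""
    q = question.lower()
    for metric, keywords in ALLOWED_METRICS.items():
        if any(k in q for k in keywords):
            return metric
    # Fall back to a documented default rather than inventing a new rule.
    return "tuition_assessed_fiscal"

# Two phrasings of the same question resolve to the same metric:
a = resolve_metric("How much tuition did we collect last year?")
b = resolve_metric("What tuition was received last fiscal year?")
print(a, b)  # tuition_received_fiscal tuition_received_fiscal
```

The design choice that matters here is the closed vocabulary: whatever the phrasing, the answer is always computed from one of the predefined, documented variants, so it can always be replicated.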

4. Scope

Any given data set, by definition, has a limited scope of applicability. To risk stating the obvious, if we want to measure tuition revenues, the data set needs to have tuition as one of the variables it measures. And if we want to find out last year's tuition, but our table only includes currently enrolled students, that will likely be a problem. Will the GenAI be able to understand the scope of the data available to it and know if it is able to answer the question asked?

A critical test of GenAI tools for analytics will be their ability to understand the limitations of the data and not "hallucinate" an incorrect answer. The GenAI reporting application needs to know when to say, "I don't know." This "awareness" of limitations may be one of the most difficult issues to solve. However, doing so will certainly involve a semantic layer that adequately defines the relevant scope of the data, providing the guardrails for the GenAI to stay within.
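One plausible form of such a guardrail is an explicit scope check that runs before any query is attempted. The field names and year range below are invented for illustration; the point is only the shape of the check.

```python
# Scope-guardrail sketch: before answering, verify that the governed
# data set actually covers the question, and refuse if it does not.
# All field names and coverage ranges are illustrative assumptions.
from typing import Optional

DATASET_SCOPE = {
    "fields": {"tuition", "fees", "fiscal_year"},
    "years_covered": range(2019, 2025),  # 2019 through 2024
}

def check_scope(required_fields: set, year: int) -> Optional[str]:
    """Return None if the question is in scope, else a refusal message."""
    missing = required_fields - DATASET_SCOPE["fields"]
    if missing:
        return f"I don't know: this data set does not include {sorted(missing)}."
    if year not in DATASET_SCOPE["years_covered"]:
        return f"I don't know: no data is available for {year}."
    return None

print(check_scope({"tuition"}, 2023))  # None -> in scope, proceed to query
print(check_scope({"housing"}, 2023))  # refusal: housing is out of scope
print(check_scope({"tuition"}, 2010))  # refusal: year not covered
```

Encoding scope declaratively in the semantic layer means the refusal logic is deterministic code, not a behavior we merely hope the model exhibits.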

Data Engineering: More Critical Than Ever

As of mid-2024, GenAI tools for enterprise data analytics and reporting applications are in their infancy. One cannot simply take a GenAI tool off the shelf, point it at a data set (even one with a robust semantic layer), and get satisfactory results. While it is clear that the semantic layer will be critical to enabling the success of GenAI tools, many of the details of how to optimally structure it are still taking shape. This also means that the growing prominence of GenAI tools will not obviate the need for data engineering teams; it will make their work more critical than ever.

The crux of the matter is that GenAI is not an "easy button" for enterprise data analytics. I genuinely believe that it will lead to easier access to data, better user experiences, and, ultimately, more and better data-informed decision-making on campuses. Getting to that point will take time and experimentation as these new and evolving tools are integrated into existing reporting ecosystems. And it will rely even more heavily on having a highly structured and well-governed data model with a robust semantic layer as the linchpin of the data-reporting environment.

Better Is Better than Revolutionary

GenAI is not a revolutionary new technology that will completely overhaul my professional life in data analytics but rather a new and potentially useful tool to add to my toolbelt. My guess is that its impact on enterprise reporting and analytics will be akin to the advent of visual business intelligence tools like Tableau and Power BI a decade ago. While I love them and can hardly imagine going back to a time before them, they didn't completely change the nature of enterprise reporting or obviate the need for data engineers and business analysts to prepare data for end users to consume.

My frustration is seeing so much effort spent by smart people and great companies trying to get GenAI to work for moonshot projects that are improbable at best and entirely ill-conceived at worst. Meanwhile, I see far too little effort being spent on using GenAI for more prosaic but immediately useful incremental improvements. While some may have fully bought into the AI hype, many seem scared of being labeled boring if they aren't trying to upend the entire enterprise data ecosystem. Others may be looking for a shortcut to avoid the hard work that enterprise reporting necessarily entails.

My advice and plea: Let's stop trying to bring on a revolution that is unlikely to pan out, and let's stay laser-focused on doing whatever we can today to make things better than they were yesterday.


Craig Rudick is Data Scientist and Director of Product Strategy at HelioCampus.

© 2024 Craig Rudick. The content of this work is licensed under a Creative Commons BY 4.0 International License.