Let’s begin here: Yes, the opportunities for Generative AI (GenAI) are immense. Yes, it is transforming the world as we know it (and faster than most of us predicted). And yes, technology is getting smarter. However, the implications of GenAI, with its ability to generate text, imagery, and narratives, for enterprises and businesses are very different from its impact on the general public. After all, most businesses don’t write poems or stories (a popular use of ChatGPT); they serve their customers.
Many companies have experience with natural language processing (NLP) and low-level chatbots, but GenAI is accelerating how data can be integrated, interpreted, and converted into business outcomes. Therefore, they need to quickly determine which GenAI use cases will solve their most pressing business challenges and drive growth. To understand how enterprises can make GenAI enterprise-ready with their data, it’s important to review how we arrived at this point.
The Journey from NLP to Large Language Model (LLM)
Technology has been trying to make sense of natural languages for decades. While human language itself is an evolved form of human expression, the fact that humans have evolved so many languages and dialects worldwide, from symbols and sounds into syllables, phonetics, and full languages, left technology relying on simpler digital communication methods of bits and bytes until relatively recently.
I started working on NLP programs almost a decade ago. Back then, it was all about language taxonomy and ontology, entity extraction, and a primitive form of graph database (largely in XML) used to maintain complex relationships and context between entities, make sense of search queries, generate a word cloud, and deliver results. There was nothing mathematical about it. There was a lot of Human in the Loop work to build out taxonomy databases, lots of XML parsing, and, most importantly, lots of compute and memory at play. Needless to say, some programs were successful, and most were not. Machine learning came next, with multiple approaches to deep learning and neural networks accelerating natural language understanding (NLU) and natural language inference (NLI). However, there were three limiting factors: compute power to process complex models, access to the volumes of data needed to teach machines, and, above all, a model that can self-learn and self-correct by forming temporal relationships between phrases.
Fast forward to today, and GPUs deliver massive compute power, self-teaching and evolving neural networks are the norm, supervised, unsupervised, and semi-supervised learning models all exist, and above all, there is greater access to massive amounts of data in many languages, including from various social media platforms, that these models can train on. The result is AI engines that can connect with you in your natural language, understand the emotion and meaning behind your queries, sound like a human being, and respond like one.
We all, through our social media presence, have unknowingly been the ‘Human’ in the ‘Loop’ training these engines. We now have models with hundreds of billions of parameters, trained on trillions of tokens, able to accept long, multi-modal inputs and respond to us in our language. Whether it is GPT-4/5, PaLM 2, Llama, or any other LLM published so far, they are emerging as more contextual, verticalized problem solvers.
Systems of Engagement and Systems of Record
While the journey from NLP to LLMs has been remarkable, thanks to silicon evolution, better data models, and the massive amounts of training data we have all generated, enterprises (retail providers, manufacturers, banks, etc.) each need very different applications of this technology. First, enterprises can’t afford AI hallucination: they need 0% hallucination and 100% accuracy for the users who interact with AI. A whole range of queries demands absolute accuracy to be of any business use, e.g., How many rooms are available in your hotel? Do you have a first-class ticket available?
To counter AI hallucination, enter the age-old concept of Systems of Engagement and Systems of Record. Systems of Engagement, whether with your customers, suppliers, or employees, can leverage a GenAI-based conversational platform out of the box once it has been trained on business-specific prompts; that is the “easier” part. The challenge is embedding Systems of Record into the value chain. Many businesses still live in a static, table- and entity-based world and will remain that way, because most enterprises are static at the organizational or corporate level, while events and workflows make them dynamic at the transactional level.
This is where next-generation conversational platforms come in: platforms that not only address conversations, interfaces, and queries, but also take customer journeys all the way to fulfilment. There are different architectural approaches to such platforms. One immediate option is a hybrid middleware layer that acts as a consolidator between vectorized, labelled enterprise data and LLM-driven conversational prompts, and delivers a 0% hallucination outcome to consumers.
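One way to picture such a middleware layer is a thin grounding service that only lets the model speak from retrieved records. The Python sketch below is illustrative, not a specific product: the names `RecordStore`, `Fact`, and `answer` are my own, and the LLM call is stubbed out. The key behaviour is that the service declines to respond when no record backs the answer, rather than letting a model guess.

```python
# Sketch of a hybrid middleware layer: every user query is answered from a
# system of record first; the LLM would only rephrase facts it is handed,
# and the middleware refuses to answer when no record backs the response.
# All names here (RecordStore, Fact, answer) are illustrative stand-ins.

from dataclasses import dataclass


@dataclass
class Fact:
    entity: str
    attribute: str
    value: str


class RecordStore:
    """Toy stand-in for an enterprise system of record."""

    def __init__(self, facts):
        self._facts = facts

    def lookup(self, entity, attribute):
        for f in self._facts:
            if f.entity == entity and f.attribute == attribute:
                return f
        return None


def answer(store, entity, attribute):
    fact = store.lookup(entity, attribute)
    if fact is None:
        # No grounding record: decline rather than let a model hallucinate.
        return "I don't have that information on record."
    # In production, this grounded fact would be inserted into an LLM prompt
    # for natural phrasing; here we format it directly.
    return f"{fact.entity}: {fact.attribute} = {fact.value}"


store = RecordStore([Fact("Hotel A", "rooms_available", "12")])
print(answer(store, "Hotel A", "rooms_available"))  # grounded answer
print(answer(store, "Hotel B", "rooms_available"))  # refusal, not a guess
```

The design choice worth noting is the refusal path: accuracy comes from restricting the model to retrieved facts, not from the model itself.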
There is a massive amount of data-prep work required of enterprises to make their data intelligible to an LLM engine. We call this flattening the traditional table- and entity-driven data models. Graph databases, which represent and store data in ways relational databases cannot, are finding a new purpose in this journey. The goal is to convert enterprise databases into more intelligible graph databases, with relationships that define context and meaning, making it easier for LLM engines to learn and therefore respond to prompts from end customers through a combination of conversational and real-time queries. This task of making enterprise data LLM-ready is the key to providing an end-to-end Systems of Engagement to Systems of Record experience and taking user experiences all the way to fulfilment.
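As a toy illustration of this flattening, the sketch below converts relational rows into (subject, relation, object) triples that a graph store or an LLM retrieval layer could consume. The table schema and relation names are my own assumptions for illustration, not any particular enterprise model.

```python
# Sketch: "flattening" a relational table into graph triples
# (subject, relation, object), so relationships carry explicit
# context instead of being implied by columns and joins.
# Schema and relation names are illustrative assumptions.

orders = [
    {"order_id": "O-1", "customer": "Acme", "item": "SKU-9", "qty": 3},
    {"order_id": "O-2", "customer": "Acme", "item": "SKU-4", "qty": 1},
]


def rows_to_triples(rows):
    triples = []
    for row in rows:
        subject = row["order_id"]
        triples.append((subject, "placed_by", row["customer"]))
        triples.append((subject, "contains", row["item"]))
        triples.append((subject, "quantity", str(row["qty"])))
    return triples


triples = rows_to_triples(orders)

# A query is now a walk over named relationships rather than a SQL join:
acme_orders = [s for s, rel, o in triples if rel == "placed_by" and o == "Acme"]
print(acme_orders)  # ['O-1', 'O-2']
```

Because every edge is named (`placed_by`, `contains`), the context a table leaves implicit becomes explicit text that a retrieval layer can hand to an LLM.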
What Comes Next
At this point, with these advancements in data and AI, the most immediate impact is in software code generation, as evidenced by the rise of GitHub Copilot, Amazon CodeWhisperer, and other tools among developers. These tools are jumpstarting legacy modernization programs, many of which have stalled due to time and cost concerns. With code-generation tools powered by GenAI, we are seeing modernization projects accelerate their timetables by 20-40%. In greenfield development projects, these tools will allow developers to shift time and productivity savings toward design thinking and more innovative work.
Beyond software development, GenAI tools are leading to the creation of new vertical use cases and scenarios aimed at solving enterprises’ most pressing challenges, and we are just starting to scratch the surface of what needs to be done to take full advantage of this trend. Nonetheless, we are already solving several problems in the retail and logistics sectors by leveraging GenAI:
How much inventory do I have in the warehouse, and when should I trigger replenishment? Is it profitable to stock in advance? Is my landed price right or is it going to escalate? What items can I bundle or what kind of personalization can I provide to elevate my profit?
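The replenishment question, for instance, ultimately rests on a classic reorder-point calculation that a conversational front end can surface. A minimal sketch, with illustrative numbers (daily demand, lead time, and safety stock are placeholders, not real figures):

```python
# Sketch: the replenishment question answered with a standard reorder-point
# rule: reorder when on-hand stock falls to the expected demand over the
# supplier lead time plus a safety-stock buffer. Numbers are illustrative.


def reorder_point(daily_demand, lead_time_days, safety_stock):
    # Demand expected during the lead time, plus a buffer for variability.
    return daily_demand * lead_time_days + safety_stock


def should_replenish(on_hand, daily_demand, lead_time_days, safety_stock):
    return on_hand <= reorder_point(daily_demand, lead_time_days, safety_stock)


rp = reorder_point(daily_demand=20, lead_time_days=5, safety_stock=30)
print(rp)  # 130

print(should_replenish(120, 20, 5, 30))  # True: trigger replenishment
print(should_replenish(200, 20, 5, 30))  # False: stock is sufficient
```

In a GenAI deployment, the conversational layer would parse the question and call a deterministic function like this against live inventory data, so the numeric answer never comes from the language model itself.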
Answering these kinds of questions takes a combination of conversational front ends, highly accurate data-driven queries in the back end, and a domain-heavy machine learning model delivering predictions and future guidance. Thus, my advice for enterprises, whether you are an AI explorer or a Generative AI disruptor: partner with service providers that have proven AI expertise and robust data and analytics capabilities, which can arm you to capitalize on GenAI models suited to your business needs and help you stay ahead of the curve.
The post The Smart Enterprise: Making Generative AI Enterprise-Ready appeared first on Unite.AI.