The Benefits and Limits of LLM and Prompt Engineering in Business Processes and Customer Service

Stephen Booze
Sep 24, 2024 4:55:47 AM

Large language models (LLMs) and prompt engineering have finally taken center stage in AI as the nexus of human-machine interaction and business process, with ChatGPT as the most famous example. Language tasks account for 62% of employee work time, and 40% of that time can be made more productive through augmentation and automation, according to a recent Accenture study.

Although the Eliza chatbot, introduced in the late 1960s, is sometimes cited as the earliest ancestor of the LLM, it wasn't until the late 90s that things heated up with Long Short-Term Memory (LSTM) networks. But it is the sophistication and scale of the LLMs behind the GPT (Generative Pre-trained Transformer) family that made business and the world take notice in 2022. OpenAI has not disclosed the size of GPT-4, but its predecessor GPT-3 contained 175 billion parameters and was trained on roughly 45 TB of raw text.

ChatGPT and other LLM solutions have driven groundbreaking advances in natural language processing (NLP). Since then, things have quickly moved on from the general-purpose chatbot to even more sophisticated deep learning models with as many as 65 billion parameters or more. This expansion provides businesses with much more powerful tools that advance data and communication areas like robotic process automation (RPA), customer service, and data analytics in profound ways. But like all technologies that evolve quickly, LLMs and prompt engineering have limits as well as benefits at every stage of discovery and adoption.

Generative AI is the broad term used for the form of AI focused on producing original content on demand rather than just analyzing and classifying existing data. Complex machine learning systems, known as foundation models, are trained on massive data sets that are made up of text, images, audio, or a mix of data types. Using training sources consisting solely of text is the basis for LLMs, which are a subset of foundation models.

Most LLMs, like OpenAI's GPT-4, are trained as next-word or content prediction engines that businesses use without modification. Today, many businesses use customized LLMs fine-tuned to understand the specifics of their industry's language as their users speak it. End users access most LLMs via an application programming interface (API) that lets them set parameters guiding the LLM's response. This is where prompts and prompt engineering play a vital role.

The text presented to the chatbot/LLM is known as a prompt, and it must be highly specific and accurate to get the best response. Prompt engineering is the craft of designing that prompt. By carefully crafting prompts, users can enhance an LLM's relevance, accuracy, and responsible use of Generative AI in various applications.
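The idea can be sketched in a few lines of code. The helper below assembles a structured prompt from a role, business context, and constraints; every name and string in it is illustrative, not part of any vendor's API.

```python
# A minimal sketch of prompt engineering: compose a specific,
# context-rich prompt from reusable parts before sending it to an LLM.

def build_prompt(role, context, question, constraints=None):
    """Assemble a structured prompt string for an LLM."""
    lines = [
        f"You are {role}.",
        f"Context: {context}",
        f"Question: {question}",
    ]
    if constraints:
        lines.append("Constraints:")
        lines.extend(f"- {c}" for c in constraints)
    return "\n".join(lines)

prompt = build_prompt(
    role="a support agent for a retail bank",
    context="The customer reports a duplicate charge on their card.",
    question="Draft a polite reply explaining the next steps.",
    constraints=["Do not request the full card number",
                 "Keep the reply under 120 words"],
)
print(prompt)
```

The same assembled string could then be passed to any LLM endpoint; the gain comes from making role, context, and constraints explicit rather than relying on a one-line question.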

For now, the focus will be on the use of LLMs and prompt engineering in the enterprise, where GPT-4 is increasingly used. This is mostly with enterprise data in conjunction with business process automation (BPA) and RPA, or to support customer service or data analytics. It requires fine-tuning based on industry specifics and can yield tremendous benefits to businesses.

The datasets behind an LLM can be vast, and prompt engineering offers many avenues of refinement. Both are subject to the old adage of 'garbage in, garbage out,' which can limit their accuracy and efficacy.

LLM, BPA and RPA Integration Benefits

LLMs have become the latest technology enabling useful business automation through integration with Generative AI solutions like ChatGPT. Broad consumer use of ChatGPT may have peaked in early 2023, when Reuters identified it as having the fastest-growing user base at over 100 million. Despite a leveling off, it is finding new uses and benefits in the enterprise with RPA and BPA. The many uses and benefits of RPA have been explored in earlier blog posts, but LLMs like ChatGPT and others have opened new possibilities.

It’s important to note that Generative AI and RPA are distinct forms of automation that affect business processes in different ways. Generative AI and today’s LLMs bring a tremendous amount of flexibility, adaptability, and creativity to RPA and BPA decision-making. While RPA and BPA can automate business processes, reduce costs, and increase productivity, Generative AI enables greater integration across disparate business processes through self-learning and adaptation. This can include everything from CRM and Microsoft suite integration to data analysis and reporting tasks.

Integrating BPA with LLMs and prompt engineering via ChatGPT can drive evaluation of specific datasets to optimize customer portfolios in sectors like healthcare, finance, retail, and many more. According to the same Accenture study, 97% of global executives say AI foundation models will connect across data types to revolutionize AI use. As the lines between business processes, BPA, RPA, Generative AI, and LLMs blur, they foster greater integration across the processes that connect to the broader area of customer service.

LLM for Customer Service and Support

Labor, volume, velocity, and high-level decision-making have been the major challenges of customer service in the digital age. RPA has gone a long way toward supporting customer service in a myriad of ways, ranging from data management to agent support, but integrating Generative AI, LLMs, and prompt engineering takes it to the next level.

Generative AI interfaces like ChatGPT, built on LLMs and guided by prompt engineering, can seamlessly blend RPA and BPA processes with many other customer service tools and repositories. These include brand-specific expertise, customer interaction archives, predefined parameters, and personalized action identification across the enterprise. This integration can enhance:

  • Automation-driven customer feedback analysis
  • Sentiment labeling of incoming messages as positive, negative, or mixed, with bots routing the results to customer service and product development
  • Disparate data source integration for a complete picture of customer needs
  • AI-powered chatbots that automate IT help desk support for faster responses and a lighter tech-team workload
  • Real-time order-status tracking and customer updates
  • Streamlined integration of sales, marketing, and customer support
  • Automated follow-up emails, appointment scheduling, and sales activity tracking
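The feedback-analysis items above can be sketched as a tiny triage pipeline. In production an LLM would do the labeling; the keyword-based stand-in below (all keyword lists are illustrative assumptions) simply shows the shape of routing messages by sentiment.

```python
# Toy stand-in for LLM-based feedback triage: label each message as
# positive, negative, or mixed before routing it to customer service
# or product development. The keyword sets are illustrative only.

POSITIVE = {"great", "love", "fast", "helpful"}
NEGATIVE = {"broken", "slow", "refund", "disappointed"}

def classify_feedback(message):
    """Return a coarse sentiment label for one feedback message."""
    words = set(message.lower().split())
    has_pos = bool(words & POSITIVE)
    has_neg = bool(words & NEGATIVE)
    if has_pos and has_neg:
        return "mixed"
    if has_neg:
        return "negative"
    if has_pos:
        return "positive"
    return "neutral"

feedback = [
    "Love the new dashboard, very helpful",
    "Checkout is broken and support was slow",
    "Great product but shipping was slow",
]
labels = [classify_feedback(m) for m in feedback]
print(labels)  # → ['positive', 'negative', 'mixed']
```

Swapping `classify_feedback` for a call to an LLM with a well-engineered prompt leaves the rest of the pipeline unchanged, which is one reason these integrations slot so easily into existing RPA flows.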

Customer service will always be a delicate balance of benefits and challenges where mistakes can be enormously costly to businesses. While the combination of Generative AI, LLMs, and prompt engineering can deliver countless new benefits, they currently have limitations in customer service, such as:

  • Answers can have factual inaccuracies, fabricated information (known as hallucinations), and biases based on the data of the LLM or the context of its use
  • Proprietary data-sharing concerns that can lead to data breaches and adversarial attacks

Keeping humans in the loop is currently the best way to mitigate these potential problems until future generations of LLMs can address them. Ultimately, businesses must have the right level of understanding and expertise to implement effective LLM and prompt tools and applications in customer service. The fact that customer service, BPA, and RPA are all driven by data analysis is what makes LLMs beneficial while also exposing their current limitations.

LLM for Information and Data Analysis

Data will always be the foundation of Generative AI, LLMs, and prompt engineering, so data analysis and analytics are inherent to their use as well as being an end unto themselves. RPA, BPA, and customer service all rely on business and customer data to create everything from effective chatbots and human/machine support to the data analytics that drive long-term business decisions.

Integrating ChatGPT with business data sources and applications like content management systems (CMS), customer relationship management (CRM), and enterprise resource planning (ERP) can drive data analytics in new ways. Just a few of those ways include:

  • Lead generation and qualification
  • Automated gathering and analysis of marketing data to improve campaign effectiveness
  • Customer data analysis to anticipate needs through personalized support and recommendations
  • Automated customer segmentation by behavior and interests to tailor marketing messages to specific segments
  • Tracking of website traffic, social media engagement, and email open rates to deliver data-driven insights
  • Analysis of data from multiple sources across complex data ecosystems
  • Demand forecasting
  • Risk scoring and fraud detection
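As one concrete illustration of the demand-forecasting item above, a simple moving-average baseline shows the kind of signal an LLM-integrated analytics pipeline would build on. The sales figures are hypothetical; a real pipeline would pull them from an ERP or CRM.

```python
# A minimal demand-forecasting baseline: forecast next period's demand
# as the mean of the last `window` observations. Monthly unit figures
# below are made up for illustration.

def moving_average_forecast(history, window=3):
    """Return a naive forecast for the next period."""
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(history[-window:]) / window

monthly_units = [120, 135, 128, 150, 162, 158]
forecast = moving_average_forecast(monthly_units, window=3)
print(round(forecast, 1))  # → 156.7
```

A production system would replace this baseline with a proper forecasting model, but even here an LLM front end could let a business user ask for the forecast in plain language and have the pipeline run behind the scenes.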

This is only a fraction of the current possibilities, with some vendors creating new LLMs that can do far more and new applications and integrations appearing daily.

Defining Business LLM and Prompt Engineering Use Focus

It’s important to remember that even with new Generative AI, LLM, and prompt engineering sources, solutions, and applications appearing almost weekly, this is the beginning of the journey. RPA, BPA, and customer service are just three broad areas where new possibilities are constantly being discovered, and the Venn diagram of the three areas continues to overlap to a greater degree.

New and integrated LLM-driven applications are appearing constantly, such as Microsoft 365 Copilot, which integrates OpenAI's GPT models with the Microsoft suite. An experimental LLM-driven application known as AutoGPT attempts to pursue user-defined goals autonomously; combined with RPA, it can analyze an existing task, surface potential problems, and propose an answer.

The caveat is that using off-the-shelf Generative AI and LLMs, or building them into new integrated applications, is a complex process. Most organizations will need support in developing a roadmap based on desired outcomes and the business use case, ideally from a partner with in-depth experience in areas such as RPA, BPA, customer service, and solutions like ServiceNow. This approach lets a company start from the inside out and determine the approach that best meets its needs based on outcomes, budget, and time.

Thank you for delving deep into the realms of AI, LLM, and integrated applications. If the vast possibilities have piqued your interest and you wish to discuss their real-world implementations, best practices, or future trajectories, please consider connecting with me on LinkedIn.

For a more in-depth chat, I encourage you to reach out directly. Whether you have questions, insights, or collaborative visions, I'm eager to hear. Together, we can chart the frontiers of technological advancement and business adaptability.
