How Will GPT-4 Impact Chatbot Usage?

Understanding GPT-4 and the implementation steps to take.

In March 2023, OpenAI – the research laboratory behind the wildly popular ChatGPT chatbot – released GPT-4, its newest multimodal model powering a variety of previously untapped use cases.

Before GPT-4, tools like ChatGPT could only receive text input from a user and generate a text response to the user's query. You could ask ChatGPT to write poems, answer trivia questions, or even write essays.

While ChatGPT was (and is) certainly impressive, it has limitations and isn’t always accurate. GPT-4 is a better, more advanced version. It can analyze both text and images (which previous versions could not) and is significantly more creative and capable than its predecessor. But what does this advancement mean for the contact center industry?

ChatGPT and Chatbots

The immediate comparison is to chatbots. Contact centers have been deploying chatbots (often referred to as virtual agents) for decades as a way for customers to quickly self-service their needs.

Chatbots have historically been used to address simple questions:

  • What are a store’s hours?
  • Where is the store located?
  • What is my account balance?

Chatbots, then, are an effective way to help consumers address many of their needs without having to call customer support, and they work around the clock.

OpenAI’s chatbot has opened the conversation (pun intended) on how advanced artificial intelligence (AI) technology like GPT-4 will affect the future of chatbots.

While there’s no doubt that ChatGPT’s capabilities and popularity will lead to further growth in chatbots and an increased use of AI, there are several overarching aspects to consider when deploying traditional chatbots and driving chatbot acceptance.

For contact centers and their customers, failure to responsibly and ethically design a chatbot can be detrimental and potentially damage reputations. There is a clear process that must be followed to properly build and deploy a chatbot that aligns with the values of the business and respects the will of the customer.

When considering deploying a chatbot, it’s essential for businesses to understand their customers’ pain points and what would lead them to contact the business in the first place.

Next, the organization should determine whether that pain is suitable to be addressed by a chatbot or if it is best handled by a human.

Once a business has established that the selected customer pain point is suitable to be handled by a chatbot, it can then determine how to design the chatbot to be most effective.

For example, if customers are frequently inquiring about a hotel’s room availability, a chatbot can be trained to meet their needs and streamline the communications. The chatbot can answer questions about availability and even help future guests choose the room that best meets their needs.

The primary benefits of chatbots are speed and convenience, so developing one that can quickly address customer issues can be a critical competitive advantage for organizations, leading to increased customer satisfaction and even driving sales and conversions. Chatbots are also affordable for organizations and can help them gain a deeper understanding of customers via data insights.


For customers, chatbots offer a way to self-serve any questions they may have (rather than having to call in) and are incredibly convenient. Most chatbots also feature 24-hour functionality, so customers don’t have to wait to get answers.

That being said, it’s not always easy for businesses to figure out the core questions that customers are asking. Customers will contact organizations through a variety of channels (calls, emails, messaging, social media messages, and more). This can make it difficult to determine which customer problems would be best served by a chatbot versus a live agent.

Generally speaking, however, chatbots are best used to respond to repetitive, straightforward questions, while live agents are better suited to more complex or sensitive inquiries (think medical conversations, banking questions, etc.).
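To make that split concrete, here is a minimal routing sketch in Python. The topic lists, keywords, and the fall-back-to-agent rule are purely illustrative assumptions, not part of any specific product or the process described above.

# A minimal routing sketch: send repetitive questions to the chatbot and
# complex or sensitive ones to a live agent. Keyword lists are hypothetical.
SENSITIVE_TOPICS = {"diagnosis", "prescription", "loan", "fraud", "dispute"}
REPETITIVE_FAQS = {"hours", "location", "balance", "tracking", "password reset"}

def route_inquiry(message: str) -> str:
    text = message.lower()
    if any(topic in text for topic in SENSITIVE_TOPICS):
        return "live_agent"
    if any(faq in text for faq in REPETITIVE_FAQS):
        return "chatbot"
    # When unsure, err on the side of a human agent.
    return "live_agent"

print(route_inquiry("What are your store hours?"))        # chatbot
print(route_inquiry("I need to dispute a fraud charge"))  # live_agent

In practice this routing decision would usually come from an intent model trained on real conversation data rather than hand-written keywords.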

LLMs and How They Work

Fortunately, modern contact center technology can develop Large Language Models (LLMs) – like GPT-4 – that are trained on a history of customer call transcripts or chat conversations. GPT-4 is just the latest in a series of advancements around LLMs, but the ability to create similar technology has been available for a number of years.

Conversations with customers (including calls, chat, email, etc.) are recorded, transcribed, and analyzed to determine their contact reasons and pain points.

Contact centers can then use this data to develop custom LLMs that are unique to their businesses and can be used to create chatbots. These chatbots are customized to individual businesses and their customers, rather than being one-size-fits-all models.
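As a rough illustration of that analysis step, the sketch below tags transcripts with contact reasons using a hand-written keyword map. The reason labels and keywords are assumptions made for illustration; real deployments would typically use a trained classifier or clustering over the business's own transcript data.

from collections import Counter

# Hypothetical contact-reason taxonomy; a real one comes from the business's own data.
CONTACT_REASONS = {
    "billing": ["invoice", "charge", "refund"],
    "availability": ["in stock", "available", "room"],
    "account_access": ["password", "locked out", "login"],
}

def tag_contact_reason(transcript: str) -> str:
    text = transcript.lower()
    for reason, keywords in CONTACT_REASONS.items():
        if any(keyword in text for keyword in keywords):
            return reason
    return "other"

transcripts = [
    "Hi, I was double charged on my last invoice.",
    "Do you have a room available next weekend?",
    "I'm locked out of my account again.",
]

# The counts show which pain points are frequent enough to hand to a chatbot.
print(Counter(tag_contact_reason(t) for t in transcripts))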

This is the power of modern insights and data analysis tools within contact centers: the technology enables businesses to gain a deeper understanding of their customers, their motivations, needs, and pain points.

For example, life insurance customers will have different needs, questions, and legal requirements than customers for a retail company. It would be a mistake for them both to use a generic LLM to train their chatbots. Custom, individual chatbots that are trained on customer data are far more effective (and helpful) and can frequently anticipate customer needs and answer questions.

GPT-4 and LLM Challenges

Customers often use chatbots to seek quick answers to their questions. But poorly trained chatbots have the potential to frustrate customers by not being able to answer their questions, or even worse, answering them incorrectly.

The challenge with GPT-4 and other LLMs is that they are trained on massive, wide-ranging public data sources and can lose domain specificity when deployed without proper guardrails. In short, they are not customized to the individual needs of a business; implemented carelessly, they can be a recipe for customer churn and negative brand perception.

Additionally, many of the popular AI chat tools have doled out remarkably bad or inaccurate (i.e., hallucinated) advice. Microsoft’s AI-powered Bing search engine, for example, made waves when it advised a New York Times reporter to end their marriage.

Clearly, businesses need to be diligent when employing AI-powered chatbots to ensure that they always give sound, accurate information. Chatbots providing inaccurate responses could lead to a series of legal and reputational challenges that have the potential to destroy a company.

Steps to Take

This is not to say that AI-powered chatbots are not effective. The key to building a chatbot you can trust is to not only first train it on the private data of an organization, but also implement strong, well-thought-out guardrails around the questions it will and won’t address.

One of the most important steps is to add restrictions such as banned keywords and topics. This means that there are certain subjects that the chatbot won’t respond to or engage with, such as accepting Social Security numbers, revealing account information, or discussing medical records. This could extend further to anything that each individual business decides is beyond the scope of the chatbot.
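A simple version of such a restriction can be a pre-check that runs before any response is generated. The patterns and refusal message below are placeholder assumptions for illustration; each business would define its own banned list.

import re

# Placeholder patterns for out-of-scope content (SSNs, account info, medical records).
BANNED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # looks like a Social Security number
    re.compile(r"account (number|balance)", re.I),   # account information
    re.compile(r"medical record", re.I),             # medical records
]

REFUSAL = ("I'm not able to help with that here. "
           "Please contact a live agent for assistance.")

def apply_guardrails(user_message: str, generate_reply) -> str:
    # Refuse out-of-scope topics before any model reply is generated.
    if any(pattern.search(user_message) for pattern in BANNED_PATTERNS):
        return REFUSAL
    return generate_reply(user_message)

# Example with a stand-in reply function instead of a real model call.
print(apply_guardrails("My SSN is 123-45-6789", lambda msg: "(model reply)"))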


By limiting the functionality to a narrower set of questions and topics, businesses can avoid unpleasant situations where the chatbot might come back with an incorrect or embarrassing response.

The next step is to further define the chatbot by building in-depth, context-rich prompts with behaviors and outcomes learned from conversation data, along with specific verbiage for common conversation parts such as openings and closings.
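A simplified sketch of that kind of prompt assembly is shown below. The hotel persona, scope restrictions, and the opening and closing lines are invented examples of such verbiage, not taken from any real deployment.

# Hypothetical system prompt template with placeholders for the scripted
# opening and closing lines learned from conversation data.
SYSTEM_PROMPT = """\
You are a booking assistant for Hotel Example.
Only answer questions about room availability, amenities, and rates.
If asked about anything else, politely direct the guest to the front desk.
Always open with: "{opening}"
Always close with: "{closing}"
"""

def build_prompt(opening: str, closing: str) -> str:
    return SYSTEM_PROMPT.format(opening=opening, closing=closing)

prompt = build_prompt(
    opening="Welcome to Hotel Example! How can I help with your stay?",
    closing="Is there anything else I can help you with today?",
)
print(prompt)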

It’s also worth remembering that once a chatbot has been responsibly developed, it will continue to improve. Brands can analyze the success of their chatbots – as well as any pain points consumers feel when using the chatbot. They can use those lessons to improve the automated process and continuously refine the bot.
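One common, if simplified, way to measure that improvement over time is a containment rate: the share of conversations the bot resolves without a hand-off to a live agent. The conversation records and field names below are assumptions used only for illustration.

# Hypothetical conversation log; field names are illustrative only.
conversations = [
    {"resolved_by_bot": True,  "escalated": False},
    {"resolved_by_bot": False, "escalated": True},
    {"resolved_by_bot": True,  "escalated": False},
]

contained = sum(1 for c in conversations
                if c["resolved_by_bot"] and not c["escalated"])
containment_rate = contained / len(conversations)
print(f"Containment rate: {containment_rate:.0%}")  # 67%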

Final Thoughts

ChatGPT – and now GPT-4 – have led to a chatbot/virtual agent gold rush, and businesses are unsurprisingly scrambling to capitalize on the movement and develop products that can better serve their customers.

GPT-4 is just the latest iteration, further improving and enhancing what chatbots can already do. There’s no question that this innovation will lead to further chatbot improvements and increased acceptance of AI technology.

However, organizations rushing to embrace this novel technology must prioritize building a tool that is customized to their business and has guardrails, both for the safety of their customers and to protect their own reputation.

When organizations follow these steps, the incremental capabilities of GPT-4 over earlier LLMs like GPT-3 (understanding more complex inputs, greater steerability, and a lower likelihood of responding to inappropriate topics or requests) can make for more effective chatbots, and in doing so increase chatbot usage and acceptance.

Scott Kolman

Scott Kolman is Chief Marketing Officer at Cresta. Scott has an extensive background in SaaS and enterprise software and is a recognized professional with expertise in cloud contact center, customer experience, and customer service. Scott is deeply involved in the contact center industry and has spoken at various conferences.
