How GPT is driving the next generation of NLP chatbots

With AI and NLP racing to the frontier of tech, we look at how the two combine in the latest GPT iteration to power next-gen chatbots

Since OpenAI launched ChatGPT in November 2022, the rapidly advancing technology has dominated headlines globally.

Today, the technology is being used by businesses to assist with crucial tasks, from enterprise support and customer interaction to product development. Capable of generating human-sounding text, the tool is a powerful one for the next generation of chatbots and, by extension, omnichannel customer communications.

Large language models like GPT-4 or Google’s LaMDA use Natural Language Processing (NLP) to understand and respond to human-generated text inputs in a conversational manner. NLP is a subfield of AI focused on enabling computers to process and understand human language, and these models apply NLP techniques to analyse and interpret the text they receive, including tasks such as part-of-speech tagging, named entity recognition, sentiment analysis, and language modelling. These techniques help the models grasp the meaning, context, and intent behind the input, allowing them to generate relevant, coherent responses in a conversational style.
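To make those techniques concrete, the short Python sketch below shows what part-of-speech tagging, named entity recognition, and sentiment analysis look like when applied to a typical customer message. It uses the open-source spaCy and Hugging Face transformers libraries purely for illustration (both assumed to be installed, along with spaCy’s en_core_web_sm model); these are not the internals of GPT-4 or LaMDA, which learn such capabilities implicitly from training data.

```python
# Illustrative only: off-the-shelf libraries standing in for the NLP tasks
# named above; GPT-style models learn these capabilities implicitly.
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")          # small English pipeline (assumed installed)
sentiment = pipeline("sentiment-analysis")  # default English sentiment model

text = "My order from Acme arrived late and I'd like a refund."
doc = nlp(text)

# Part-of-speech tagging: the grammatical role of each token
print([(token.text, token.pos_) for token in doc])

# Named entity recognition: real-world things mentioned in the text
print([(ent.text, ent.label_) for ent in doc.ents])

# Sentiment analysis: the overall tone of the message
print(sentiment(text))
```

In a chatbot pipeline, outputs like these feed intent detection and routing; in a large language model they are not separate steps, but the same kinds of signal are captured inside the model itself.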

As the field of NLP continues to advance rapidly, the integration of GPT technology is propelling the next generation of chatbots to new heights. With their ability to understand and generate human-like text, GPT-powered chatbots are revolutionising customer interactions, virtual assistants, and other conversational applications.

Powering the future of chatbot technology

The way companies talk to their customers can significantly impact retention and business growth. A study of 1,000 US consumers, conducted earlier this year by customer communications company Intercom, found that three in four say communication that makes them feel valued is a top, or the most important, factor in continuing to do business with a brand.

A total of 64% went further, reporting they would leave a business if they didn't feel valued.

Furthermore, the study highlighted generational differences in the style and tone consumers want. Good customer support varies based on the person and situation, say the researchers, making it essential for businesses to deliver personalised communication grounded in context and a deep understanding of each individual customer.

As Colin Crowley, Senior Director of Customer Success at Freshworks, explains, the main purpose of a chatbot is to improve the productivity of customer-facing teams, increase customer satisfaction, and reduce the workload caused by live chat. 

“It doesn’t matter how impressive a product is or how clean a website looks – if a customer can’t get simple queries answered, updates on orders or speedy resolutions to their issues, they will leave feeling frustrated and disappointed with the brand,” he asserts. “The old adage that it takes months to win a customer and seconds to lose them has never been more true in the digital age, where customers expect instant gratification and fast responses.”

AI driving a new generation of chatbots

The widespread attention on large language models like GPT-4 has driven huge gains in interest in NLP and deep learning, according to O’Reilly’s annual Technology Trends for 2023 report, which examines the most sought-after technology topics.

The report shows that developer interest in generative AI is gaining momentum, with NLP showing the most significant year-over-year growth of any AI topic. In the world of NLP chatbots, one of the main roles GPT technology plays is improving the conversational quality and effectiveness of chatbot interactions. GPT-based chatbots can understand and respond to a wide range of queries and prompts, providing relevant and contextually appropriate responses. This has significantly enhanced the user experience, making chatbot interactions more human-like, engaging, and satisfying.

“The combination of chatbots and AI allows for far more advanced bots that can mimic human conversations, leveraging NLP, Natural Language Understanding (NLU), AI and ML technology to understand the intent of a query and offer solutions,” Crowley comments. 

“Thanks to AI, chatbots are transforming from rule-based solutions to virtual agents with strong conversational intelligence,” adds Leila Ghouddan, Global B2B Product Marketing Manager at Odigo. “These solutions can handle even more complex use cases thanks to advanced NLU capabilities: handling multiple entities and intents in one utterance.

“They are also capable of handling all nuances related to human conversations – including interruptions and multiple requests – and retain short-term and long-term memory to make conversations productive.”

Great care needed to deal with current limitations

With human-level performance on various professional and academic benchmarks, GPT-4 surpasses GPT-3.5 by a significant margin, exhibiting an increased ability to handle complex tasks and more nuanced instructions.

Despite its capabilities, however, GPT-4 has similar limitations as earlier GPT models. “Most importantly, it still is not fully reliable,” the OpenAI team says. “Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.”

Aaron Kalb, Chief Strategy Officer and Co-Founder at Alation, reiterates this final point: GPT should not yet be trusted to advise on important decisions. 

“That’s because it’s designed to generate content that simply looks correct with great flexibility and fluency, which creates a false sense of credibility and can result in so-called AI ‘hallucinations’.”

As Kalb – who, when working at Apple, was part of the founding team behind its groundbreaking Siri voice assistant – explains, the authenticity and ease of use that make GPT so alluring are also its most glaring limitation: “Only if and when a GPT model is fed knowledge with metadata context – essentially contextual data about where it’s located, how trustworthy it is, and whether it is of high quality – can these hallucinations or inaccurate responses be fixed and GPT trusted as an AI advisor,” he asserts.

“GPT is incredibly impressive in its ability to sound smart. The problem is that it still has no idea what it’s saying. It doesn’t have the knowledge it tries to put into words. It’s just really good at knowing which words ‘feel right’ to come after the words before, since it has effectively read and memorised the whole internet. It often gets the right answer since, for many questions, humanity collectively has posted the answer repeatedly online.”
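Kalb’s prescription of feeding a GPT model knowledge with metadata context corresponds to what is commonly called grounding, or retrieval augmentation. The Python sketch below is a minimal illustration of that idea only, not Alation’s or OpenAI’s prescribed method: the retrieve_documents helper, the metadata fields and the prompt layout are assumptions made for the example, with the openai v1 Python client used for the model call.

```python
# Minimal sketch of "grounding with additional context": retrieved passages,
# plus metadata about their source, freshness and trustworthiness, are placed
# in the prompt so the model answers from vetted material rather than guessing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def retrieve_documents(question: str, top_k: int = 3) -> list[dict]:
    """Hypothetical stand-in for a search or data-catalogue lookup.

    A real system would query a vector index or knowledge base and return
    passages with provenance metadata; this stub returns a fixed example.
    """
    return [
        {
            "source": "internal-kb/returns-policy",
            "updated": "2023-03-01",
            "trust_score": 0.95,
            "text": "Refunds are issued within 14 days of an approved return.",
        }
    ][:top_k]


def answer_with_context(question: str) -> str:
    docs = retrieve_documents(question)
    # Prefix each passage with its metadata so provenance travels with the text.
    context = "\n\n".join(
        f"[source: {d['source']} | updated: {d['updated']} | trust: {d['trust_score']}]\n{d['text']}"
        for d in docs
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Answer only from the provided context. "
                           "If the context does not contain the answer, say you don't know.",
            },
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

The metadata lines do not make the model infallible, but they give it, and any human reviewer, a basis for judging where an answer came from and how much to trust it.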
