Conversational AI or generative AI?
In this article, we guide you through the debate between conversational AI and generative AI.
Of a completely different nature to generative AI, conversational AI aims to simplify interactions between applications (and other data sources, such as connected objects or websites) and their users. Business applications retain all their functional characteristics (capabilities, data, security, etc.), but access to them becomes immediate and straightforward.
Encounters of the third kind
Encounters of the first kind: Web
Technological developments in the 1990s (fast Internet, Web, JavaScript, XML, etc.) led to the advent of the Cloud in the early 2000s. They then encouraged the gradual spread of Web interfaces for business applications.
Encounters of the second kind: Mobile applications
Ten years later (2007), the arrival of Apple’s first iPhone and its application “store” offered rich, evolving mobile access. Clearly, it paved the way for a revolutionary and rewarding user experience.
Encounters of the third kind: Conversational access
In this new phase of bringing applications closer to their users, conversational AI is at the forefront. Indeed, both needs and uses have diversified. For example: booking a meeting room, declaring an absence, approving an invoice, reporting a quality defect, finding a procedure, and so on.
Enabling users to directly use “their words” to satisfy their requests brings a double benefit:
- Less effort and time wasted interacting with applications;
- Zero training. As language is a universal capability, everyone is immediately able to use their company’s applications in a direct and natural way.
A subtle approach
Compared with the steamroller of generative AI, conversational AI is a fine art. Each intention is carefully decoded from the user’s sentences or dialogues, and its attributes (dates, proper nouns, etc.) are extracted. All this is then transformed into an API call to the application, and the request is processed exactly as if it had come from the web interface or the mobile application.
No room for errors or hallucinations.
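To make the pattern concrete, here is a minimal sketch in Python of how a decoded intent and its extracted attributes might be mapped onto an existing business API. The intent name, entity labels, endpoint URL and payload shape are all illustrative assumptions, not Agora Software’s actual implementation.

```python
import requests  # any HTTP client would do; assumed available here

# Hypothetical output of the NLU step for the sentence:
# "Book the Denver meeting room tomorrow from 2pm to 3pm"
nlu_result = {
    "intent": "book_meeting_room",   # decoded intention
    "entities": {                    # extracted attributes
        "room": "Denver",
        "date": "2025-06-12",
        "start": "14:00",
        "end": "15:00",
    },
    "confidence": 0.94,
}

# Map each intent to the business application's existing API.
# Endpoint and payload shape are illustrative assumptions.
INTENT_TO_ENDPOINT = {
    "book_meeting_room": "https://rooms.example.com/api/v1/bookings",
}

def call_business_api(nlu: dict) -> dict:
    """Turn a decoded intent and its entities into the same API call
    the web or mobile front end would make."""
    url = INTENT_TO_ENDPOINT[nlu["intent"]]
    response = requests.post(url, json=nlu["entities"], timeout=2)  # 1-2 s budget
    response.raise_for_status()
    return response.json()
```

Because the conversational layer only assembles a structured call to an API the application already exposes, the application’s own validation, security and business rules apply unchanged.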
Useful to all
Whatever its sector of activity, business model or even size, a company’s strategy must take into account two crucial components.
- Improving employee productivity
- Increasing customer loyalty
Yet, according to Gartner, 57% of employees in direct contact with customers claim to be inadequately prepared to succeed in their interactions.
Generally speaking, employee use of and satisfaction with their applications are far from optimal.
- On average, 40 minutes a day are spent switching from one application to another;
- 47% of employees have difficulty finding the information they need to work effectively;
- 78% of employees do not master the tools they use on a daily basis.
Adaptability first and foremost
Conversational AIs face very specific constraints.
- Need for speed – Waiting between 5 and 15 seconds for a search in a knowledge base is perfectly acceptable. But for conversational access, the user expects a speed equivalent to that of an exchange with another person. That’s one to two seconds at most.
- Access to proper names – Whether for employees, customers, sites or departments within the organization: it’s not possible to be generic when responding to requests that are highly contextual to each company.
- Business vocabulary management – Each company has its own specific acronyms (or even processes), its own jargon… Here again, standard corpora are of little help, and the adaptability of the conversational solution is of prime importance (see the configuration sketch after this list).
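As an illustration of how such adaptability might be configured, here is a minimal sketch assuming a hypothetical tenant-level configuration; every field name, site, acronym and value is invented for the example and does not describe a documented format.

```python
# Hypothetical tenant-level configuration for a conversational assistant.
# All field names and values are illustrative assumptions.
TENANT_CONFIG = {
    "latency_budget_ms": 2000,  # conversational answers expected within 1-2 s
    "proper_names": {
        # company-specific entities that generic models cannot know
        "sites": ["Lyon-Gerland", "Denver", "Singapore-HQ"],
        "departments": ["Quality", "Facilities", "Payroll"],
    },
    "business_vocabulary": {
        # acronyms and jargon mapped to their canonical meaning
        "RTT": "paid rest day",
        "QNC": "quality non-conformity report",
        "TMS": "transport management system",
    },
}

def expand_jargon(utterance: str, config: dict = TENANT_CONFIG) -> str:
    """Naive pre-processing step: replace known acronyms with their
    canonical meaning before the utterance reaches the NLU."""
    for acronym, meaning in config["business_vocabulary"].items():
        utterance = utterance.replace(acronym, meaning)
    return utterance

# e.g. expand_jargon("Declare a QNC on site Lyon-Gerland")
```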
Desperately seeking a typical use
There isn’t really a typical use for conversational AI. At least, there are so many possible uses that it’s hard to pick one over the others.
The fundamental reason is that language is universal. We think, design, specify, decide, and organize our personal and professional activity and our interactions with others on the basis of language. If conversational access to digital tools hasn’t caught on sooner, it’s because reliable and affordable technology was lacking, not the need.
In the words of Gartner:
- Customer facing: access to e-commerce sites, customer support, recruitment, etc.
- Employee facing: here, virtually all enterprise software is relevant. For example: HCM, CRM, ERP, TMS, resource management (meeting rooms, parking, etc.), specialized applications (lawyers, notaries, real estate, etc.) or generic applications (the Microsoft suite, etc.).
An affordable, predictable pricing model
We advocate a pricing model for conversational AI aligned with that of most business applications. In other words, a price per user, and not per transaction as with LLMs.
For a very low and predictable budget, it’s possible to “make” applications speak in all languages and on all available media (including voice).
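Purely as a back-of-the-envelope illustration of why per-user pricing is easy to budget, the sketch below compares a flat per-user fee with usage-based token billing; every figure is hypothetical and none reflects an actual price.

```python
# Back-of-the-envelope comparison with purely hypothetical figures,
# only to illustrate why a per-user price is predictable.
users = 500
price_per_user_per_month = 5.0                 # hypothetical flat rate (EUR)
fixed_monthly_cost = users * price_per_user_per_month  # 2500.0, known in advance

# Token-based (per-transaction) billing scales with usage instead:
requests_per_user_per_day = 12                 # hypothetical
tokens_per_request = 1500                      # hypothetical
price_per_million_tokens = 10.0                # hypothetical (EUR)
variable_monthly_cost = (
    users * requests_per_user_per_day * 22     # ~22 working days per month
    * tokens_per_request / 1_000_000
    * price_per_million_tokens
)  # ~1980 EUR here, but it moves with every change in usage or token price
```

The point is not which number is lower in this invented scenario, but that the per-user figure is fixed in advance while the per-transaction figure fluctuates with usage.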
Security and confidentiality
Security and confidentiality are key issues for conversational AI applications.
Indeed, depending on the applications concerned, sensitive and personal data may be exchanged between users and business applications in the form of natural language. For example, in an HCM application, information on remuneration, employee appraisals and even health may be mentioned in conversations.
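One common mitigation, sketched below purely as an illustration (it does not describe any particular product), is to mask sensitive attributes before conversation logs are stored or shared; the entity labels and patterns are assumptions.

```python
import re

# Illustrative patterns only; real deployments would rely on the NLU's
# entity recognition rather than regular expressions alone.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SALARY": re.compile(r"\b\d{4,6}\s?(?:EUR|€)"),
}

def redact(utterance: str) -> str:
    """Replace sensitive spans with placeholders before logging."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        utterance = pattern.sub(f"<{label}>", utterance)
    return utterance

# redact("My salary is 52000 EUR, contact me at jane.doe@example.com")
# -> "My salary is <SALARY>, contact me at <EMAIL>"
```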
Environmental impact: the blacksmith and the watchmaker
Generative AI: the sledgehammer
The study of LLM energy consumption is a new subject. The results have not yet stabilized. However, the following recent articles offer some pointers:
- Boris Gamazaychikov (Senior Manager, Emissions Reduction at Salesforce): Reducing the Carbon Footprint of Generative AI
- Greg Smith (Managing Partner at Arthur D. Little) et al.: Environmental Impact of Large Language Models
It seems that the environmental impact of generative AI is significant, even if it remains modest compared with the overall impact of information technologies; we haven’t yet reached the situation of blockchains.
In addition, there are avenues for optimization (e.g. specialized LLMs), but it remains to be seen what industry players will do with them.
Conversational AI: the watchmaker’s screwdriver
While further studies are still needed, there’s no doubt that conversational AIs are two orders of magnitude less resource-intensive than generative AIs. And there are good reasons for this.
- Lightweight training – Conversational networks are much smaller than generative networks. In fact, they can be trained in just a few minutes (compared to the weeks and months required for LLMs), as the sketch after this list illustrates;
- Optimized querying – Thanks to their specialization, conversational AIs are able to respond very quickly and with very few resources. This is in contrast to LLMs, which require gigantic networks even for trivial queries.
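As a rough illustration of how small a conversational model can be, the sketch below trains a tiny intent classifier on a handful of invented utterances in well under a second on an ordinary laptop; it uses scikit-learn and stands in for, rather than reproduces, any production approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: utterance -> intent label.
utterances = [
    "book the Denver room tomorrow afternoon",
    "I need a meeting room for four people",
    "declare that I will be absent next Friday",
    "report two days of sick leave",
    "approve invoice 4512",
    "validate the supplier invoice from Acme",
]
intents = [
    "book_meeting_room", "book_meeting_room",
    "declare_absence", "declare_absence",
    "approve_invoice", "approve_invoice",
]

# A linear model over TF-IDF features trains in milliseconds,
# in contrast with the weeks of compute needed to pre-train an LLM.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

print(model.predict(["can I reserve a room for Monday morning?"]))
```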
Conversational AI or generative AI: each in its own role
Generative AI vs. conversational AI is not, therefore, a quarrel over words or a clash of schools (hoagie vs. sub? cookie or biscuit?). The distinction is genuinely semantic: they don’t serve the same purpose.
As proof, it is possible to combine them:
- Conversational AIs can take advantage of generative AIs, for example as assistants for administering and defining the NLP of each application;
- Generative AIs can improve their operational performance by relying on conversational AIs, in particular to limit the risk of hallucination, optimize costs and multiply means of access (a sketch of such a hybrid follows below).
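Sketched below, under purely illustrative assumptions (all functions are stand-ins, not real product calls), is one way such a hybrid can be wired: the conversational layer answers what it understands with deterministic API calls, and only hands open-ended requests to a generative model.

```python
# Hypothetical hybrid routing. All names and thresholds are illustrative.
CONFIDENCE_THRESHOLD = 0.8

def detect_intent(utterance: str) -> dict:
    """Stand-in for the conversational NLU step (see the earlier sketch)."""
    if "room" in utterance:
        return {"intent": "book_meeting_room", "confidence": 0.94}
    return {"intent": None, "confidence": 0.1}

def call_business_api(nlu: dict) -> str:
    """Stand-in for the deterministic, structured API call."""
    return f"OK, executing {nlu['intent']} via the application's API."

def ask_generative_model(utterance: str) -> str:
    """Stand-in for a (costlier) call to an LLM, used only as a fallback."""
    return "Delegating this open-ended request to a generative model."

def handle_utterance(utterance: str) -> str:
    nlu = detect_intent(utterance)
    if nlu["confidence"] >= CONFIDENCE_THRESHOLD:
        # Deterministic path: no hallucination risk, minimal compute.
        return call_business_api(nlu)
    # Fallback path: LLM usage (and cost) limited to what actually needs it.
    return ask_generative_model(utterance)

print(handle_utterance("Book the Denver room tomorrow at 2pm"))
print(handle_utterance("Summarize last quarter's customer feedback"))
```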
Comparing conversational AI and generative AI is a bit like comparing rugby and soccer: we’re not playing on the same field, nor with the same rules. So there’s no match. Which doesn’t mean we can’t experience unforgettable emotions with each of them, let’s face it, especially rugby 🙂
Did you like this article? Then we recommend that you read our previous article “Generative AI or conversational AI… Ready to rumble”.
Do you have any questions? Contact us at contact@agora.software
Want to understand how our conversational AI platform optimizes your users’ productivity and engagement by effectively complementing your business applications?