Tech Corner

NLU Disambiguation - What to do when the NLU is not sure

Kinshuk Kar
Jul 20
5-6 mins


What is disambiguation?

Disambiguation, very simply, means removing ambiguity and confusion to make something clear. In the context of NLP/NLU, it can refer to ambiguities of various kinds:

  • Query-based ambiguities: the system cannot clearly determine the intent from a user query, or is confused between several closely related intents.
  • Word-sense ambiguities: the same word can mean different things in different contexts, and the system may not identify the right sense.


For the purposes of this discussion, we will limit ourselves to query-based ambiguities and how to handle them.

In an ideal scenario, the training phrases for different intents would have no commonalities, but that is seldom the case. In practice, the same user query may end up resolving to two different intents with similar levels of confidence.

Another scenario is where the query resolves to a single intent with high confidence, but there is ambiguity in recognizing the entities. This can happen especially when a number of custom entities are involved in the training setup.
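To make the first scenario concrete, here is what an ambiguous NLU response might look like. The structure, intent names, and scores below are illustrative assumptions; real NLU engines each return their own response format.

```python
# Hypothetical NLU response for the query "cancel my order", where two
# intents score almost equally. Field names and values are illustrative.
ambiguous_response = {
    "query": "cancel my order",
    "intents": [
        {"name": "cancel_order", "confidence": 0.62},
        {"name": "order_status", "confidence": 0.58},
    ],
}
```

With scores this close, picking the top match outright risks answering the wrong question, which is exactly the situation disambiguation is meant to handle.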

How to determine ambiguity?

The primary method of detecting ambiguity is to study the response of the NLU layer, especially the confidence scores when multiple matches are detected. (Note: different NLU systems behave differently in how they compute confidence scores and structure their responses.)

If the successive matches have similar confidence scores, we can conclude that the resolution is ambiguous and we should disambiguate further (we will come to how in a later section).

"Similar" confidence scores is, of course, subjective. A useful approach is to take a percentage threshold as a guideline; the right value may vary by system and by training dataset. At Engati, we use a default of 15%, which can be customized at the individual bot level.
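The threshold check above can be sketched in a few lines. This is a minimal illustration, not Engati's actual implementation; the intent-list structure is the same assumed shape as in the sample response earlier.

```python
def is_ambiguous(intents, threshold=0.15):
    """Return True when the top two intent matches are within
    `threshold` of each other, relative to the top score.

    `intents` is a list of {"name": ..., "confidence": ...} dicts,
    sorted by confidence in descending order. The 15% default mirrors
    the guideline mentioned above.
    """
    if len(intents) < 2:
        return False
    top = intents[0]["confidence"]
    runner_up = intents[1]["confidence"]
    return (top - runner_up) / top <= threshold


# 0.62 vs 0.58 differ by ~6.5% of the top score -> ambiguous
matches = [
    {"name": "cancel_order", "confidence": 0.62},
    {"name": "order_status", "confidence": 0.58},
]
print(is_ambiguous(matches))  # True
```

A relative gap (dividing by the top score) behaves more consistently than an absolute one when overall confidence levels differ between queries; either convention works as long as the threshold is tuned for it.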

How to disambiguate?

Now that we have established that there is an ambiguity, the next obvious question is how to disambiguate and let chatbot users get to what they intended to ask as quickly as possible.

The recommended approach is to present the ambiguous options to the end-user so that they can pick the most relevant one and proceed.
There should be a leading question before the options, such as "Did you mean?" or "Pick the option closest to your query"; it can also be worded more informally.

Coming back to the options: their text should be picked from the trained utterances or variations. One challenge is that some training utterances are not fully formed sentences and can look odd when presented to the user, so a well-formed one needs to be picked.

Also, when entities are used in a training utterance, they should be replaced with meaningful representative samples instead of variable names. A simple rule can pick one of the custom entity values, or sample values can be pre-defined for the system entity types.

A "none of the above" option can also be appended at the end, to provide a clear path in case the resolution is incorrect.
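The steps above — a leading question, one option per candidate intent built from a representative utterance, entity placeholders swapped for sample values, and a trailing "none of the above" — can be sketched as follows. The data structures (`candidates`, `entity_samples`, the `@placeholder` convention) are assumptions for this sketch, not a real library API.

```python
def build_disambiguation_prompt(candidates, entity_samples=None):
    """Build a leading question plus one option per candidate intent.

    `candidates` maps an intent name to its best (most well-formed)
    training utterance; `entity_samples` maps entity placeholders
    (e.g. "@order_id") to a representative sample value.
    """
    entity_samples = entity_samples or {}
    options = []
    for intent, utterance in candidates.items():
        text = utterance
        # Replace entity placeholders with meaningful sample values
        for placeholder, sample in entity_samples.items():
            text = text.replace(placeholder, sample)
        options.append((intent, text))
    # Always give the user an escape hatch
    options.append(("none", "None of the above"))
    return "Did you mean?", options


question, options = build_disambiguation_prompt(
    {"cancel_order": "Cancel my order @order_id",
     "order_status": "Where is my order @order_id?"},
    {"@order_id": "#1234"},
)
```

Selecting the most well-formed utterance per intent (here assumed to be pre-selected in `candidates`) is its own ranking problem, as the next section notes.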

How does Engati handle disambiguation? 

When you use Engati, you can enable the auto-disambiguation feature by turning on the "Show related matches" option in the configuration. All the steps outlined above are taken care of by the system automatically, including detecting the ambiguity and handling the nuances of presenting options.

There is also an inbuilt well-formedness check that ranks the training utterances: the most well-formed sentence from the intent's training set is presented as an option. This step also accounts for the entities involved and picks a comparable training utterance.

Given Engati's wide reach across supported channels, there are restrictions on how options can be shown. For example, the Messenger and WhatsApp channels only allow 20 characters of option text. This is again handled automatically, by presenting the full text in the message itself and labelling the options with sequential numbers corresponding to the items.

What happens next?

The final step is the most important and completes the disambiguation cycle: track which option the user selects from the disambiguation choices.
The captured information should be used to re-train the system and close the feedback loop. Done right, this results in less ambiguity and significantly better responses over time.

Let us know your thoughts on the above, and what your strategies are for disambiguating chatbot queries.


Kinshuk Kar

Kinshuk Kar is the Senior Director of Product Management at Engati, a platform to help leapfrog your customer engagement story with leading-edge technology.
He's passionate about all things tech and its potential to revolutionize how we live.
