Finding the right answer

Finding the right answer, even when the customer phrases it differently.

Most chatbots are good at one type of search and bad at the other. We do both — at the same time — so paraphrases, SKUs, error codes, and casual questions all land on the right page in your docs.

Customers never use the words your docs use.

Your docs page is titled "Ending your subscription." Your customer types "how do I cancel?" A naive chatbot misses the page entirely — the words don’t match. Your customer hears "I’m sorry, I don’t have information about that."

Or the reverse: a customer types "what does error A4B22 mean?" and a meaning-based chatbot treats A4B22 as random noise, returns three pages of generic prose about errors, and your customer never finds the page that’s literally titled "Error A4B22 — payment declined."

We run both kinds of search at the same time — one that catches paraphrases, one that catches exact words — and combine them into one ranked list. Whichever way the customer phrases the question, the right page surfaces first.
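One common way to merge two ranked lists like this is reciprocal rank fusion: each page earns a score from its rank in each list, so pages that both searches find accumulate more than pages found by only one. A minimal sketch, with illustrative page titles and the conventional k=60 constant — not our production scoring:

```python
def fuse(keyword_ranking, semantic_ranking, k=60):
    """Merge two ranked lists of page titles into one ranking.

    Each page scores 1 / (k + rank) per list it appears in, so a
    page found by both lists outranks one found by only one.
    """
    scores = {}
    for ranking in (keyword_ranking, semantic_ranking):
        for rank, page in enumerate(ranking, start=1):
            scores[page] = scores.get(page, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative rankings from the two searches.
keyword = ["error-a4b22", "billing-errors"]
semantic = ["payment-declined", "error-a4b22", "refunds"]
print(fuse(keyword, semantic)[0])  # "error-a4b22" — found by both lists
```

Because the fused score is a sum over both lists, the page both searches agree on wins even though neither ranked it uniquely first.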

Then we re-rank with the question in mind.

First-pass search returns the top 50 candidates in milliseconds. Then a second, slower pass re-reads each candidate alongside the customer’s question and decides which three are actually the best fit. The bot writes its answer from those three.
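The shape of that second pass is simple: score every (question, candidate) pair, sort, keep three. A sketch with a toy word-overlap scorer standing in for the model that re-reads each pair in production — the scorer is an assumption for illustration, not the real thing:

```python
def score_pair(question, passage):
    """Toy relevance score: fraction of question words in the passage.
    A stand-in for a slower model that re-reads the full pair."""
    q = set(question.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def rerank(question, candidates, top_n=3):
    """Second pass: re-score all first-pass candidates against the
    question and keep only the best top_n to answer from."""
    return sorted(candidates,
                  key=lambda c: score_pair(question, c),
                  reverse=True)[:top_n]

question = "how do I cancel my subscription"
candidates = ["ending your plan and how to cancel",
              "annual billing terms",
              "refund policy overview"]
print(rerank(question, candidates, top_n=1)[0])
```

The expensive part in practice is that the real scorer reads the full text of all 50 candidates per question — which is exactly why it pays off in accuracy.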

This second pass is where most chatbot platforms cut corners — it’s slow and expensive. Skipping it costs about 30% of your possible answer accuracy. We don’t skip it.

When the search finds nothing, we say so.

If the second-pass scores are all low, that’s the bot’s cue: your docs don’t cover this question. Instead of writing a confident-but-wrong answer, the bot refuses — politely — and shows the closest related pages it did find. Your customer gets a real next step, not a fluent guess.
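In code, the refusal is just a threshold check on the reranked scores before any answer is generated. A minimal sketch — the 0.5 cutoff and the field names are illustrative assumptions:

```python
def answer_or_refuse(scored_pages, threshold=0.5):
    """scored_pages: list of (page_title, rerank_score), best first.

    If even the best candidate falls below the threshold, refuse and
    surface the closest pages instead of generating an answer.
    (threshold=0.5 is an illustrative value, not a tuned one.)
    """
    if not scored_pages or scored_pages[0][1] < threshold:
        return {"answered": False,
                "message": "I couldn't find this in the docs.",
                "related_pages": [title for title, _ in scored_pages[:3]]}
    return {"answered": True,
            "sources": [t for t, s in scored_pages[:3] if s >= threshold]}

# Low scores across the board -> polite refusal with next steps.
print(answer_or_refuse([("Shipping rates", 0.21), ("Returns", 0.18)]))
```

The key design choice is that the refusal path still returns the nearest pages, so the customer gets a lead to follow rather than a dead end.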

See it on three different question styles.

Three preset questions, three different search behaviours. Each shows what the two search columns find on their own and what the final ranking picks — with the actually-correct page marked.

Slide right to favour pages that both columns found. Slide left to give more weight to whichever column ranks the page highest. The default works for most sites — this is here so you can see how the bot decides which page wins.

Exact word match
Wins when the question quotes a SKU, error code, or specific term.
  1. Annual billing terms
  2. Account deletion

Matches by meaning
Wins when the customer paraphrases.
  1. Ending your plan
  2. Refund policy
  3. Switching plans
  4. Annual billing terms
  5. Account deletion

Final answer pages
The 3 pages the bot actually answers from.
  1. Ending your plan
  2. Refund policy
  3. Switching plans
What this query teaches

The customer asked about ‘cancel subscription’. Your docs page is titled ‘ending your plan’. The exact-word search misses it entirely — no matching words. The meaning-match column finds it. The final answer column picks it as the top result. This is the most common chatbot failure mode — customers don’t use the same words your docs do.

Move the slider to see how the two columns combine. The default is where most production sites land — you can change it any time from your admin if your traffic looks different.