The reason teams pick us
Why our bot doesn't make things up.
Every answer points to a real page in your docs. Every claim is checked before it ships. And when the answer isn't in your content, the bot says so instead of guessing.
Every answer comes with a receipt.
When the bot replies to a customer, every factual claim links back to a real page in your docs: not a "we drew on training data" hand-wave, but the literal page that contains the answer.
In the widget, each cited source shows up as a small cyan chip the reader can click. In your QA dashboard, every conversation is auditable: hover over any sentence to see which page in your help center it came from. Your support team can stand behind every word the bot said, and your QA team can spot-check the conversations that actually matter instead of reading every transcript.
This is the single biggest difference between FlowChat and every other chatbot. Most bots are trained to sound confident. Ours is built to be checkable.
We double-check before the customer reads it.
Even with citations on every claim, models occasionally cite a page that doesn't actually support the claim: the right neighbourhood, but not the right answer. That's how "confidently wrong" creeps in, even in bots that have been told to cite sources.
So after the bot writes its reply, we run a second pass: each cited claim is checked against the page it cites. If the page doesn’t actually back up the claim, the claim gets removed from the answer before your customer ever sees it. We’d rather your bot say "here’s what I do know" than ship a sentence that doesn’t hold up.
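FlowChat's actual verifier isn't shown here, but the shape of the second pass is simple. A minimal sketch, where `supports` is a stand-in for a real entailment check (in practice something like an NLI model; the toy version below just does substring matching):

```python
def filter_ungrounded(claims, pages, supports):
    """Keep only claims whose cited page actually backs them up.

    `claims` is a list of (claim_text, cited_page_id) pairs,
    `pages` maps page ids to their text, and `supports` is any
    entailment check returning True/False. Claims that fail the
    check are dropped before the reply ships.
    """
    kept = []
    for claim, page_id in claims:
        if supports(pages[page_id], claim):
            kept.append((claim, page_id))
    return kept

# Toy check: treat "supports" as plain substring containment.
pages = {"billing": "Refunds are issued within 14 days."}
claims = [("Refunds are issued within 14 days.", "billing"),
          ("Refunds are instant.", "billing")]
kept = filter_ungrounded(claims, pages, lambda page, c: c in page)
# Only the first claim survives the second pass.
```

The point of the design is that the check runs after generation but before delivery, so an unsupported sentence is removed rather than corrected on the fly.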
Refusal is a feature, not a fallback.
Every other chatbot product fights its model when the docs don’t cover a question: prompt engineering, escalation flows, system prompts begging the model to say "I don’t know" — and most of the time the model says something anyway, because that’s what models do.
We took the opposite stance. When the bot can’t find a confident answer in your docs, it says so — in plain English, with a list of the closest related pages it did find. Refusal is rendered in a distinct refusal-purple, so it’s visually different from a "the bot is broken" error. Your customer learns something they couldn’t learn from a fluent guess: this answer isn’t here, look in those three places.
71% of visitors who hit the refusal screen click one of the "closest related" links and find their answer in your docs. They don’t open a ticket. The "0 hallucinations escaped to users this week" line on our homepage is earned by this gate, every day.
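The gate behind that refusal screen fits in a few lines. A sketch under stated assumptions: the function names, the 0.5 threshold, and the stub retriever, reranker, and generator below are all illustrative, not FlowChat's real API.

```python
def answer_or_refuse(question, retrieve, rerank, generate, threshold=0.5):
    """Refuse instead of guessing when retrieval confidence is low.

    Rerank the retrieved pages and keep the top 3; if even the
    best score is under `threshold`, generation never runs, and
    the reply is a refusal carrying the closest pages as
    suggested reading.
    """
    candidates = retrieve(question)
    top3 = sorted(rerank(question, candidates), key=lambda s: -s[1])[:3]
    if not top3 or top3[0][1] < threshold:
        return {"refused": True, "closest_pages": [p for p, _ in top3]}
    return {"refused": False,
            "answer": generate(question, [p for p, _ in top3])}

# Out-of-corpus question: every page scores low, so the bot refuses
# and hands back the closest pages instead of a fluent guess.
gate = answer_or_refuse(
    "what's your refund policy for tax filings?",
    retrieve=lambda q: ["billing", "setup"],
    rerank=lambda q, pages: [(p, 0.2) for p in pages],
    generate=lambda q, pages: "(never called)")
```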
- Find sources
- Write answer
- Check it's grounded
- Send the reply
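The four steps above read as one short loop. A minimal sketch, where each argument is a stand-in for a real pipeline stage (in production these are models and services, not lambdas):

```python
def chat_reply(question, find_sources, write_answer, check_grounded):
    """Find sources, write the answer, check it's grounded, send it.

    If the grounding check strips every claim, the reply falls
    back to a refusal rather than shipping an unsupported answer.
    """
    sources = find_sources(question)           # 1. Find sources
    draft = write_answer(question, sources)    # 2. Write answer
    grounded = check_grounded(draft, sources)  # 3. Check it's grounded
    if not grounded:                           # 4. Send the reply,
        return "That's not in the docs."       #    or refuse honestly
    return grounded

reply = chat_reply(
    "how are answers checked?",
    find_sources=lambda q: ["qa-page"],
    write_answer=lambda q, s: "Every claim is double-checked.",
    check_grounded=lambda draft, s: draft if s else "")
```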
Top 3 pages from your docs
Every factual claim in the bot's reply links back to a real page on your docs — your support team can stand behind every word…
After the bot writes its reply, each cited claim is checked against the page it cites — if the page doesn't actually back it up, the claim is removed before your customer sees it.
When the reranker's top-3 scores fall below the threshold, the generation phase doesn't run at all…
What the bot replies
Three pre-canned examples. Try a question that’s in our docs ("how do you handle hallucinations?"), one that’s borderline ("does FlowChat support multilingual answers?"), and one that’s out-of-corpus ("what’s your refund policy for tax filings dated before 2024?"). Watch how the bot handles each.
Numbers we pin our reputation to.
- 98% of answers ship with a clickable source citation
- 71% click-through on the refusal screen’s "closest related" links
- 0 made-up answers escaped to users in the last 90 days