For VP Customer Experience & Compliance

The AI customer agent your General Counsel will actually sign off on.

Every answer is traceable to a specific policy document. When the answer isn't in your approved content, the bot says so instead of guessing. SOC 2 Type II, a full audit trail for every conversation, and behaviour controlled by your knowledge governance team. Take up to 40% of contact volume out without a board-level brand incident. From $2,500/mo.

If any of this is on your desk this quarter

The stakes nobody else seems to take seriously.

Your last AI pilot tanked CSAT or invented policy.

You ran the bake-off. It deflected on paper. Then CSAT on AI-only conversations came in below human, or QA caught the bot quoting a discount you don't offer, or — worst case — General Counsel forwarded a screenshot. The pilot ended quietly. The board still wants the automation number.

Your General Counsel can stop a deployment with one email.

Anything that could 'invent' an answer is a regulatory liability. Anything that can't show its sources is an audit finding waiting to happen. Anything procurement can't trace to a SOC 2 Type II, a signed DPA, and a documented sub-processor list isn't getting through the gate.

You're reporting an automation number to the board next quarter.

A specific number, with a specific methodology, that holds up to a CFO's questioning. Not 'about half', not 'it's helping' — a containment-with-quality figure your internal audit team will sign off on, and a cost-per-resolution trajectory that's defensible six months from now.

What changes structurally

Built for the audit, not just the demo.

Every answer is traceable to a policy document.

Not 'the model probably learned this somewhere.' Every reply links to a specific page in your knowledge governance system — the one your compliance team approved as the policy of record. When audit asks 'why did the bot say this,' you click a link and show them.

The bot refuses when the answer isn't in approved content.

When your published content doesn't cover the question, the bot doesn't reach. It refuses, with the closest related approved sources, and routes the conversation to your human queue. Refusal is a feature your General Counsel will actually thank you for.

Your knowledge governance team controls behaviour.

Per-source permission scopes, content versioning, change approvals, audit trails on who edited what and when. Roll out to one product line, one customer segment, or one ticket category — phased exactly the way your existing policy-rollout process works.

SOC 2 Type II, ISO 27001, and a signed DPA before procurement asks.

SOC 2 Type II report available under NDA, ISO 27001 in progress, executed DPA with SCCs on request, sub-processor list with 30-day notice on changes, and data residency options for EU and AU. The full security pack lives at /security and is procurement-ready.

Numbers that survive a CFO's questioning

Containment, with quality, and zero brand-safety incidents.

  • 38% median contact volume reduction in pilot weeks 4–12, with CSAT held or improved
  • 0 hallucinated policy answers in the last 90 days; every shipped claim was traceable to an approved source
  • 0 compliance findings on the deployed pipeline across pilot tenants since first deployment

Pilot data, methodology documented in our 30-day shadow-mode pilot report. The numbers are conservative on purpose — we’d rather under-promise on the deck than over-promise to the board.

What procurement asks for

The pack your security and legal teams need.

  • SOC 2 Type II report: Available under NDA.
  • ISO 27001: In progress; scope statement available.
  • Signed DPA + SCCs: GDPR Article 28 controller-to-processor terms. DocuSign turnaround under one business day.
  • Sub-processor list: Maintained in the DPA appendix. 30-day advance notice on any change.
  • SAML 2.0 / OIDC + SCIM: Okta, Azure AD, Google Workspace, Auth0, and custom IdPs. SCIM 2.0 for user lifecycle management.
  • Data residency: EU and AU options on Enterprise. Data localization under contract.
  • Audit log: Immutable per-conversation log with up-to-7-year retention. Logpush to your own bucket on request.
  • Model governance: Prompt versioning, refusal-threshold controls, and behaviour A/B testing inside your knowledge governance system.

Request the full security pack at /contact or email enterprise@flowchat.com. Sent under NDA within one business day.

Questions every VP CX is going to ask

Honest answers to the things that killed the last vendor.

What's the worst-case 12-month total cost of ownership?

Per-conversation pricing with a contractual cap is on the table for Enterprise contracts, so your CFO gets no surprises. The base subscription starts at $2,500/mo with usage-priced overage, and we'll write a 12-month TCO ceiling into the MSA. Bring your highest-projection volume; we'll quote it.

How does this not become the next 'Klarna replaced human agents and rolled it back'?

Three structural reasons. First, our deflection metric is paired with CSAT — if the AI conversation has worse CSAT than human baseline, the bot routes faster, not harder. Second, the bot refuses when uncertain instead of guessing — the 'confidently wrong' failure mode that broke Klarna's rollout doesn't fire here. Third, every answer is traceable, so the QA process catches drift in days, not quarters. We're priced to be a layer in front of your humans, not a replacement.

Will it integrate with our Salesforce Service Cloud / Genesys / ServiceNow / Sprinklr?

Yes. Enterprise contracts include native integration with your platform of record (Salesforce Service Cloud, Genesys Cloud CX, ServiceNow Customer Service Management, Sprinklr Service). Conversations sync to existing case objects, agent handoff carries full context, and your CRM remains the system of record. Implementation is two to four weeks with a dedicated engineer, not six months.

What does the bot do on the 200 hardest tickets we have today?

We'll run a paid pilot. You ship 200 representative tickets: your hardest, most-regulated, highest-stakes. We run them through FlowChat in shadow mode against your live knowledge governance system. We deliver a labelled report: which tickets would be answered, which would be refused (and why), which would be wrongly refused, and which would be answered confidently but wrong (and why). 30 days, $5k, refunded against the first year if you sign.

What happens to model risk and the EU AI Act?

We don't fine-tune models on your data; your content is retrieved at inference time, not learned. That keeps you out of the 'high-risk AI system' classification for content-grounded customer service. We provide the model risk documentation your compliance team needs (architecture diagram, data flow, retention policy, retraining policy, incident response), documented against the EU AI Act's Article 50 transparency requirements for AI systems that interact with users.

Can you support our knowledge governance change-control process?

Yes. The bot's behaviour follows your approved content — when your governance team publishes a change, the bot reflects it on the next crawl (configurable from real-time to weekly). You can also pin a specific content version per region or per customer segment, so a policy change rolls out the same way your existing policy changes do — staged, audited, reversible.

A 30-day pilot. Your 200 hardest tickets.

We'll run shadow mode against your hardest, most-regulated, highest-stakes tickets for 30 days. You get a labelled report: which tickets would have been answered, which would have been refused, which would have been wrongly refused, and which would have been confidently wrong. $5k, refunded against the first year if you sign. No procurement pain until you say yes.