Why the term matters
"AI-native" is a marketing label, and like every marketing label it gets stretched. Every helpdesk vendor in 2026 says they are AI-native because every helpdesk vendor in 2026 has shipped some kind of AI feature. The label has lost meaning at the surface. What still has meaning is the underlying architectural and commercial choice the vendor made.
I want to make the distinction concrete because, when we built KimonDesk, the choice we made was a commercial one as much as a technical one. AI features are core in our product not because we are clever, but because we made an early decision that you should not have to pay extra for the part of the product that does the heaviest lifting on a busy week.
This post walks through what the term actually means, the five things AI does in customer support today, and the buyer's checklist for telling apart genuine AI-native vendors from AI-enabled vendors that have learned the right marketing copy.
The architecture distinction
The honest version of "AI-native" has a specific technical signature. AI inference happens at the same layer as the rest of the product, not in a separate service that the rest of the product has to call out to. Drafts appear inline in the agent's reply box, not in a side panel that requires a click. Auto-resolution runs as part of the inbound ticket pipeline, not as a separately licensed automation. The model can be swapped (Claude or GPT, perhaps a fine-tuned in-house model later) without touching the agent UI, because the UI was designed for AI from the start.
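The "swap the model without touching the UI" point comes down to a provider seam in the code. Here is a minimal sketch of that design; the names (`DraftProvider`, `ClaudeProvider`, `render_reply_box`) are illustrative, not any vendor's real API:

```python
from typing import Protocol


class DraftProvider(Protocol):
    """Anything that can turn a ticket plus KB context into a draft reply."""
    def draft_reply(self, ticket_text: str, kb_context: str) -> str: ...


class ClaudeProvider:
    def draft_reply(self, ticket_text: str, kb_context: str) -> str:
        # A real implementation would call the Anthropic API here.
        return f"[claude draft for: {ticket_text[:40]}]"


class GPTProvider:
    def draft_reply(self, ticket_text: str, kb_context: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[gpt draft for: {ticket_text[:40]}]"


def render_reply_box(ticket_text: str, kb_context: str,
                     provider: DraftProvider) -> str:
    # The UI layer only knows the interface. Swapping Claude for GPT
    # (or a fine-tuned in-house model) means swapping the injected
    # provider; the reply box itself never changes.
    return provider.draft_reply(ticket_text, kb_context)
```

In a bolt-on codebase the equivalent call is typically hard-wired to one provider inside a separate AI module, which is why retrofitting model choice later is expensive.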
The bolt-on alternative looks different in the codebase. The original product was built for human-only handling. AI was retrofitted as a separate module, often by acquiring a smaller AI vendor or shipping a partner integration. The agent UI shows AI suggestions in a panel that sits next to the reply box. The auto-resolver lives in a separate workflow module and has to be configured to hand tickets back to the human inbox if it gives up.
Neither approach is technically wrong. Bolt-on AI can be excellent. The reason the distinction matters is that bolt-on AI is almost always priced as a separate SKU, which translates into a separate line item on your renewal quote.
The five things AI does in customer support
To make the architectural distinction tangible, here are the five concrete AI capabilities that matter today, in priority order for an SME team.
1. Reply drafts
Inbound ticket arrives. The system reads the ticket, the customer history, your knowledge base, and produces a draft reply for the agent to edit. Good drafts get accepted with one or two small changes. The lift is roughly 30 to 50 percent off median handle time once agents trust it.
This is the single biggest productivity win and the easiest to evaluate. If a vendor's draft quality is poor on your specific tickets, no amount of marketing copy fixes it. Test it during the trial.
2. Ticket summarisation
Long threads (10+ messages, multiple channel switches, escalations) get a one-paragraph summary at the top. The summary updates as the thread grows. The benefit is for the second agent who picks up the thread on day three. Without summarisation they spend five minutes reading. With it, they spend 30 seconds.
3. Sentiment detection
The system flags tickets where the customer is frustrated, abusive, or at risk of churning. The flag routes the ticket to a team lead or surfaces it on a dashboard. Good sentiment detection catches escalations before they become public Trustpilot reviews. Weak sentiment detection produces a wall of false positives that the team learns to ignore.
4. Auto-resolution
For high-confidence matches against your knowledge base, the system replies to the customer directly. Good ones close 30 to 70 percent of repetitive tickets (refund-status checks, password resets, shipping queries) without an agent ever seeing the ticket. We covered the use cases and the failure modes in How AI Auto-Resolution Works.
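The "high-confidence matches" qualifier is the whole trick: the system only replies directly when the knowledge-base match clears a confidence bar, and everything else falls back to the human inbox. A minimal sketch of that gate, with an assumed threshold (real systems tune this per category):

```python
# Assumed cutoff for illustration; production systems tune this
# per intent category and adjust it as accuracy data comes in.
AUTO_RESOLVE_THRESHOLD = 0.9


def handle_inbound(ticket: dict, kb_match: dict) -> str:
    """Decide whether a ticket is auto-resolved or queued for an agent."""
    if kb_match["confidence"] >= AUTO_RESOLVE_THRESHOLD:
        # High-confidence KB match: reply to the customer directly
        # and close the ticket without an agent seeing it.
        return "auto_resolved"
    # Everything below the bar goes to the human inbox, ideally with
    # the best KB match and a draft reply attached.
    return "queued_for_agent"
```

Where that threshold sits determines whether you land nearer the 30 percent or the 70 percent end of the auto-resolution range, and how many wrong closures you tolerate to get there.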
5. Intent and routing
Inbound ticket gets classified by intent (billing, technical, sales, escalation) and routed to the right agent or queue. The benefit compounds with team size. At 8 agents the time saved is meaningful. At 2 agents it is rounding error.
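Mechanically, routing is the simplest of the five: a classifier produces an intent label and a lookup table maps it to a queue. A sketch, with illustrative intent and queue names:

```python
# Illustrative intent-to-queue mapping; names are assumptions,
# not any product's real configuration.
ROUTES = {
    "billing": "billing_queue",
    "technical": "tech_queue",
    "sales": "sales_queue",
    "escalation": "team_lead_queue",
}


def route(intent: str) -> str:
    # Unknown or low-confidence intents fall back to the general
    # inbox rather than being mis-routed or dropped.
    return ROUTES.get(intent, "general_inbox")
```

The hard part is the classifier, not the table, which is why this capability only pays off once there are enough queues for a wrong route to cost real time.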
The 2026 commercial state of play
Here is what the major vendors charge for these features as of Q2 2026, at list price.
| Vendor | AI add-on cost | Tier gate |
|---|---|---|
| Zendesk | $50/agent/mo (Advanced AI) | Suite Professional or higher |
| Freshdesk | $35/agent/mo (Freddy AI Pro) | Pro or higher |
| Intercom | $0.99 per resolution + base | All tiers, usage-priced |
| HubSpot Service Hub | included in Pro and above | Pro tier ($90/seat/mo) |
| Salesforce Service Cloud | $50/agent/mo (Einstein AI) | Service Pro or higher |
| KimonDesk | included at every tier | from Free upwards |
The Intercom model is interesting because it is the only one that does not look like a flat add-on. It charges per resolved conversation. For a small team, the maths works out to roughly the same as a $30 to $50 per agent flat fee. For a larger team it can run considerably higher.
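To see why usage pricing converges with a flat fee at small scale and diverges at large scale, here is a back-of-envelope comparison. The ticket volumes are assumptions for illustration; the two prices are the $0.99 per resolution and $50 per agent figures from the table above:

```python
PER_RESOLUTION = 0.99   # Intercom-style usage price, from the table
FLAT_PER_AGENT = 50.0   # Zendesk-style flat add-on, from the table


def monthly_ai_cost(agents: int, ai_resolutions_per_agent: int) -> tuple:
    """Return (usage-priced cost, flat-add-on cost) per month."""
    usage = agents * ai_resolutions_per_agent * PER_RESOLUTION
    flat = agents * FLAT_PER_AGENT
    return usage, flat


# Small team, AI closing ~40 tickets per agent per month (assumed):
# usage pricing (~$40/agent) lands in the same band as the flat fee.
small_usage, small_flat = monthly_ai_cost(agents=3, ai_resolutions_per_agent=40)

# Larger, higher-volume team (assumed 120 AI closures per agent):
# usage pricing overtakes the flat fee comfortably.
large_usage, large_flat = monthly_ai_cost(agents=10, ai_resolutions_per_agent=120)
```

The crossover point is simply `FLAT_PER_AGENT / PER_RESOLUTION`, about 50 AI-closed tickets per agent per month at these list prices.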
The KimonDesk row is the architectural commitment. We do not charge for AI separately because we built the product around it. The per-ticket cost to us of running inference is real but small, and we absorbed it into the flat tier price.
The buyer's checklist
If you are evaluating a helpdesk and the vendor calls itself AI-native, six questions cut through the marketing fog.
1. Is AI included in the cheapest paid tier? If yes, the vendor is committed to AI as a default. If AI requires upgrading from $69/agent/mo to $115/agent/mo plus a separate $50/agent/mo add-on, the vendor sees AI as a price-discrimination lever, not a default.
2. Can I swap the underlying model? Vendors that built AI in late often hard-coded one provider (typically OpenAI). Vendors that built AI in early designed model-swap capability from the start. The swap matters for two reasons: regulatory (some industries cannot use US-hosted models) and quality (Claude consistently outperforms GPT-4o on customer-support tone in our own testing).
3. Are AI features wired into the agent's primary workflow? Drafts in the reply box, summary at the top of the thread, sentiment flags inline. Or are they tucked behind a side panel and a click? The wiring tells you whether AI was a foundation choice or a retrofit.
4. What happens when the AI is wrong? Good AI integrations show their reasoning, let the agent edit before sending, and learn from the edit. Bad ones present the AI output as a fait accompli and offer "approve / reject" only. The latter does not improve over time.
5. How is AI usage priced beyond the seat licence? Per resolution? Per draft? Per token? Or simply included? Per-token pricing is honest but unpredictable. Per-resolution pricing is predictable but creates strange incentives (with some vendors, you pay every time the system attempts to close a ticket, whether or not it succeeds). Included is the simplest.
6. Where does the inference happen? EU, US, model provider's region of choice? For data residency this matters. For latency it matters less than vendors imply, given inference times are now sub-second on modern models.
Where AI-native does not yet mean better
I want to be careful not to overstate the case. AI-native is a commercial and architectural commitment, not a quality guarantee. There are bolt-on AI implementations that outperform AI-native ones because the bolt-on vendor invested heavily in model fine-tuning. There are AI-native vendors whose models are mediocre because they prioritised breadth over depth.
The right way to evaluate AI quality is the trial. Pipe a week of your real tickets into the candidate helpdesk, configure the AI features, and watch your team use it. Three things to measure:
- Draft acceptance rate: what fraction of AI drafts does the agent send with zero or minimal edits?
- Auto-resolution accuracy: when the system closes a ticket, what fraction of customers come back with a follow-up indicating the resolution was wrong?
- Sentiment flag precision: what fraction of "high-frustration" flags are actually high-frustration tickets?
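If you log the trial tickets, the three metrics above are a few lines of arithmetic. A sketch, assuming hypothetical field names on each ticket record (`ai_draft`, `draft_edits`, `auto_resolved`, `customer_reopened`, and so on; your candidate helpdesk's export will name these differently):

```python
def trial_metrics(tickets: list) -> dict:
    """Compute the three trial metrics from a week of ticket records."""
    # Draft acceptance: drafts sent with at most one small edit.
    drafts = [t for t in tickets if t.get("ai_draft")]
    accepted = [t for t in drafts if t["draft_edits"] <= 1]

    # Auto-resolution accuracy: auto-closed tickets the customer reopened.
    auto = [t for t in tickets if t.get("auto_resolved")]
    wrong = [t for t in auto if t.get("customer_reopened")]

    # Sentiment precision: flags that were genuinely high-frustration.
    flags = [t for t in tickets if t.get("flagged_high_frustration")]
    true_flags = [t for t in flags if t.get("actually_frustrated")]

    return {
        "draft_acceptance_rate": len(accepted) / len(drafts) if drafts else 0.0,
        "auto_resolution_error_rate": len(wrong) / len(auto) if auto else 0.0,
        "sentiment_flag_precision": len(true_flags) / len(flags) if flags else 0.0,
    }
```

The only judgement call is the "actually frustrated" label on the sentiment flags, which someone on your team has to assign by reading the flagged tickets; the other two come straight from the logs.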
If the numbers come out comparable across two vendors, the AI-native one wins on commercial grounds (no add-on, no tier gate, no surprise renewal increase). If the bolt-on AI is materially better on your tickets, pay the premium and revisit at the next renewal.
What KimonDesk does
For full transparency about our own product:
- Reply drafts, summarisation, sentiment, auto-resolution and intent routing are all included from the Free tier upwards.
- You can switch between Claude and GPT in the model picker. We default to Claude because it produces better support tone in British English.
- AI usage is metered per organisation but not billed per request. The flat tier price covers it.
- Inference happens in the EU by default; you can opt into US inference if you need lower latency for US-hosted customer bases.
- The AI is wired into the agent's primary workflow. Drafts appear inline. Summaries sit above the thread. There is no "AI panel" to click into.
We made these choices because the alternative (charging extra) felt like a regression to the model that the rest of the legacy helpdesk industry built. The whole point of starting fresh was to not repeat the same mistakes that made an 8-agent team's renewal so painful in the first place.
If you want the full pricing breakdown against the legacy vendors, the Zendesk TCO post walks through the numbers. The comparison hub gives the side-by-side capability table.
The closing word
AI-native is not magic. It is a deliberate commercial and architectural choice that the vendor either made or did not make. When you read the term in marketing copy, ask the six questions above. The answers will tell you whether you are looking at a vendor who built for 2026 or a vendor who is hoping you will not notice the add-on line item on next year's renewal quote.
For our own pricing, the main pricing page shows everything that is included at every tier, with no hidden footnotes.
References
- Zendesk Advanced AI pricing page, Q2 2026.
- Freshworks Freddy AI pricing page, Q2 2026.
- Intercom AI pricing page (per resolution model), Q2 2026.
- HubSpot Service Hub pricing page, Q2 2026.
- Salesforce Service Cloud Einstein pricing page, Q2 2026.