
Why we changed our mind on autonomous AI support

Two days ago we published an essay arguing autonomous AI support was a coin flip with infinite downside. Today we're flipping the position. Here's the honest math behind the pivot.

Julien Romanetto · April 29, 2026 · 6 min read

The post we just deleted.

On April 27 we published "Why our AI never sends on its own." The argument was clean: out of every 100 tickets, two will fail in a way that destroys the customer relationship, and one bad reply that signs your name costs more than 98 perfect ones save. Therefore, drafts only. Human approves. No auto-send, ever.

We took it down today. Here's why.

What was right about that post.

For early-2024-era models, the argument was correct. We tested vendor X and vendor Y in production for six months and watched them confidently invent order numbers, apply outdated policies, and apologize for things that didn't happen. The customer-experience cost was real. The "save 30 seconds, lose a customer" framing was real. Drafts-with-human-validation was the right defensive bet.

The reasoning was: if catastrophic-failure rate × customer-LTV exceeds time-saved × hourly-cost, autonomous AI is a net loss. We ran the math at 0.05% catastrophic failure and showed the line.
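That inequality is easy to sketch. The numbers below are illustrative, not the figures from the original post (which it didn't publish), but they show how the drafts-only conclusion falls out at a 0.05% catastrophic-failure rate:

```python
def net_value_per_ticket(fail_rate, ltv, minutes_saved, hourly_cost):
    """Expected value of auto-sending one ticket: the agent time saved,
    minus the expected cost of losing the customer on a catastrophic reply."""
    savings = (minutes_saved / 60) * hourly_cost
    expected_loss = fail_rate * ltv
    return savings - expected_loss

# Hypothetical inputs: $5,000 customer LTV, 30 seconds saved per
# ticket (the "save 30 seconds" framing), $25/hr agent cost.
net = net_value_per_ticket(fail_rate=0.0005, ltv=5000,
                           minutes_saved=0.5, hourly_cost=25)
# net is negative: the expected customer-loss cost ($2.50) dwarfs
# the time saved (~$0.21), so autonomous sending is a net loss.
```

With any plausible LTV, the loss term dominates the time term, which is why "drafts only" was the rational call.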

What changed in two days.

Two things. One technical, one strategic.

1. The error rate is no longer the dominant cost. When we re-ran the comparison this week — 1,000 tickets, our newest model class versus a tired night-shift agent on the same dataset — the AI's catastrophic-failure rate came in below the human's. We weren't expecting that. The order-number-confabulation problem is mostly solved at the application layer (we ground the model in actual order data, not just the message text). The wrong-policy problem is mostly solved by the playbook layer. The tone-deaf-in-a-complaint problem is mostly solved by sentiment-aware response selection. None of those existed at the maturity we needed even six months ago.
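The grounding layer is the piece doing the heaviest lifting against confabulated order numbers. A minimal sketch of the idea — the function name and record shape are assumptions for illustration, not our actual pipeline:

```python
def build_grounded_prompt(message, order_lookup):
    """Ground the reply in real order data instead of the model's memory:
    fetch the customer's actual orders and inject them as the only
    source the model is allowed to cite."""
    # order_lookup is a hypothetical callable hitting the order store.
    orders = order_lookup(message["customer_id"])
    context = "\n".join(
        f"Order {o['id']}: status={o['status']}, total=${o['total']}"
        for o in orders
    )
    return (
        "Answer using ONLY the order data below. If the answer is not "
        "in the data, escalate instead of guessing.\n"
        f"--- ORDER DATA ---\n{context}\n"
        f"--- CUSTOMER MESSAGE ---\n{message['text']}"
    )
```

The point is that the model never has to invent an order number: every identifier it can mention is already in its context window, pulled from the system of record.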

2. The opportunity-cost framing was wrong. We were measuring the cost of making a mistake. We weren't measuring the cost of doing nothing interesting. Reply 100% accurately to "where is my order" 500 times a week and you've still produced zero business value. You've defended the relationship; you haven't grown it. The bigger error was answering safely instead of answering profitably.

What that unlocks.

Once you accept that the AI can reliably handle the volume, the question stops being "will it screw up?" and becomes "what should it do besides reply?" That's where the unlock is:

Each of these is something a tired human at midnight would not, and could not, do consistently. The AI does it on every single ticket, in the same breath as the reply.

What about the 0.05%?

Still real. Our answer is no longer "human-in-the-loop on every reply." It's "surgical escalation."

You define the boundaries: amounts above $200, sensitive keywords, premium customer tags, anything ambiguous. Those escalate to a human. Everything else ships, 24/7. The escalation rules are a few minutes of setup, and you adjust them as you see edge cases. The catastrophic-failure surface goes from "every reply" to "only the replies you didn't trust the AI with in the first place" — and that's a tiny fraction of volume.
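Those boundary rules are simple enough to sketch in a few lines. This is an illustrative shape, assuming a dict-like ticket and a confidence score from intent classification — the field names and thresholds are hypothetical, not our shipped config:

```python
# Hypothetical sensitive-keyword list; in practice you'd tune this.
SENSITIVE_KEYWORDS = {"refund", "lawyer", "chargeback", "fraud"}

def should_escalate(ticket):
    """Surgical escalation: auto-send everything except tickets that
    cross a boundary the merchant defined up front."""
    if ticket.get("amount", 0) > 200:                 # amounts above $200
        return True
    text = ticket.get("text", "").lower()
    if any(kw in text for kw in SENSITIVE_KEYWORDS):  # sensitive keywords
        return True
    if "premium" in ticket.get("tags", []):           # premium customer tags
        return True
    if ticket.get("intent_confidence", 1.0) < 0.8:    # anything ambiguous
        return True
    return False
```

Everything that returns `False` ships autonomously; everything else lands in a human queue. Adjusting a threshold is editing one line, which is why the rules stay a few minutes of setup.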

On the math: at a 0.02% catastrophic-failure rate (where we are now), the equation flips. Recovered revenue and cross-sell from autonomous handling outpace the residual customer-loss cost by a wide margin. The drafts-only model leaves all of that on the table.
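Extending the original break-even math with a revenue term shows the flip. Again, the dollar figures are illustrative assumptions, not our production numbers — the point is the structure: the sign change comes from the revenue term, not just the lower error rate:

```python
def net_value_per_ticket(fail_rate, ltv, minutes_saved, hourly_cost,
                         revenue_per_ticket):
    """Old model plus one new term: expected recovered revenue and
    cross-sell per autonomously handled ticket."""
    savings = (minutes_saved / 60) * hourly_cost
    return savings + revenue_per_ticket - fail_rate * ltv

# Hypothetical: $5,000 LTV, 30s saved, $25/hr, $2.00 expected
# cross-sell / recovered revenue per ticket.
old = net_value_per_ticket(0.0005, 5000, 0.5, 25, 2.00)  # still negative
new = net_value_per_ticket(0.0002, 5000, 0.5, 25, 2.00)  # flips positive
```

Note that at the old 0.05% rate the revenue term alone doesn't save you; it's the combination of the lower failure rate and the revenue upside that moves the result above zero.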

What this means for the product.

Three concrete changes shipping over the next two weeks:

What we owe you.

If you signed up in the last six weeks because you read the manifesto's "no auto-send, ever" promise: that line is gone, and we should tell you straight. You can keep drafts-only mode forever. We will never flip the auto-send setting on you. And the data-privacy commitments — no model training on your data, no retention beyond your window, no human access on our side — are not changing. Those were never the part we were uncertain about.

What we were uncertain about was whether autonomous AI was a good product or a marketing fantasy. Two days ago we said fantasy. This week the production numbers said product. We're updating.

If this changes your view of us — for or against — tell me directly.

JR

Julien Romanetto

Founder, SupportPilot AI. Strong opinions, weakly held.
