Case Study · SaaS / Outbound-Led Growth
The Emails Were Fine. The Research Was the Problem.
A SaaS founder was writing clean, well-structured cold emails with decent subject lines. Reply rate was stuck at 1.1%. The fix wasn't better copy; it was understanding the prospect before writing a single word.
The Situation
A solo founder was doing all of their own outbound. They had a targeted list of ~400 prospects: the right job titles, right company sizes, right industries. They'd invested time in the copy. Subject lines were tested. The emails weren't spammy. They followed all the advice.
Reply rate: 1.1%.
The replies that did come back were mostly objections: "not the right time," "we already have something," "send me more info" with no follow-up intent. Positive replies were rare enough that they felt like luck rather than a system working.
The standard advice is to fix the copy. Test different subject lines. Try a new angle. So that's what they did for two months. The needle barely moved.
The Real Problem
The copy wasn't the issue. The issue was that every email started from the same place: a contact record with a name, job title, and company.
That's not enough context to write a relevant email. It's enough to write a personalized-looking email, one that uses the right variables, references the right pain points, sounds like it was written for this person. But it wasn't, really. It was written for a persona. The prospect could tell.
Cold email works when the email reflects something real about the person's current situation: something happening at their company right now that makes your outreach timely. A recent hire. A funding round. A job listing that signals a specific pain. A product launch that creates a new problem.
None of that was in the contact record. And nobody was going to research 400 prospects by hand.
What Changed With SendState
Research before writing
When prospects were imported into SendState, the research agent ran on each contact before a single email was generated. It scanned the prospect's website, open job listings, recent news mentions, and company activity, building a brief that included a fit score, intent score, and a set of "why now" signals specific to that person.
For a prospect at a SaaS company posting three engineering roles, the research surfaced a growth signal. For a company that had just changed their pricing page, it flagged a revenue pressure indicator. For a founder who'd posted on LinkedIn about scaling their sales team, it noted the timing.
Each contact got three personalized openers generated from those signals, specific enough that they referenced real, current things about the prospect's business. Not "I noticed you're in the SaaS space." More like "You're scaling the engineering team while the sales motion is still founder-led; that's a specific timing problem."
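To make the idea concrete, here is a minimal sketch of how signal-based intent scoring of this kind could work. All names, weights, and the scoring rule are illustrative assumptions, not SendState's actual implementation; it only shows the shape of turning "why now" signals into a score.

```python
# Hypothetical sketch: names and weights are illustrative assumptions,
# not SendState's actual API or scoring model.
from dataclasses import dataclass, field

# Example weights for the kinds of "why now" signals described above.
SIGNAL_WEIGHTS = {
    "open_engineering_roles": 0.4,   # growth signal (e.g. three open roles)
    "pricing_page_changed": 0.3,     # revenue pressure indicator
    "founder_scaling_post": 0.3,     # timing signal from a public post
}

@dataclass
class ResearchBrief:
    contact: str
    signals: list[str] = field(default_factory=list)

    @property
    def intent_score(self) -> float:
        # Sum the weights of whatever signals were found, capped at 1.0.
        return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in self.signals))

brief = ResearchBrief(
    contact="prospect@example.com",
    signals=["open_engineering_roles", "founder_scaling_post"],
)
print(round(brief.intent_score, 2))  # 0.7
```

The point of the sketch is the ordering: the brief (signals plus scores) exists before any email copy is generated, so the openers can reference specific, current facts rather than persona-level pain points.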
Campaign Advisor detecting what wasn't working
Once the campaign launched, the Campaign Advisor tracked reply patterns across three opening variants. Within 38 sends, it detected that the "time saving" angle was generating a disproportionate share of objections. It blocked that angle automatically and shifted volume to the two variants that were performing.
By step 2, the CTA that had been prompting the most "not right now" responses was softened automatically, changing from a direct meeting request to a lower-friction question.
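The detection logic described above can be sketched as a simple rule: once a variant has enough sends to judge, compare its objection share against the others. The function name, thresholds, and data shape below are all assumptions for illustration, not SendState's actual mechanism.

```python
# Hypothetical sketch; thresholds and names are illustrative, not SendState's.

def flag_underperforming_variants(stats, min_sends=30, objection_threshold=0.5):
    """Return variants whose replies are disproportionately objections.

    stats maps variant name -> (sends, replies, objections).
    A variant is only judged once it has accumulated min_sends sends.
    """
    blocked = []
    for variant, (sends, replies, objections) in stats.items():
        if sends < min_sends or replies == 0:
            continue  # not enough data to judge this variant yet
        if objections / replies > objection_threshold:
            blocked.append(variant)
    return blocked

# Mirrors the scenario above: after ~38 sends per variant, the
# "time saving" angle is generating mostly objections.
stats = {
    "time_saving":  (38, 6, 5),   # 5 of 6 replies were objections
    "timing_angle": (38, 7, 2),
    "growth_angle": (38, 5, 1),
}
print(flag_underperforming_variants(stats))  # ['time_saving']
```

In practice the blocked variant's volume would be redistributed to the remaining variants, which is the "shifted volume" behavior described above.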
The compounding effect
Better research meant better open rates and higher initial engagement. The Campaign Advisor's real-time adjustments meant the sequence kept improving rather than degrading over time. Reply rate moved from 1.1% to 5.4% in the first campaign cycle, and to 6.2% once angle optimization had fully calibrated.
The Results
- Reply rate: 1.1% → 6.2% over two campaign cycles
- Positive reply share increased: objections dropped from ~60% of replies to ~22%
- The founder stopped manually rewriting follow-ups entirely; the Campaign Advisor handled adjustments
- Time spent on outbound dropped while results improved; research automation removed the most labour-intensive part of the process
Why It Worked
The emails improved because the information behind them improved. Research that previously would have taken 20-30 minutes per prospect (if it happened at all) now ran automatically on every contact before outreach started.
The Campaign Advisor compounded that by making sure the angles that worked got more volume and the ones that didn't got cut, while the campaign was still running. No post-mortem analysis. No relaunching from scratch.
The two things together (better input before sending, smarter adaptation during sending) are what moved the number. Copy was never the bottleneck.
"I'd been testing subject lines for two months. The actual problem was that I didn't know anything real about the people I was emailing. The research changed that immediately."
SaaS founder, B2B outbound
