FP&A
Variance Commentary
An FP&A team was writing budget-vs-actual commentary from a blank page every month, for patterns that recurred. The close model is structured data. The output format is defined. The problem fits the tool.
The situation
At month-end close, the FP&A team produced budget-vs-actual variance commentary for every line item that exceeded a threshold: revenue, cost of goods, operating expense, headcount. Two analysts spent a combined nine hours per month writing these narratives from a blank page — pulling numbers from the close model, formulating explanations, formatting for the Finance Director's review.
The Finance Director revised 40% of what came to her. Not because the analysts were wrong — because the commentary was inconsistent. One analyst wrote concisely; the other narrated. The same revenue variance might be described three different ways in three different months. The close cycle was five to seven days. The target was three.
The problem is structurally suited to automation. The input is a spreadsheet with known columns. The threshold logic is deterministic. The output format is defined. The patterns recur monthly. An analyst writing "headcount expense above budget due to the new hire in April" is doing work a machine can do — and doing it nine hours per month.
The approach
The tool — a Python script — reads the close model Excel file, identifies every line item above the variance threshold, pulls the relevant context fields the analyst has staged, and generates a commentary draft for each item via the Claude API. Output lands in a structured review file: each line item gets a draft, the analyst marks it Approved, Edited, or Rewrite, and the Finance Director reviews the approved set.
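The shape of that pipeline can be sketched in a few lines. Everything below is illustrative: the column names, the dataclass, and the review-row layout are assumptions, not the team's actual schema, and the Excel read itself (e.g. via openpyxl or pandas) is omitted.

```python
from dataclasses import dataclass

# Hypothetical close-model row; field names are assumptions, not the team's schema.
@dataclass
class LineItem:
    name: str
    category: str            # e.g. "revenue", "opex", "headcount"
    budget: float
    actual: float
    context_note: str = ""   # analyst-staged explanation, may be empty

    @property
    def variance(self) -> float:
        return self.actual - self.budget

def flag_items(items, threshold):
    """Deterministic pass: keep line items whose absolute variance exceeds the floor."""
    return [it for it in items if abs(it.variance) > threshold]

def make_review_rows(flagged, drafts):
    """Pair each flagged item with its generated draft. Status starts unset;
    the analyst later marks it Approved, Edited, or Rewrite."""
    return [
        {"item": it.name, "variance": it.variance, "draft": d, "status": None}
        for it, d in zip(flagged, drafts)
    ]
```

The point of the structure is that the judgment stays with the analyst: the script decides nothing about materiality beyond the threshold, and every draft passes through a human status field before the Finance Director sees it.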
One adjustment changed the output quality more than any prompt change: configurable thresholds by department type. A $10,000 variance is material in a small cost center and noise in a large revenue line. The Finance Director specified different floors for each category — revenue, operating expense, headcount — and the tool applied them. The result was that the items surfaced were the items worth writing about.
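The per-category floors reduce to a small lookup. The dollar amounts below are placeholders, not the Finance Director's actual values:

```python
# Illustrative per-category variance floors (the dollar values are assumptions).
THRESHOLDS = {
    "revenue": 50_000,
    "opex": 10_000,
    "headcount": 5_000,
}
DEFAULT_THRESHOLD = 10_000

def is_material(category: str, variance: float) -> bool:
    """A variance is flagged only if it clears the floor for its own category."""
    floor = THRESHOLDS.get(category, DEFAULT_THRESHOLD)
    return abs(variance) >= floor
```

With this in place, a $30,000 swing clears the opex floor but not the revenue floor, which is exactly the behavior the Finance Director asked for.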
Context notes mattered here too. When an analyst staged a brief explanation of what drove a variance — "Q2 price increase per contract amendment" — before the generation run, the commentary that came back required revision 6% of the time. Without context notes, the revision rate was 18%. The tool is only as good as the context it receives.
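Folding a staged note into the generation prompt might look like the sketch below. The prompt wording and the model name are assumptions; only the general shape of the Anthropic messages call reflects the real client library.

```python
def build_prompt(name, budget, actual, context_note=""):
    """Assemble a per-item prompt; an analyst-staged note is appended when present."""
    variance = actual - budget
    lines = [
        f"Write one paragraph of budget-vs-actual commentary for '{name}'.",
        f"Budget: {budget:,.0f}. Actual: {actual:,.0f}. Variance: {variance:+,.0f}.",
    ]
    if context_note:
        lines.append(f"Analyst context: {context_note}")
    return "\n".join(lines)

def generate_draft(prompt):
    """Send one prompt to the Claude API (requires the `anthropic` package and an API key)."""
    import anthropic
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-5",  # model choice is an assumption
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```

The structural point is the `if context_note` branch: the model only sees an explanation when the analyst has actually staged one, which is why the revision rate diverged so sharply between the two cases.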
The result
After three live close cycles, analyst time on variance commentary was 1.8 hours per month — down from nine. The Finance Director's revision rate dropped from 40% to 22%. Zero variances were missed across all three cycles. The close cycle reached 2.5 days on average, well below the three-day target.
The analyst who had been most skeptical — who had asked whether the commentary would sound like the work of someone who understood the business — submitted three of the six iteration log improvements during the live phase. Her assessment after three months: the tool was more consistent than what the team had been producing manually, and the Finance Director's review was faster because the baseline was higher.
The Finance Director's sign-off: "It's better on average — it's more consistent, even if some items still need edits." Consistency turned out to be as valuable as speed. When variance commentary reads the same way every month, the Finance Director spends less time on formatting and more time on the variances that matter.