Reducing Token Waste in Documentation Workflows
The problem
A consulting agency used LLMs to generate internal architecture documentation from source code. Engineers wrote prompts ad hoc, and the resulting responses were long and verbose, requiring follow-up prompts to refine. The extra round trips increased API usage and slowed documentation generation.
The solution
The agency adopted promptctl to generate structured prompts automatically. Key capabilities:
- Deterministic prompt structuring engine
- Enforced clear sections, constraints, and output formats
- Cost estimation across models before sending requests
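The ideas above can be sketched in a few lines of Python. This is a hypothetical illustration of deterministic prompt structuring and rough cost estimation, not promptctl's actual API; the section names, the 4-characters-per-token heuristic, and the per-1K-token prices are all assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class StructuredPrompt:
    """Assembles prompt sections in a fixed order, so the same inputs
    always produce the same prompt text (deterministic structuring)."""
    context: str
    task: str
    constraints: list = field(default_factory=list)
    output_format: str = "markdown"

    def render(self) -> str:
        parts = [
            f"## Context\n{self.context}",
            f"## Task\n{self.task}",
            "## Constraints\n" + "\n".join(f"- {c}" for c in self.constraints),
            f"## Output format\nRespond in {self.output_format} only.",
        ]
        return "\n\n".join(parts)

# Illustrative per-1K-token prices; real prices vary by provider and model.
PRICES_PER_1K = {"model-a": 0.01, "model-b": 0.002}

def estimate_cost(prompt_text: str, model: str,
                  expected_output_tokens: int = 500) -> float:
    # Rough heuristic: ~4 characters per token for English text.
    input_tokens = len(prompt_text) / 4
    return (input_tokens + expected_output_tokens) / 1000 * PRICES_PER_1K[model]

prompt = StructuredPrompt(
    context="Module: payment gateway service",
    task="Summarize the module's public interfaces.",
    constraints=["Max 300 words", "No code snippets"],
)
text = prompt.render()
print(text)
print(f"Estimated cost on model-a: ${estimate_cost(text, 'model-a'):.4f}")
```

Comparing the estimate across models before sending a request lets engineers route routine documentation jobs to a cheaper model up front, rather than discovering the cost after the fact.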
The result
- 63% cost reduction
- $950 monthly savings
- 2.0x faster documentation generation
"Structured prompts reduced the number of follow-up calls dramatically."
— Lead engineer, software consultancy