Stop wasting money
on bad prompts
Structures every prompt for max signal, min waste. Claude, GPT-5, Groq, DeepSeek — save 55–71% on API costs.
brew install --cask oleg-koval/tap/promptctl
Bad prompts are expensive
Unstructured prompts waste tokens on rambling, require follow-ups, and produce output you can't use. That's real money.
More expensive without structure
Vague prompts produce unfocused responses. You end up sending 2-3 follow-ups to get what you actually need. Each call costs tokens. Each rework wastes your time.
avg. $0.048 vs $0.016 per call

Savings on every prompt
Structured prompts with personas, constraints, and output formats get a focused response in one shot. Fewer output tokens (no rambling), fewer calls (no rework), better results.
verified across 10 models

10 models, one command
Switch between Claude Sonnet 4.5, GPT-5.1, GPT-5.2, Llama 4 Maverick, DeepSeek R1 & V3, and more. See cost comparison across all models before you send. Pick the cheapest model that fits your task.
promptctl cost --compare

Zero dependencies
Single binary. No Node.js, no Python, no Docker. Install via Homebrew, run immediately. Works on macOS (Intel + Apple Silicon), Linux, and Windows.
go build = done

From rough idea to optimized prompt in milliseconds
No LLM call needed. The engine is deterministic and rule-based - it applies prompting best practices at the speed of your disk.
You type intent
Natural language, messy, incomplete - exactly how you'd explain it to a colleague. "analyze my idea about X, be critical"
Engine classifies
Detects task type from 11 categories. Assigns expert persona, output structure, and constraints automatically.
Prompt structured
XML tags, decomposed tasks, implied needs, tone-matched constraints. Ready to send or save as a reusable template.
Send or pipe
Send directly to any LLM with cost tracking, pipe to Claude CLI, or copy to clipboard. You choose the workflow.
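The classify step above can be sketched as a keyword-rule pass. This is an invented illustration of the deterministic, rule-based idea, not promptctl's actual source; the task names, personas, and trigger keywords are all hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// rule maps trigger keywords to a task type and an expert persona.
// These rules are illustrative only; the real rule set is richer.
type rule struct {
	keywords []string
	task     string
	persona  string
}

var rules = []rule{
	{[]string{"review", "refactor"}, "code-review", "senior engineer"},
	{[]string{"bug", "error", "crash"}, "debugging", "debugging specialist"},
	{[]string{"architecture", "design", "migration"}, "architecture", "systems architect"},
}

// classify returns the first rule whose keyword appears in the intent,
// falling back to a generic task type. No LLM call is involved, which
// is why this step runs in milliseconds.
func classify(intent string) (task, persona string) {
	lower := strings.ToLower(intent)
	for _, r := range rules {
		for _, kw := range r.keywords {
			if strings.Contains(lower, kw) {
				return r.task, r.persona
			}
		}
	}
	return "general", "helpful expert"
}

func main() {
	task, persona := classify("plan k8s migration, be critical")
	fmt.Println(task, "/", persona) // architecture / systems architect
}
```

Because the pass is pure string matching, the same intent always produces the same classification, so the structured prompt is reproducible across runs.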
Everything you need, nothing you don't
Built for developers who use LLMs daily and care about what they spend.
promptctl create
Raw intent to structured prompt. Auto-detects business analysis, code review, debugging, architecture, and 7 more task types.
Cost estimation
See exactly what a prompt will cost before sending. Compare across all 10 models. Every call shows savings vs unstructured.
Quality score
Run promptctl create "..." --score for a 0–100 score and hints (fidelity, structure, duplicate sections).
Template library
YAML templates with variables, conditionals, auto file reading. Global library + project-level overrides. 5 starters included.
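A template of that shape might look like the following. The field names and conditional syntax here are guesses for illustration; check the promptctl docs for the real schema:

```yaml
# Hypothetical template schema -- field names are illustrative only.
name: code-review
persona: senior Go engineer
variables:
  - file   # file contents read automatically from disk
  - focus  # optional emphasis, e.g. "error handling"
prompt: |
  <task>Review {{ file }}.</task>
  {{ if focus }}<constraints>Limit feedback to {{ focus }}.</constraints>{{ end }}
```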
Interactive setup
4-step wizard: pick provider, pick model, paste API key (browser opens for you), done. Switch models anytime with one command.
Pipe anywhere
stdout output. Pipe to Claude CLI, OpenAI CLI, clipboard, files, or other tools. No lock-in. Fits your existing workflow.
Secure by default
API keys stored locally with 0600 permissions. Supports env vars for CI/CD. Nothing leaves your machine without your consent.
Pipe to any LLM tool or agent
promptctl outputs to stdout. Combine it with any CLI tool, agent framework, or automation pipeline.
Claude CLI
$ promptctl review --file=auth.ts | claude
$ promptctl create "plan k8s migration" | claude
Pipe structured prompts directly into Anthropic's Claude CLI for immediate execution.
OpenAI CLI
$ promptctl create "optimize SQL queries" | openai chat
$ promptctl debug --file=api.ts --error="timeout" | openai chat
Works with any OpenAI-compatible CLI. Prompt stays structured regardless of model.
Direct send with cost tracking
$ promptctl send review --file=main.go --model=gpt-5
# Sending to GPT-5...
# Est. cost: $0.0125 (saves ~$0.025 vs unstructured)
# ... response ...
# Cost: $0.0118 | Tokens: 420 in / 1,240 out | 2.3s
Built-in execution with real-time cost reporting after every call.
Quality score
$ promptctl create "plan k8s migration" --score
# ... structured prompt ...
# Quality score: 87/100
Use --score to get a 0–100 quality score and hints for improving the prompt (fidelity, structure, duplicate sections).
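One of those hints, duplicate sections, can be checked with a single pass over the structured prompt's XML tags. This heuristic is invented for illustration and is not promptctl's scoring code:

```go
package main

import (
	"fmt"
	"regexp"
)

// duplicateSections reports tag names that open more than once.
// A duplicated <constraints> block usually means wasted tokens.
func duplicateSections(prompt string) []string {
	re := regexp.MustCompile(`<([a-z_]+)>`)
	seen := map[string]int{}
	for _, m := range re.FindAllStringSubmatch(prompt, -1) {
		seen[m[1]]++
	}
	var dups []string
	for tag, n := range seen {
		if n > 1 {
			dups = append(dups, tag)
		}
	}
	return dups
}

func main() {
	p := "<task>review</task><constraints>be brief</constraints><constraints>cite lines</constraints>"
	fmt.Println(duplicateSections(p)) // [constraints]
}
```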
Shell automation & CI/CD
$ promptctl review --file=src/*.ts | tee review.md
$ promptctl cp commit --changes="$(git diff --staged)"
$ promptctl cost review --file=main.go --compare
Compose with standard Unix tools. Compare cost across all models with --compare. Automate code reviews in CI, generate commit messages from diffs.
Try it now — no install needed
Type your intent, get a structured prompt back. See what promptctl does before you install.
Know exactly what you spend
Run promptctl cost --compare to see this for your actual prompts.
| Model | Structured cost | Without promptctl | You save |
|---|---|---|---|
| Claude Sonnet 4.5 | $0.0197 | $0.0550 | 64% ($0.035) |
| Claude Haiku 4.5 | $0.0066 | $0.0229 | 71% ($0.016) |
| Claude Opus 4.6 | $0.0328 | $0.0721 | 55% ($0.039) |
| GPT-5.1 | $0.0152 | $0.0533 | 71% ($0.038) |
| GPT-5.2 | $0.0178 | $0.0622 | 71% ($0.044) |
| GPT-5 | $0.0127 | $0.0444 | 71% ($0.032) |
| Llama 4 Maverick (Groq) | $0.0008 | $0.0029 | 71% ($0.002) |
| DeepSeek Chat (latest) | $0.0007 | $0.0023 | 71% ($0.002) |
| Gemini 2.5 Pro | $0.0127 | $0.0353 | 64% ($0.023) |
Based on a ~550 token structured prompt, ~1,200 output tokens. Unstructured baseline varies by model tier (expensive 2.2×, mid 2.8×, cheap 3.5×). Pricing as of 2026-02-13.
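The table rows follow directly from per-token pricing. For example, the Claude Sonnet 4.5 row, using its published $3 / $15 per million input/output token rates and the footnote's assumptions (550 input tokens, 1,200 output tokens, 2.8x mid-tier baseline):

```go
package main

import "fmt"

func main() {
	const (
		inTokens  = 550.0  // structured prompt size (footnote assumption)
		outTokens = 1200.0 // expected output size (footnote assumption)
		inPrice   = 3.0    // $ per million input tokens, Claude Sonnet 4.5
		outPrice  = 15.0   // $ per million output tokens
		baseline  = 2.8    // mid-tier unstructured multiplier (footnote)
	)

	structured := inTokens/1e6*inPrice + outTokens/1e6*outPrice
	unstructured := structured * baseline
	fmt.Printf("structured: $%.4f  unstructured: $%.4f  save %.0f%%\n",
		structured, unstructured, (1-1/baseline)*100)
}
```

Working through it: 550/1e6 x $3 + 1200/1e6 x $15 = $0.00165 + $0.018, i.e. about $0.0197 structured and about $0.0550 at the 2.8x baseline, a 64% saving, matching the Sonnet row above.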
OpenAI (Feb 2026): GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini are retired from ChatGPT; API unchanged for now. For new use we recommend GPT-5.1 or GPT-5.2. Run promptctl models for current pricing.
What that looks like over a year
Same prompt, different usage. Annual savings scale with how often you call.
90% of developers overspend $2,000+/year on unstructured prompts. How much are you wasting?
Start saving in 10 seconds
Single binary. macOS, Linux, Windows. No runtime dependencies.
Download: CLI via Homebrew (recommended) or from GitHub; Mac app (DMG) on the same release page.
Then run promptctl init. Optionally add the aliases prompt and p so you can run prompt create "..." or p list (see the aliases docs).