# Pricing & Credits Policy

This page explains how access, features, and usage are billed. Final pricing is not yet formalized; treat this as a living policy that outlines the intended structure for planning and transparency.
## Components of pricing
- Fixed access fee: unlocks the app and base features
- Tiered features: higher tiers unlock advanced capabilities
- Usage-based credits: variable cost tied to AI processing and related usage
## Credits system
Credits represent usage and are consumed per AI prompt based on token counts.
### Benchmark model
- A benchmark model (e.g., GPT-4o) provides the reference cost per input and output token.
- Each supported model has a multiplier relative to the benchmark (provider/model dependent).
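As a minimal sketch of how a multiplier table might look, here is one possible lookup structure. The model names and multiplier values below are purely illustrative assumptions, not actual pricing:

```python
# Hypothetical sketch: per-model multipliers relative to a benchmark model.
# Names and values are illustrative only; real multipliers are provider/model dependent.
BENCHMARK_MODEL = "gpt-4o"

MODEL_MULTIPLIERS = {
    "gpt-4o": 1.0,       # benchmark model: multiplier is 1.0 by definition
    "gpt-4o-mini": 0.1,  # a cheaper model costs a fraction of the benchmark
    "o1-preview": 3.0,   # a pricier model costs a multiple of the benchmark
}

def multiplier_for(model: str) -> float:
    """Look up a model's multiplier, defaulting to the benchmark rate."""
    return MODEL_MULTIPLIERS.get(model, 1.0)
```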
### Credit calculation (conceptual)
- Measure tokens for input and output
- Compute cost at benchmark rates
- Apply model multiplier
- Deduct resulting credits from the user’s balance
```text
input_tokens  = 500
output_tokens = 300
benchmark_rate_in  = X   # credits per input token
benchmark_rate_out = Y   # credits per output token
model_multiplier   = M   # e.g., 0.5 for cheaper, 2.0 for pricier

credits = (500 * X + 300 * Y) * M
```
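The steps above can be sketched as a small function. The rates and multiplier passed in below are assumed example values, since X, Y, and M are not yet finalized:

```python
def compute_credits(
    input_tokens: int,
    output_tokens: int,
    rate_in: float,     # X: credits per input token at benchmark rates
    rate_out: float,    # Y: credits per output token at benchmark rates
    multiplier: float,  # M: model multiplier relative to the benchmark
) -> float:
    """Credits = (cost of input + output at benchmark rates) * model multiplier."""
    benchmark_cost = input_tokens * rate_in + output_tokens * rate_out
    return benchmark_cost * multiplier

# Illustrative values only; real rates are not yet finalized.
# 500 * 0.001 + 300 * 0.002 = 1.1 at benchmark rates; * 2.0 = 2.2 credits.
credits = compute_credits(500, 300, rate_in=0.001, rate_out=0.002, multiplier=2.0)
```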
### Notes
- Rates (X,Y) and multipliers (M) are subject to change as providers update pricing.
- We may normalize small operations to a minimum charge to cover fixed overhead.
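The minimum-charge normalization mentioned above could look like the following sketch; the floor value is a hypothetical placeholder:

```python
MIN_CHARGE = 0.01  # hypothetical minimum charge, in credits; not a finalized value

def normalize_charge(raw_credits: float, min_charge: float = MIN_CHARGE) -> float:
    """Round very small operations up to a minimum charge to cover fixed overhead."""
    return max(raw_credits, min_charge)
```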
## Balances and limits
- Users maintain a credits balance visible in the navbar and account page
- Workflows enforce `aiConfig.max_credits_per_post` to bound per-item spend
- When a balance is low, the UI surfaces warnings; at zero, AI-dependent features stop
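A pre-flight check for these limits might be sketched as follows. The function and exception names are hypothetical; only the `aiConfig.max_credits_per_post` setting comes from the policy above:

```python
class InsufficientCreditsError(Exception):
    """Raised when an AI call would exceed a cap or the remaining balance."""

def check_spend(
    balance: float,
    estimated_credits: float,
    max_credits_per_post: float,
) -> None:
    """Refuse an AI call that would exceed the per-item cap or the user's balance."""
    if estimated_credits > max_credits_per_post:
        raise InsufficientCreditsError(
            "estimate exceeds aiConfig.max_credits_per_post"
        )
    if estimated_credits > balance:
        raise InsufficientCreditsError("estimate exceeds remaining balance")
```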
## Tiers (indicative; subject to change)
- Starter: base scraping, limited AI, lower concurrency
- Pro: expanded AI, higher concurrency, Node Editor access
- Enterprise: custom limits, SSO, priority support
## Refunds and adjustments
- Adjustments may occur when providers retroactively change pricing or if anomalies are detected
- Support can credit accounts for verified issues
## FAQ
- How do I estimate monthly cost? Track average tokens per prompt and multiply by frequency; apply your tier and model multiplier.
- What if I switch models mid-run? Each prompt is charged at the active model’s multiplier at execution time.
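The monthly estimate described in the FAQ can be sketched as below; the rates and multiplier are assumed placeholders, since real values are not yet finalized:

```python
def estimate_monthly_credits(
    avg_input_tokens: float,
    avg_output_tokens: float,
    prompts_per_month: int,
    rate_in: float,     # X: credits per input token at benchmark rates
    rate_out: float,    # Y: credits per output token at benchmark rates
    multiplier: float,  # M: your typical model's multiplier
) -> float:
    """Average per-prompt cost at benchmark rates, scaled by multiplier and volume."""
    per_prompt = (avg_input_tokens * rate_in + avg_output_tokens * rate_out) * multiplier
    return per_prompt * prompts_per_month
```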