When you upload a batch of customer feedback to Zointly and see that a feature request has a Demand Score of 8.4, you might wonder: where does that number come from? This post pulls back the curtain on the algorithm — the signals we measure, how we weight them, and why we made those design choices.
Three Signals, One Score
The Demand Score is a composite of three independently calculated signals: Frequency, Urgency, and Sentiment. Each signal is normalized to a 0–10 scale before being combined. The final score is a weighted average, with weights tuned to reflect what actually predicts whether a feature will drive retention.
- Frequency (40%): How many distinct users or data points mention this opportunity? Raw count, de-duplicated by user ID where available.
- Urgency (35%): Does the language signal a blocker, a workaround, or a strong emotional reaction? Phrases like "can't," "broken," "every day," and "losing deals" score high.
- Sentiment (25%): Calibrated negative sentiment — not general negativity, but frustration specifically directed at the missing capability.
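The combination step described above can be sketched in a few lines of Python. This is a minimal illustration, not Zointly's implementation: it assumes each signal has already been normalized to 0–10, the weights mirror the percentages in the list, and the function name and one-decimal rounding are choices made for the example.

```python
# Weights mirror the percentages from the post: 40/35/25.
WEIGHTS = {"frequency": 0.40, "urgency": 0.35, "sentiment": 0.25}

def demand_score(frequency: float, urgency: float, sentiment: float) -> float:
    """Combine three pre-normalized 0-10 signals into one 0-10 Demand Score."""
    signals = {"frequency": frequency, "urgency": urgency, "sentiment": sentiment}
    score = sum(WEIGHTS[name] * value for name, value in signals.items())
    return round(score, 1)

# High frequency, maximal urgency, moderate sentiment:
print(demand_score(8.0, 10.0, 6.0))  # 8.2
```

Because each signal is already on the same 0–10 scale, the weighted average stays on that scale too, so no further rescaling is needed after combining.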
Why Frequency Alone Isn't Enough
A naive implementation would just count mentions. But a feature requested 50 times by curious users who already have a workaround is far less critical than one requested 12 times by users who churn because it doesn't exist. Urgency is the signal that separates genuine pain from casual interest.
"The loudest voices aren't always the most important ones. We built the Demand Score to surface the most critical pain — not just the most common mention."
Our Urgency detection is trained on real-world patterns from SaaS support and feedback data. We look for dependency language ("we rely on this for X"), frequency language ("every single time"), and blocker language ("can't proceed without"). Each pattern class has a calibrated weight.
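The pattern-class idea can be sketched as follows. The regular expressions and weights here are invented for illustration; the actual detector is trained on real feedback data and its calibrated weights aren't published.

```python
import re

# Illustrative pattern classes with made-up weights; the production
# detector uses weights calibrated on SaaS support and feedback data.
URGENCY_PATTERNS = {
    "blocker":    (re.compile(r"can't proceed|blocked|broken|unusable", re.I), 4.0),
    "dependency": (re.compile(r"we rely on|depend on|critical to", re.I), 3.0),
    "frequency":  (re.compile(r"every (single )?(time|day)|constantly", re.I), 2.0),
}

def urgency_score(text: str) -> float:
    """Sum the weights of every pattern class that fires, capped at 10."""
    score = sum(w for pattern, w in URGENCY_PATTERNS.values() if pattern.search(text))
    return min(score, 10.0)

print(urgency_score("We rely on this export every single day and it's broken."))  # 9.0
```

The example sentence fires all three classes (dependency, frequency, and blocker language), which is exactly the kind of feedback the Urgency signal is designed to surface.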
How Sentiment Is Calibrated
General sentiment analysis is prone to false positives in product feedback. A customer saying "it would be great if you added X" is positive overall, but that doesn't mean the need is low urgency. We specifically model frustrated-but-constructive sentiment: the emotional register of a user who wants the product to succeed but is being blocked.
Score Normalization Across Datasets
Scores are normalized within each analysis run, not globally. A score of 8.0 means "this is one of your highest-priority opportunities in this dataset." It's relative, not absolute, which is why comparing scores across two completely different runs requires caution. If you want to track an opportunity over time, use the Insight Alert system to monitor its score trend.
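Per-run normalization can be illustrated with a simple min-max rescale. This is a sketch of the concept, not the production formula, and the convention for a degenerate run where every raw score ties is an assumption made for the example.

```python
def normalize_run(raw_scores: list[float]) -> list[float]:
    """Rescale one analysis run's raw scores to 0-10, relative to that run."""
    lo, hi = min(raw_scores), max(raw_scores)
    if hi == lo:
        return [10.0] * len(raw_scores)  # assumed convention: all-tied run maps to 10
    return [round(10 * (s - lo) / (hi - lo), 1) for s in raw_scores]

print(normalize_run([3.1, 7.4, 5.0]))  # [0.0, 10.0, 4.4]
```

Note how the top raw score always maps to 10.0 within its own run: this is why an 8.0 from one dataset and an 8.0 from another are not directly comparable.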
Pro tip: Filter your dataset to a single user segment (e.g., churned users, enterprise tier) before running an analysis to get a Demand Score that's specifically relevant to that cohort.
What a Score of 9+ Means
Scores above 9.0 are rare by design. When you see one, it means the opportunity scores near-maximum on at least two of the three signals simultaneously. In practice, this usually indicates a genuine product gap that's causing measurable pain across a significant portion of your user base. Treat it as a P0 for your next planning cycle.
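The "near-maximum on at least two of the three signals" heuristic can be expressed as a small check. The 9.0 near-max threshold and the helper name here are assumptions for illustration, not the published rule.

```python
NEAR_MAX = 9.0  # assumed threshold for "near-maximum" on a 0-10 signal

def is_p0_candidate(frequency: float, urgency: float, sentiment: float) -> bool:
    """Flag an opportunity when at least two signals are near their maximum."""
    near_max_count = sum(s >= NEAR_MAX for s in (frequency, urgency, sentiment))
    return near_max_count >= 2

print(is_p0_candidate(9.5, 9.2, 6.0))  # True
```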
We'll continue refining the algorithm as we analyze more data across different product categories. If you see a score that surprises you — high or low — the Evidence tab on every report shows you exactly which customer quotes drove that signal. Use it to sanity-check the model against your own product knowledge.