
A data-driven case study of a market breadth dashboard built on 5,000 stocks over 12 months—scope and indicator design, dashboard/alert architecture, headline participation benchmarks, and practical risk-on/risk-off use cases to judge real trading viability.

If your index is making new highs while fewer stocks are participating, are you seeing strength—or a warning sign? Most traders feel that divergence, but without a systematic way to measure it, breadth becomes a vague narrative instead of a decision tool.
This case study walks you through a 12‑month, 5,000‑stock breadth dashboard: how the universe and pipeline were defined, how signals were ranked and alerted, what the headline results looked like, and how the indicators translated into real risk-on/risk-off actions (including where they failed).
You’re looking at breadth across 5,000 stocks over the last 12 months. The goal is simple: turn “the market feels strong” into numbers you can audit.
The universe targets 5,000 common stocks, built to represent investable U.S. equity breadth, not microcap noise. Think “NYSE/Nasdaq/NYSE American, no ADRs,” with rules that stay stable month to month.
Selection rules:

- Common stocks listed on NYSE, Nasdaq, or NYSE American; no ADRs.
- A size/liquidity floor so the universe represents investable breadth, not microcap noise.
- Membership reviewed on a fixed schedule, not daily, so composition stays stable month to month.
If the list changes daily, your breadth changes for the wrong reason.
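One way to keep membership from churning daily is an add/drop buffer: a name must rank inside the top 5,000 to enter, but an existing member is only dropped once it falls well below that line. A minimal sketch, assuming a hypothetical ranking by dollar volume (the study's exact selection rule isn't specified):

```python
# Sketch of a sticky universe filter. Ranking basis and buffer sizes
# are illustrative assumptions, not the article's actual rules.
def monthly_universe(prev_members, ranked_symbols,
                     add_rank=5000, drop_rank=5500):
    """New names must rank inside add_rank to enter; existing members
    are only dropped once they fall below drop_rank. This hysteresis
    keeps breadth from changing 'for the wrong reason'."""
    rank = {sym: i + 1 for i, sym in enumerate(ranked_symbols)}
    keep = {s for s in prev_members if rank.get(s, 10**9) <= drop_rank}
    add = set(ranked_symbols[:add_rank]) - keep
    return keep | add
```

The buffer between `add_rank` and `drop_rank` is the whole point: a stock hovering near rank 5,000 no longer flickers in and out of the universe every day.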
The pipeline prioritizes repeatability over cleverness, because dashboards fail when inputs drift. Every run follows the same schedule and the same transformations.
Automate the boring parts, or you’ll debug “mystery signals” forever.
These indicators cover participation, trend health, and internal momentum, without overfitting. Each one answers a different version of “is strength broad?”
When price rises but participation falls, that’s the line that gets crossed.
Benchmarks anchor the dashboard so alerts mean something across regimes. For example, a “healthy” tape often holds >55% above the 50DMA, while stress regimes push <40% and stay there.
We set alerts on threshold breaks plus persistence, not single-day prints. Target false alarms stay under one per quarter, or you’ll stop trusting the tool.
Your benchmark isn’t the market’s mood. It’s your dashboard’s error budget.
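As a concrete illustration, the 50DMA benchmark can be computed and bucketed in a few lines. The 55%/40% thresholds come from the text above; the function names and the "neutral" middle band are assumptions:

```python
def pct_above_ma(closes, window=50):
    """Share of symbols whose last close is above their own moving
    average. `closes` maps symbol -> list of daily closes, newest last."""
    above = 0
    for series in closes.values():
        ma = sum(series[-window:]) / min(window, len(series))
        above += series[-1] > ma
    return 100.0 * above / len(closes)

def breadth_regime(pct_above_50dma, healthy=55.0, stress=40.0):
    """Map % above the 50DMA to a coarse regime label, using the
    benchmark bands described above (>55% healthy, <40% stress)."""
    if pct_above_50dma > healthy:
        return "healthy"
    if pct_above_50dma < stress:
        return "stress"
    return "neutral"
```

A single print of `breadth_regime(...)` is not a trade signal; the alerting layer below adds the persistence requirement.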
Your dashboard should answer viability at a glance: whether trends are intact, participation is broad, and risk is contained. Treat it like a cockpit, not a scrapbook; if a chart can’t change a decision, delete it. Example rule: if you can’t point to a panel and say “buy, hold, hedge,” it’s noise.
You need three panels because they answer three different questions, fast: direction, confirmation, and damage. Redundancy creeps in when you plot the same story at three timeframes with new colors.
Minimum set that stays non-overlapping:

- Direction: a trend panel (e.g., % of stocks above the 200DMA).
- Confirmation: a participation panel (advance-decline, % above the 50DMA).
- Damage: a risk panel (new lows and breadth drawdown).
If two charts move together for weeks, keep the one that changes actions sooner.
You need a hierarchy so your dashboard doesn’t become a voting system. Primary signals set the regime; secondary signals adjust sizing and patience.
When signals tie, hold the prior regime until a primary breaks for three closes.
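The tie-break rule can be sketched as a tiny state machine. Here `primary_breaks` is a hypothetical boolean series (newest last) marking closes where a primary signal is broken; three consecutive breaks flip the regime, anything less holds the prior one:

```python
def update_regime(prior, primary_breaks, confirm=3):
    """Hold the prior regime until a primary signal has broken for
    `confirm` consecutive closes (the tie-break rule above).
    `primary_breaks` lists recent booleans, newest last."""
    if len(primary_breaks) >= confirm and all(primary_breaks[-confirm:]):
        return "risk_off" if prior == "risk_on" else "risk_on"
    return prior
```

Secondary signals would then adjust sizing and patience inside the current regime, never flip it, which is what keeps the dashboard from becoming a voting system.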
Run it on a clock so “checking breadth” never becomes “doomscrolling breadth.” Each cycle has one output and one automation check.
If the daily run takes over 10 minutes, you’ve built a research platform, not a dashboard.
Alerts should fire on thresholds with persistence, not single prints. Use a simple rule like “cross + 3 closes,” then apply a cooldown so you don’t get whipsawed into reactive trades.
Example: trigger a risk alert when % below 50D exceeds your cap for 3 days, then suppress repeats for 10 trading days unless a worse tier hits. Log every alert with the action taken, even if it’s “no trade,” and review those logs monthly for false positives.
If you can’t explain an alert in one sentence, you can’t trust it under stress.
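The "cross + 3 closes, then cooldown" rule from the example above might look like this (simplified: the "worse tier" escape hatch is omitted, and the 60% cap is an illustrative placeholder):

```python
def alert_days(pct_below_50d, cap=60.0, persist=3, cooldown=10):
    """Fire a risk alert when % of stocks below the 50DMA exceeds `cap`
    for `persist` consecutive days, then suppress repeats for `cooldown`
    trading days. Returns the indices of alert days for logging."""
    alerts, run, quiet_until = [], 0, -1
    for i, pct in enumerate(pct_below_50d):
        run = run + 1 if pct > cap else 0           # count consecutive breaches
        if run >= persist and i > quiet_until:       # persistence + cooldown
            alerts.append(i)
            quiet_until = i + cooldown
    return alerts
```

Because the function returns indices rather than just firing, every alert can be logged with the action taken, which is what makes the monthly false-positive review possible.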
One table, no drama. You want the counts, hit rates, and drawdown context versus the big benchmarks.
| Metric | Universe / Benchmark | 12-month result | Read-through |
|---|---|---|---|
| Breadth thrusts | 5,000 stocks | 7 events | Risk-on pulses |
| 10D adv/dec hit rate | 5,000 stocks | 58% up weeks | Slight edge |
| % above 200D MA | 5,000 stocks | 44% avg | Narrow leadership |
| Max breadth drawdown | 5,000 stocks | -31% | Internal stress |
| Index max drawdown | SPX / NDX / RUT | -12% / -18% / -22% | Pain differed |
When breadth drawdowns outrun index drawdowns, your “index is fine” read is usually late.

You need benchmark ranges to tell “normal digestion” from “silent breakdown.” We used 5,000 stocks over 12 months to quantify how often participation stayed healthy, and how often it sagged.
We tracked the monthly distribution of stocks above their 50DMA and 200DMA to set usable guardrails. Think “how many names are actually trending,” not “what the index printed.”
Across months, the median % above 50DMA lived in a broad, tradable middle band, while the tails flagged regime shifts:
If you’re below 40% for weeks, stock-picking turns into damage control.
Advance-decline compresses the whole market into one number you can sanity-check daily. It’s also where “everything feels heavy” shows up first.
When thrusts vanish, rallies tend to be narrower than they look.
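A cumulative advance-decline line, plus a Zweig-style thrust detector (one common way to define the "breadth thrusts" counted in the results table), can be sketched as below. The 0.40/0.615 thresholds are the classic Zweig defaults, not values calibrated in this study:

```python
def ad_line(advances, declines):
    """Cumulative advance-decline line from daily counts."""
    line, total = [], 0
    for a, d in zip(advances, declines):
        total += a - d
        line.append(total)
    return line

def zweig_thrust(advances, declines, span=10, low=0.40, high=0.615):
    """Flag days where the `span`-day average of advances/(advances+declines)
    moves from below `low` to above `high` within `span` sessions."""
    ratios = [a / (a + d) for a, d in zip(advances, declines)]
    avg = [sum(ratios[max(0, i - span + 1):i + 1]) / (i - max(0, i - span + 1) + 1)
           for i in range(len(ratios))]
    return [i for i, v in enumerate(avg)
            if v > high and any(x < low for x in avg[max(0, i - span):i])]
```

The requirement that the average started from a washed-out level is what separates a genuine thrust from an ordinary strong week.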
New highs minus new lows gives you a clean “risk-on vs risk-off” regime meter. The trick is defining it consistently, then mapping extremes to forward returns.
The useful edge is asymmetry: panic lows tended to mean-revert faster than euphoric highs extended.
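A minimal version of that regime meter: smooth net new highs as a share of the universe and bucket the extremes. The ±1% thresholds here are illustrative assumptions, not the study's calibrated levels:

```python
def nh_nl_regime(new_highs, new_lows, universe=5000, span=10,
                 on=1.0, off=-1.0):
    """Average net new highs (as % of universe) over the last `span`
    days and map it to a regime label. Consistent definitions matter
    more than the exact thresholds."""
    net = [(h - l) / universe * 100 for h, l in zip(new_highs, new_lows)]
    window = net[-span:]
    avg = sum(window) / len(window)
    if avg > on:
        return "risk_on"
    if avg < off:
        return "risk_off"
    return "neutral"
```

Mapping extremes to forward returns (and exploiting the panic-low asymmetry) then becomes a study on this one series rather than a judgment call.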
Breadth diverges from a cap-weighted index when a few mega-caps carry the tape. Those are the periods where your “market read” and your “portfolio reality” split.
We saw three repeatable divergence patterns across the year:
When correlation drops and lag days stack up, treat the index as a headline, not a health check.
You want breadth signals to change positions, not just your mood. Treat them like rules: define triggers, expected hold time, and a whipsaw budget.
Example: “breadth thrust + >60% above 50DMA” becomes a risk dial, not a one-day bet.
Use risk-on triggers when participation is expanding, not just prices. You’re hunting for regimes where breakouts stick for weeks.
Expect ~55–65% win rates, with losers clustering in choppy markets. Your edge comes from avoiding the “one-and-done” thrust day.
Risk-off is about protecting time and capital when participation shrinks. You want a confirmation window, so you don’t sell the first wobble.
You’re not forecasting the top. You’re refusing to fund the drawdown phase.
Map a composite breadth score to exposure bands, so your sizing is boring and repeatable. A simple scheme is 30/60/90% gross exposure tied to weak/neutral/strong participation.
Example: score <40 gets 30%, 40–65 gets 60%, and >65 gets 90%. Higher bands raise turnover, so add a rebalance threshold like “only resize after a 10-point score move.”
Aim your guardrail at process metrics: a max drawdown target like 8–12% per sleeve. If drawdowns exceed that, cut bands before you tweak signals.
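The 30/60/90% scheme with the 10-point rebalance threshold fits in two small functions (band edges taken from the example above; everything else is a sketch):

```python
def target_exposure(score):
    """Map a composite breadth score (0-100) to gross exposure bands,
    per the 30/60/90% scheme: <40 weak, 40-65 neutral, >65 strong."""
    if score < 40:
        return 0.30
    if score <= 65:
        return 0.60
    return 0.90

def maybe_resize(current_exposure, last_score, new_score, move=10):
    """Only resize after the score has moved `move` points since the
    last rebalance, which caps turnover from score chatter."""
    if abs(new_score - last_score) >= move:
        return target_exposure(new_score), new_score
    return current_exposure, last_score
```

The hysteresis in `maybe_resize` is what keeps sizing "boring and repeatable": a score oscillating between 58 and 62 never triggers a trade.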
Breadth fails in predictable ways, especially in crowded or illiquid tape. Call them out early, then bake checks into your dashboard.
When signals “work” only on charts, it’s usually your data or universe definition. Fix that first.
April was a choppy, headline-driven month, with daily swings that felt like “risk-on, risk-off” on repeat. Realized volatility ran ~22% annualized, and your breadth started neutral-weak: 52% above the 200-day, 41% above the 50-day, and an advance/decline line drifting lower. You began at 60% equity exposure, aiming for two metrics: beat the benchmark net of turnover, and keep max drawdown under 3% for the month.
You need fixed checkpoints so your dashboard doesn’t turn into a panic button.
You’re tracking execution, not just returns.

The dashboard worked when signals agreed across timeframes, like new-lows contracting while 200-day breadth held. It failed when you treated fast indicators as trade triggers; the Week 2 “panic prints” faded in three sessions and cost turnover. You changed one rule: require two consecutive closes of new-lows above 2.5% before any hedge adds. Noise dropped, and the next month’s false alerts fell from two to zero while keeping similar drawdown.
You can build a breadth dashboard yourself, or buy one. The real cost sits in data licensing, cleaning, and ongoing reliability.
A practical comparison helps you decide before you commit budget and headcount.
| Approach | One-time build | Monthly run | Ongoing maintenance |
|---|---|---|---|
| DIY + free APIs | 2–6 weeks | $0–$200 | High, brittle |
| DIY + paid market data | 4–10 weeks | $500–$5,000 | Medium-high |
| Cloud quant stack + data | 2–8 weeks | $800–$8,000 | Medium |
| Vendor breadth dashboard | 1–3 days | $300–$3,000 | Low |
| Enterprise data + custom BI | 8–16 weeks | $5,000–$25,000 | Medium-low |
If you can’t support data fixes weekly, pay for the boring option.
Your 12-month, 5,000-stock breadth dashboard is a go for risk-aware decision support, not a standalone alpha engine. It paid when it reduced exposure during deterioration, and when it stopped you buying “index up, internals down” rallies. Treat it like a cockpit warning system, with rules and benchmarks, or you’ll just add noise.
Breadth shines when you need earlier signal than price alone, and when you’re policing false strength. The best moments were the ones where the tape said “fine,” but participation said “fragile.”
- Early risk-off warnings
- Rally confirmation
- Avoiding narrow leadership traps
If breadth moves first, you get time. Time is the edge.
Breadth loses edge when regimes flip faster than your indicators can update, or when the index stops representing the market. Your dashboard becomes descriptive, not predictive.
When the market is discontinuous, breadth is late. Trade smaller or step aside.
You need a decision process, not more charts. The goal is one behavior change you can measure.
If you can’t write the rule, you can’t evaluate the edge.
Go, with constraints: use it if you run tactical allocation, risk overlays, or systematic entries that suffer in “narrow tape” regimes. Don’t use it if you trade catalysts or headline-driven macro, where gaps dominate outcomes.
Minimum data hygiene: stable universe definitions, survivorship-bias controls, corporate-action adjustments, and consistent indicator lookbacks. If you can’t explain one day’s membership changes, you can’t trust the signals.
Adopt only if it beats a simple benchmark, like “price-only regime filter,” by a clear margin: fewer large drawdowns, similar or better returns, and no explosion in turnover. If the improvement is small, keep the dashboard as monitoring, not as a trigger.
A market breadth dashboard is only useful if it consistently translates into clearer regime context and a short list of actionable leaders each day.
Open Swing Trading pairs daily breadth, sector/theme rotation, and volatility-adjusted RS rankings across ~5,000 stocks to speed up discretionary stock selection. Get 7-day free access with no credit card.