
A step-by-step guide to building a market breadth dashboard in under 30 minutes—pick a lightweight tooling stack, ingest and schedule breadth data, model core indicators in SQL, design Grafana panels, and wire up alerting/sharing without overengineering.

Market breadth is one of the fastest ways to spot “what’s really happening” under the index—until you’re juggling CSVs, stale screenshots, and indicators that don’t line up by timestamp.
In the next 30 minutes, you’ll stand up a simple, repeatable dashboard that pulls breadth data on a schedule, computes the key indicators correctly, and visualizes them in Grafana with sensible filters. You’ll also add alerts and sharing so your breadth view becomes operational—not just a one-off chart.
You’re building a breadth dashboard, not a data lake. Pick tools that ship in one sitting, then swap pieces later.
A workable stack is: one breadth feed, one place to store it, one dashboard, and one small transform step.
You need breadth inputs that update daily and map cleanly to your watch universe. Start with one primary feed, then add a proxy when coverage is messy.
If you can’t tie breadth to your tradable universe, your signals stay academic.
Your storage choice is really a refresh-frequency decision. If you refresh daily, keep it boring and local.
Postgres fits scheduled refresh and multi-user queries. DuckDB fits single-user, fast local analytics. Sheets fits “good enough” demos and manual updates, like a shared “truth tab.”
Pick a dashboard that matches how you share and how you alert. If it can’t handle your access model, it will stall adoption.
| Tool | Auth | Sharing | Alerts |
|---|---|---|---|
| Grafana | Strong | Team-friendly | Built-in |
| Metabase | Good | Simple links | Basic |
| Looker Studio | Google account | Easy embeds | Limited |
Choose the layer that matches your “who needs access” reality, not your feature wishlist.
You’re stitching sources into a few ratios and rolling sums. Use the smallest tool that can run on a timer.
Python + pandas is fastest when APIs are messy. SQL-only transforms are fastest when data is already tabular. dbt pays off when you expect more models and teammates soon.
You need two things first: a dashboard app and a database that won’t surprise you later. Think “one command up, one query verified”: if this breaks, it breaks loudly.
Get Postgres and Grafana running fast, with data saved to disk.
1. Create a `docker-compose.yml` with `postgres` and `grafana` services.
2. Map `5432:5432` for Postgres and `3000:3000` for Grafana.
3. Mount volumes for `/var/lib/postgresql/data` and `/var/lib/grafana`.
4. Run `docker compose up -d` and confirm both containers are healthy.
5. Open `http://localhost:3000`, then log in with the admin credentials.

If you don’t mount volumes, you’re building a demo, not a dashboard.
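A minimal `docker-compose.yml` sketch, assuming local bind mounts and a password read from `.env`; pin image tags you trust rather than `latest`:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}   # comes from .env
    ports:
      - "5432:5432"
    volumes:
      - ./pgdata:/var/lib/postgresql/data # survives container restarts
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - ./grafana:/var/lib/grafana        # dashboards and users persist here
```

`docker compose up -d` from the same directory brings both up; the bind mounts are what make this a dashboard instead of a demo.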
Decide your runtime settings once, then stop touching them.
- `DB_PASSWORD` for Postgres auth.
- `API_KEY_*` values for your data sources.
- `TZ` for consistent timestamps.
- `REFRESH_INTERVAL` for ingestion cadence.

Keep them all in a `.env` file. The goal is boring repeatability, not clever configuration.
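A `.env` sketch with placeholder values (`API_KEY_BREADTH` is a hypothetical name; match whatever your provider calls its key):

```bash
# .env -- keep this file out of version control
DB_PASSWORD=change-me
API_KEY_BREADTH=your-provider-key
TZ=America/New_York
REFRESH_INTERVAL=24h
```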
Prove the dashboard tool can talk to your database before you build anything.
- Connect Grafana to Postgres using your `.env` values.
- Run `SELECT now();` and confirm a timestamp renders.
- Build a panel on `SELECT 1;` and confirm a single-value panel appears.

If `SELECT 1` fails, every chart you make is pretend.
Create a small structure that keeps “fetch,” “transform,” and “present” from tangling. Use /ingest for API pulls, /models for SQL transforms, /dashboards for exports and JSON, and /scripts for one-off utilities.
When your breadth signals drift, this layout tells you where to look first.
You only need one reliable feed and a repeatable load job. Treat this like plumbing, not research. If the data lands daily, your dashboard stays honest.
Start with daily counts and volumes that let you compute the classic breadth ratios. Pick one venue first, like “NYSE” or “SPX constituents,” and stay consistent.
If you can’t get this stable, everything downstream becomes guesswork.
A practical MVP source is the daily market diary tables for venues like NYSE/NASDAQ that map directly to adv/dec/unch and volume fields.
Write one script that does three jobs: fetch, normalize, append. Keep it boring, and keep it idempotent.
A clean exit code is your first alerting system.
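A sketch of the fetch/normalize/append script, using SQLite as a stand-in for Postgres and a hard-coded row in place of a real API call; the upsert keyed on `(date, venue)` is what makes reruns safe:

```python
import sqlite3

def fetch_rows():
    """Stand-in for fetch + normalize; a real job calls the provider API here."""
    return [("2024-01-02", "NYSE", 1800, 1100, 120, 2_100_000_000, 1_400_000_000)]

def ensure_schema(conn):
    conn.execute(
        """CREATE TABLE IF NOT EXISTS breadth_raw (
               date TEXT, venue TEXT, adv INTEGER, dec INTEGER, unch INTEGER,
               up_vol INTEGER, down_vol INTEGER,
               PRIMARY KEY (date, venue))"""
    )

def load_rows(conn, rows):
    """Append idempotently: rerunning the same session updates, never duplicates."""
    conn.executemany(
        """INSERT INTO breadth_raw (date, venue, adv, dec, unch, up_vol, down_vol)
           VALUES (?, ?, ?, ?, ?, ?, ?)
           ON CONFLICT(date, venue) DO UPDATE SET
               adv = excluded.adv, dec = excluded.dec, unch = excluded.unch,
               up_vol = excluded.up_vol, down_vol = excluded.down_vol""",
        rows,
    )
    conn.commit()

def main():
    conn = sqlite3.connect("breadth.db")
    ensure_schema(conn)
    load_rows(conn, fetch_rows())
    return 0  # any exception exits nonzero: the clean exit code is your alert hook
```

In the scheduled job, `sys.exit(main())` is enough; any failure surfaces as a nonzero exit for the scheduler to catch.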
Define a single raw table that matches the provider fields, not your indicators. Use date and venue as your natural key.
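A sketch of the raw table in Postgres, assuming the provider exposes the adv/dec/unch and volume fields named below (rename to match your feed):

```sql
CREATE TABLE breadth_raw (
    date      date    NOT NULL,
    venue     text    NOT NULL,
    adv       integer,
    dec       integer,
    unch      integer,
    up_vol    bigint,
    down_vol  bigint,
    new_highs integer,
    new_lows  integer,
    PRIMARY KEY (date, venue)
);
CREATE INDEX idx_breadth_venue ON breadth_raw (venue);
CREATE INDEX idx_breadth_date  ON breadth_raw (date);
```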
Primary key: (date, venue). Index venue for fast filtering, and date for range scans. The dashboard will feel instant when your queries stop doing full-table walks.

Schedule the job where you already deploy code, and make failures loud. Daily after the market close is the default.
If ingestion doesn’t page you, your dashboard will quietly lie.
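A crontab sketch for a weekday run after the close, assuming the server clock is set to the exchange timezone; `notify_failure.sh` is a hypothetical script that pings whatever channel actually reaches you:

```cron
# m  h  dom mon dow  command (clock in America/New_York)
30  16  *   *   1-5  cd /opt/breadth && python ingest/fetch_breadth.py || ./scripts/notify_failure.sh
```

The `||` is the whole point: a nonzero exit from the loader triggers the notification instead of failing silently.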
Raw fields are noisy until you turn them into a few boring, repeatable indicators. Model them once, then every chart and alert uses the same math.
Example: store “ad_net” and “ud_vol_ratio” as columns, not ad-hoc query snippets.
Use consistent names and divide-safe math so charts never break on zero days.
| Indicator | Formula | Notes | Output |
|---|---|---|---|
| A-D net | adv - dec | nulls to 0 | ad_net |
| A-D ratio | adv / dec | dec=0 safe | ad_ratio |
| U/D vol ratio | up_vol / down_vol | down=0 safe | ud_vol_ratio |
| NH-NL net | new_highs - new_lows | nulls to 0 | nh_nl_net |
| McClellan inputs | ad_net EMA19/39 | use close dates | mcc_osc_input |
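The table above encoded as a SQL model, with `NULLIF` keeping the zero-denominator days divide-safe (table and column names assume the raw schema described earlier):

```sql
CREATE VIEW breadth_indicators AS
SELECT
    date,
    venue,
    COALESCE(adv, 0) - COALESCE(dec, 0)            AS ad_net,
    adv::numeric    / NULLIF(dec, 0)               AS ad_ratio,       -- NULL, not error, on dec=0
    up_vol::numeric / NULLIF(down_vol, 0)          AS ud_vol_ratio,   -- NULL, not error, on down_vol=0
    COALESCE(new_highs, 0) - COALESCE(new_lows, 0) AS nh_nl_net
FROM breadth_raw;
```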
Once these are columns, alerting becomes a filter, not a spreadsheet ritual.
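For the McClellan inputs row, the EMA19/39 of `ad_net` is easiest in pandas; a sketch (the oscillator itself is EMA19 minus EMA39, computed over close dates only):

```python
import pandas as pd

def mcclellan_oscillator(ad_net: pd.Series) -> pd.Series:
    """EMA19(ad_net) - EMA39(ad_net); index should be trading sessions, not calendar days."""
    ema19 = ad_net.ewm(span=19, adjust=False).mean()
    ema39 = ad_net.ewm(span=39, adjust=False).mean()
    return ema19 - ema39
```

Usage: `daily["mcc_osc"] = mcclellan_oscillator(daily["ad_net"])`, then write the result back as a column so every chart and alert reads the same math.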
Build three small models so each layer has one job and one grain.
If your dates are deterministic, your backtests stop “moving” between runs.
Breadth breaks silently when the grain drifts or a vendor slips a bad row.
Fail fast here, or you will debug charts instead of data.
Breadth is session-based, not clock-based, so your “date” must match the market session. If you ingest UTC timestamps, map them to the exchange timezone, then assign the trading date from your calendar table.
Holidays and half-days should come from a trading_calendar table, not hardcoded logic. That’s how you keep rolling windows from smearing across non-sessions.
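A sketch of the UTC-to-session mapping with the stdlib `zoneinfo`, assuming naive UTC timestamps from the feed; in practice the holiday set comes from your `trading_calendar` table, which also covers half-days:

```python
from datetime import datetime, date
from zoneinfo import ZoneInfo

UTC = ZoneInfo("UTC")
NYSE_TZ = ZoneInfo("America/New_York")

def trading_date(ts_utc, holidays=frozenset()):
    """Map a naive UTC timestamp to its NYSE session date; None outside sessions."""
    local = ts_utc.replace(tzinfo=UTC).astimezone(NYSE_TZ)
    d = local.date()
    if local.weekday() >= 5 or d in holidays:
        return None  # weekend or holiday: no session, so no trading date
    return d
```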
Your dashboard is only useful if it answers breadth questions in one glance. Build a small set of panels that refresh fast and read consistently, like a “traffic report” for participation and risk.
Aim for defaults that work on Mondays at 9:35 and Thursdays at 14:55. If you need to “interpret” every chart, you built research, not monitoring.
Pick panels that map to the questions you ask out loud during a tape read. Keep each panel single-purpose, with one main metric and one clear unit.
If you can’t name the question per panel, cut it.
Write each panel query so Grafana can reuse it across time ranges and filters. Validate on known dates, because “almost right” breadth is just wrong.
If the query can’t survive a spot-check, it won’t survive live trading.
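A panel query sketch for the Postgres datasource; `$__timeFilter()` is a Grafana macro and `$venue` a dashboard variable, so one query serves every time range and venue:

```sql
SELECT
  date AS "time",   -- Grafana expects a time column for time-series panels
  ad_net
FROM breadth_indicators
WHERE venue = '$venue'
  AND $__timeFilter(date)
ORDER BY date;
```

Spot-check it by pinning the time range to a known date and comparing `ad_net` against the raw adv/dec counts for that session.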
Arrange panels in the order you think, not the order you built them. Put trend first, then participation, then risk, so your eyes flow from “where” to “how broad” to “how fragile.”
Use annotations for known shocks, like “CPI 08:30” or “FOMC 14:00.” Keep units consistent, so “+200” never competes with “0.8x.”
Good layout turns breadth into reflex, not analysis.
Variables keep you from copying panels for every venue or universe. Design them so one query supports many views, without rewriting SQL.
When variables work, the dashboard scales with your curiosity.
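The `$venue` variable itself can be a query-type variable backed by one line of SQL, so new venues appear in the dropdown without editing any panel:

```sql
SELECT DISTINCT venue FROM breadth_raw ORDER BY venue;
```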
Alerts turn a dashboard into a habit, not a weekend project. A simple rule like “email me when participation breaks” keeps you honest on busy days.
Pick thresholds that match how you actually trade and review risk. Use plain rules you can explain in one breath.
Write the rule in human terms first, then encode it.
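For example, “email me when decliners outnumber advancers two to one” encodes as an alert query; the 0.5 threshold is a placeholder to tune after a week of observation:

```sql
-- Fires when the latest session's A-D ratio drops below 0.5
SELECT date, venue, ad_ratio
FROM breadth_indicators
WHERE date = (SELECT MAX(date) FROM breadth_indicators)
  AND ad_ratio < 0.5;
```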
Route alerts where you will see them within minutes. Then make the system quiet on purpose during planned noise.
If you don’t test delivery, you don’t have alerts.

Sharing is easy until someone edits panels or sees a datasource they shouldn’t. Decide who can view, who can edit, and who can query raw data.
Give viewers read-only roles, and reserve edit rights for one owner. Share via a link or embed, but restrict datasource permissions. Rotate API keys on a calendar, not after an incident.
When an alert fires, you need a playbook, not a debate. Keep it short and executable.
A two-minute runbook beats a perfect postmortem.
Set a timer and ship a first version. Your goal is a working breadth dashboard, not a perfect one.
If you can’t explain the last line, you built a chart gallery, not a decision tool.
Most breadth dashboards fail for boring reasons: data joins, timezone drift, and charts that lie.
| Pitfall | Symptom | Quick fix | Check |
|---|---|---|---|
| Mixed timezones | Open misaligns | Convert to exchange tz | Compare first bar |
| Bad symbol mapping | Missing tickers | Normalize keys, dedupe | Count before/after |
| Corporate actions ignored | Sudden breadth spikes | Adjust prices, use total return | Spot split dates |
| Partial day data | Breadth collapses midday | Filter by session completeness | Bars per day |
| Wrong y-axis scaling | “Flat” AD line | Use separate axis, normalize | Zero line visible |
If your chart looks “too clean,” you probably filtered away the market.
Once your panels render and your models pass basic quality checks, treat this as a first deploy: lock in the schedule, verify time alignment on a volatile day, and set initial alert thresholds that you can tune after a week of observations. Share a read-only link with your team, attach a short runbook to each alert (what it means, what to check, what to do), and log any false positives as candidates for better filters or data hygiene. From there, your breadth dashboard becomes a living system—easy to maintain, hard to misread, and ready to expand with additional universes or indicators when you need them.
Once your market breadth dashboard is live, the real edge comes from applying it daily to spot leadership and confirm regime shifts before you pick charts.
Open Swing Trading pairs breadth context with daily RS rankings, sector/theme rotation, and watchlist workflows so you can find breakout leaders faster—get 7-day free access with no credit card.