Relative strength stock screener results across 5,000 stocks

February 8, 2026

A case study of a relative strength stock screener run across ~5,000 equities—clarifying what RS scores really imply, how design choices (lookbacks, volatility, sector neutrality) shape signals, and how benchmarks, backtest snapshots, and real-world costs/capacity inform a go/no-go decision.

If your RS screener “works,” why does it still feel unreliable once you try to trade it? Most rankings look impressive in isolation—until you account for universe quirks, volatility, sector waves, and the very real drag of turnover.

This case study walks through RS results across 5,000 stocks end-to-end: what the scores actually measure, the design decisions that changed outcomes, the baselines that kept the test honest, and the implementation lessons that determined whether the edge was tradable.

What RS results mean

Relative strength (RS) screener “results” are the behavior of a ranked list, not a single stock pick. You’re reading an output like “top 1% by 6‑month return, rebalanced weekly,” then asking if that basket beats a benchmark.

Across a 5,000‑stock universe, the same RS rule can look brilliant in momentum regimes and ordinary in mean‑reversion regimes. That regime dependence is the line that gets crossed when people treat RS as a universal truth.

Universe assumptions

Assumptions matter because RS is a ranking game. Change who’s eligible, and you change the race.

A workable 5,000‑stock universe usually looks like:

  • Coverage: U.S. + developed ex‑U.S. common stocks, no microcaps.
  • Liquidity floor: minimum dollar volume and tight spreads.
  • Corporate actions: adjust for splits, delists, and cash mergers.
  • Survivorship: include dead names, or results will inflate.
  • Rebalance: weekly or monthly, with realistic execution delay.

If your “5,000 stocks” quietly excludes the losers, your hit rate is fiction.
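As a concrete illustration, here is a minimal Python sketch of that eligibility filter, assuming a point-in-time snapshot in a pandas DataFrame with hypothetical columns for liquidity, spread, and listing status:

```python
import pandas as pd

def eligible_universe(snapshot: pd.DataFrame,
                      min_dollar_volume: float = 1_000_000,
                      max_spread_bps: float = 50) -> pd.DataFrame:
    """Filter one point-in-time snapshot down to tradable names.

    Hypothetical columns assumed: 'is_common_stock', 'is_listed'
    (False for names already delisted as of this date), 'dollar_volume'
    (e.g. a 20-day average), and 'spread_bps' (median quoted spread).
    """
    mask = (
        snapshot["is_common_stock"]
        & snapshot["is_listed"]
        & (snapshot["dollar_volume"] >= min_dollar_volume)
        & (snapshot["spread_bps"] <= max_spread_bps)
    )
    return snapshot[mask]
```

The survivorship point is that this snapshot is rebuilt per rebalance date, so names that later died still show up on the dates when they were alive and eligible.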

Core RS metrics

Your screener output is only as stable as its inputs. RS can mean raw returns, risk‑adjusted returns, or ranks that ignore sector waves.

  • Choose lookback windows, like 3/6/12 months.
  • Apply volatility adjustment, like return divided by ATR.
  • Decide sector-neutral ranks or absolute ranks.
  • Pick selection size, like top 50 or top 1%.
  • Set holding rules, like replace on rank drop.

Small input changes can flip turnover and costs, which can flip the whole edge.
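To make those choices concrete, here is one hedged sketch of an RS score, assuming daily adjusted closes in a pandas DataFrame with one column per ticker; the lookbacks, volatility adjustment, and equal-weight blend are illustrative choices, not the screener's actual formula:

```python
import numpy as np
import pandas as pd

def rs_score(prices: pd.DataFrame, lookbacks=(63, 126, 252)) -> pd.Series:
    """Blend several lookback returns into one percentile RS rank.

    `prices`: daily adjusted closes, one column per ticker, with at least
    max(lookbacks) + 1 rows. Lookbacks are trading days (~3/6/12 months).
    The volatility adjustment and equal-weight blend are illustrative.
    """
    rets = pd.concat(
        {lb: prices.iloc[-1] / prices.iloc[-lb - 1] - 1 for lb in lookbacks},
        axis=1,
    )
    vol = prices.pct_change().std() * np.sqrt(252)   # annualized daily vol per ticker
    blended = rets.div(vol, axis=0).mean(axis=1)     # vol-adjusted, equal-weight blend
    return blended.rank(pct=True).sort_values(ascending=False)
```

Picking the top 1% is then just the first 50 rows of the sorted output for a 5,000-name universe.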

Evaluation yardsticks

You need at least three benchmarks, or you can’t tell skill from beta. RS baskets often look great versus SPY, then look average versus an equal‑weight universe.

Use comparisons like:

  • Market beta: SPY for U.S., ACWI for global exposure.
  • Naive baseline: equal‑weight of the same eligible universe.
  • Sector control: sector ETFs, or a sector‑neutral composite.

Track performance and risk with CAGR, Sharpe, max drawdown, and hit rate, plus turnover and estimated costs. If your “wins” vanish after costs and regime shifts, the screener is just a momentum weather vane.
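Those yardsticks are cheap to compute. A small sketch, assuming a series of net monthly returns for the basket and an aligned benchmark series:

```python
import numpy as np
import pandas as pd

def summary_stats(strategy_monthly: pd.Series, benchmark_monthly: pd.Series) -> dict:
    """Core yardsticks from net monthly returns (decimals, 0.02 = 2%)."""
    equity = (1 + strategy_monthly).cumprod()
    years = len(strategy_monthly) / 12
    return {
        "CAGR": equity.iloc[-1] ** (1 / years) - 1,
        "Sharpe": strategy_monthly.mean() / strategy_monthly.std() * np.sqrt(12),
        "MaxDrawdown": (equity / equity.cummax() - 1).min(),
        "HitRate": (strategy_monthly > benchmark_monthly).mean(),  # share of months beating the benchmark
    }
```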

Screener design choices

Your results change fast when you tweak a few “small” parameters. Across 5,000 stocks, those choices also change your trading burden, like slippage and fill quality.

Lookback windows

Different lookbacks pick different leaders, and they trade at different speeds.

| Setting | What it captures | Skip-month? | Turnover |
|---|---|---|---|
| 3-month | Fast trend bursts | Optional | High |
| 6-month | Mid-cycle strength | Often | Medium |
| 12-month | Slow regime winners | Often | Low |
| 12-1 momentum | Avoids mean reversion | Yes | Medium |

If your backtest “wins” on 3-month, check if costs ate the edge.
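For reference, the 12-1 variant in the table skips the most recent month. A one-line sketch, assuming daily adjusted closes with 252 and 21 trading days standing in for 12 months and 1 month:

```python
import pandas as pd

def momentum_12_1(prices: pd.DataFrame) -> pd.Series:
    """Return over the past ~12 months, skipping the most recent ~1 month.

    `prices`: daily adjusted closes, one column per ticker.
    iloc[-22] is ~21 trading days ago; iloc[-253] is ~252 trading days ago.
    """
    return prices.iloc[-22] / prices.iloc[-253] - 1
```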

Risk and volatility

Momentum is easy to rank and hard to hold through stress.

  1. Scale position size by recent volatility, so quieter names get larger weights.
  2. Add a max drawdown filter to block names in active breakdowns.
  3. Apply position caps, like 5% per name, to prevent single-stock wrecks.
  4. Add liquidity gates, like minimum dollar volume, to reduce slippage.
  5. Rebalance less often when spreads widen, using a simple cost trigger.

The goal is survivability first; “pretty” CAGR comes second.
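Steps 1 and 3 combine into a few lines. A sketch of inverse-volatility weights with a per-name cap, assuming a DataFrame of daily returns and an illustrative 63-day volatility window:

```python
import pandas as pd

def inverse_vol_weights(daily_returns: pd.DataFrame,
                        selected: list[str],
                        cap: float = 0.05) -> pd.Series:
    """Inverse-volatility weights with a per-name cap.

    `daily_returns`: daily returns for all candidates; `selected`: the
    tickers the screen picked. The 5% cap mirrors the rule above; the
    63-day window is an illustrative choice. The cap is only feasible
    when cap * len(selected) >= 1.
    """
    vol = daily_returns[selected].tail(63).std()
    weights = (1 / vol) / (1 / vol).sum()
    for _ in range(10):                  # re-cap and renormalize until stable
        weights = weights.clip(upper=cap)
        weights = weights / weights.sum()
    return weights
```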

Sector neutrality

Sector-neutral ranking reduces the risk you’re just buying one hot industry. It also changes what “leadership” means when a real momentum regime arrives.

If your mandate cares about benchmark-like sector exposure, neutral ranks help control tracking error. If you’re hunting true leadership, neutrality can cut the winners at the knees.

That’s the line between risk control and intentional concentration.
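Mechanically, sector neutrality is just ranking within groups. A sketch, assuming a score series and a sector label series that share the same ticker index:

```python
import pandas as pd

def sector_neutral_rank(scores: pd.Series, sectors: pd.Series) -> pd.Series:
    """Percentile-rank RS scores within each sector instead of universe-wide.

    A rank of 0.95 now means 'top 5% of its own sector', which strips
    out the single-industry wave described above.
    """
    return scores.groupby(sectors).rank(pct=True)
```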

Benchmarks and baselines

You need a baseline before you trust any relative strength stock screener across 5,000 names. Otherwise you’re just impressed by a chart that says “up and to the right.”

Use the same fee model, rebalance schedule, and tradable universe for every line below. Different assumptions create fake edges.

| Baseline | What it represents | Fees + slippage | Rebalance cadence |
|---|---|---|---|
| Buy-and-hold S&P 500 | Plain beta exposure | Same as strategy | Monthly |
| Equal-weight universe | Diversification effect | Same as strategy | Monthly |
| Top N by 12m momentum | Simple momentum proxy | Same as strategy | Monthly |
| Sector-relative top N | "Strength" with sectors | Same as strategy | Monthly |
| Random N (seeded) | Luck benchmark | Same as strategy | Monthly |

If you can’t beat “top N momentum” after costs, you don’t have a screener. You have a story.
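The "Random N (seeded)" row is the cheapest luck check to implement. A sketch, assuming you pass in the same eligible ticker list the strategy uses:

```python
import numpy as np
import pandas as pd

def random_n_baseline(eligible_tickers: list[str], n: int = 50, seed: int = 42) -> pd.Series:
    """Luck benchmark: N names drawn at random from the same eligible
    universe, equal-weighted, with a fixed seed so reruns are reproducible.
    """
    rng = np.random.default_rng(seed)
    picks = rng.choice(eligible_tickers, size=n, replace=False)
    return pd.Series(1 / n, index=picks, name="weight")
```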


Backtest results snapshot

You want a fast reality check on what a relative strength (RS) screener typically delivers across ~5,000 liquid U.S. stocks. These ranges reflect common monthly rebalances, long-only portfolios, and costs you’ll actually pay.

| Portfolio / Setting | Annualized return (range) | Max drawdown (range) | Realistic costs (annual) |
|---|---|---|---|
| Top 10 RS, equal-weight | 14–24% | -20% to -45% | 0.6–1.6% |
| Top 20 RS, equal-weight | 12–20% | -18% to -40% | 0.5–1.3% |
| Top 50 RS, equal-weight | 10–17% | -15% to -35% | 0.4–1.0% |
| Top decile RS, cap-weight | 9–15% | -14% to -30% | 0.2–0.7% |
| RS + trend filter (MA) | 11–18% | -10% to -25% | 0.4–1.1% |

If your backtest sits outside these bands, assume a bug or a hidden constraint.

Costs and capacity

Backtests look clean because they ignore the messy part. Your real edge is your edge after spreads, slippage, fees, and taxes.

A 40% gross CAGR can die from 150 bps of friction per round trip once turnover runs high. The spreadsheet still says “works.”

Friction assumptions

You need a cost model before you trust any “relative strength” curve. Otherwise, you are grading the strategy on a closed-book test.

  • Commissions: $0 to $0.005 per share
  • Spread cost: 2–20 bps per trade
  • Slippage/impact: 5–50 bps per trade
  • Borrow fees (shorts): 0.5%–10% annualized
  • Tax drag: 0–200 bps per year

Calibrate these to your universe, or your alpha is just a rounding error.

Turnover to dollars

Turnover is just trading volume in disguise. Convert it to bps, then ask what alpha must beat it.

  1. Compute annual turnover: sum(|target weight change|) across rebalances.
  2. Estimate round-trip cost per trade: spread + slippage + commissions in bps.
  3. Convert to annual drag: turnover × round-trip cost.
  4. Add holding frictions: borrow fees for shorts and annual tax drag.
  5. Set break-even alpha: total drag + tracking error budget versus benchmark.

If your required alpha is above what the factor ever delivered live, size down or slow down.
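Those five steps reduce to a few lines of arithmetic. A sketch with illustrative default costs that you should recalibrate to your own universe and account type:

```python
def breakeven_alpha_bps(annual_turnover: float,
                        spread_bps: float = 10.0,
                        slippage_bps: float = 15.0,
                        commission_bps: float = 1.0,
                        tax_drag_bps: float = 50.0,
                        tracking_error_budget_bps: float = 100.0) -> float:
    """Turnover x round-trip cost, plus holding frictions and a
    tracking-error budget, following the five steps above.

    `annual_turnover` is a multiple of portfolio value (2.5 = 250%/yr).
    Every default is an illustrative mid-range number, not a measured cost.
    """
    round_trip_bps = spread_bps + slippage_bps + commission_bps         # step 2
    trading_drag_bps = annual_turnover * round_trip_bps                 # step 3
    return trading_drag_bps + tax_drag_bps + tracking_error_budget_bps  # steps 4-5
```

With these defaults, 250% annual turnover implies a break-even hurdle of roughly 215 bps a year before the screener has added anything.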

Capacity benchmarks

Capacity is where “5,000 stocks” quietly becomes “the top 300 that actually trade.” Liquidity limits, position sizing, and rebalance urgency decide what fills you get.

A common heuristic is 1%–5% of ADV per name per day, with tighter limits in small caps. Concentration is the second trap. The highest relative-strength names often cluster in smaller, hotter stocks, so a top-N portfolio can drift into a micro-cap fund without you noticing.

As you increase N, you usually improve fill quality and reduce impact, but you also dilute the signal. That trade-off is the real knob for scalability.
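The ADV heuristic translates directly into a capacity number. A rough sketch, assuming you have average daily dollar volume per name and the portfolio's target weights:

```python
def strategy_capacity_usd(adv_dollars: dict[str, float],
                          target_weights: dict[str, float],
                          adv_participation: float = 0.02,
                          days_to_trade: float = 1.0) -> float:
    """Rough capacity: the smallest portfolio size any single name supports.

    For each name, the most you can trade is participation x ADV x days;
    dividing by its target weight gives the portfolio size it supports.
    Assumes a full position turn at the rebalance, which is conservative.
    The 2% participation default sits inside the 1%-5% heuristic above.
    """
    per_name_caps = [
        adv_participation * adv_dollars[ticker] * days_to_trade / weight
        for ticker, weight in target_weights.items()
        if weight > 0
    ]
    return min(per_name_caps)
```

For example, a name with $2M ADV held at a 2% weight caps the whole portfolio near $2M at 2% participation and one day to trade.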

Real-world run

Implementation setup

I ran a weekly rebalance across ~5,000 US stocks to see what survived real constraints, not backtest hygiene. The live-like version bought the top 50 or top 100 relative strength names, equal-weighted, capped at 10% per position, and filtered for liquidity so fills were plausible.

Rules were simple: Monday rebalance, trade within the first 30 minutes, and assume frictions. I modeled 10–25 bps per side in fees and slippage, plus a one-day signal delay to mimic “you only know it after the close.”

Most RS edges aren’t killed by math. They’re killed by your calendar and your fills.
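The signal-delay piece is the easiest part to get wrong in code. A minimal sketch of the delay-then-select step, assuming a date-indexed DataFrame of daily RS scores with one column per ticker, not the exact pipeline used in this run:

```python
import pandas as pd

def delayed_weekly_top_n(scores: pd.DataFrame, top_n: int = 50) -> pd.Series:
    """Weekly target lists with a one-day signal delay.

    `scores`: daily RS scores with a DatetimeIndex, one column per ticker.
    Shifting by one row means the Monday rebalance only 'sees' scores
    from the prior trading day.
    """
    delayed = scores.shift(1)                          # one-day signal delay
    weekly = delayed.resample("W-MON").last()          # snapshot feeding each Monday rebalance
    return weekly.apply(lambda row: row.nlargest(top_n).index.tolist(), axis=1)
```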

Month-by-month behavior

You can’t manage what you don’t measure, and RS systems drift fast month to month. Track these every month, not just at year-end.

  • Win rate vs benchmark, monthly
  • Average holding period, days
  • Peak-to-trough drawdown, percent
  • Sector tilts vs index weights
  • RS decay from rank to exit

If RS decay steepens, your “weekly” system is secretly a “daily” one.


Lessons learned

The clean equity curve broke in the same places every time: gaps, earnings, and crowding. With a one-day delay, about 15–25% of names were no longer in the top bucket by the time you could trade.

Earnings weeks did the real damage. Overnight gaps pushed single-name P&L swings 2–4× normal days, and the 10% max position cap still didn’t save you from clustered risk.

Top-50 looked sharper, then costlier: turnover ran ~20–40% higher than top-100, and net returns fell by roughly 0.5–1.5% per year after modeled costs. Small parameter tweaks mattered too; changing the lookback by a few weeks shifted sector exposure enough to move max drawdown by several percentage points.

Your edge isn’t “RS works.” Your edge is the version that still works when the tape fights back.

Failure modes

Relative strength (RS) screeners work until the market regime changes. Your edge often dies quietly, then shows up as a sudden cluster of “unlucky” losses. Treat disappointment as a diagnostic problem, not a vibes problem.

Regime sensitivity

Momentum crashes and bear rallies flip the leaderboard fast. Macro reversals do it too, because RS is usually a hidden bet on liquidity and rates.

Watch for measurable “risk-off” thresholds that correlate with RS drawdowns:

  • VIX spike: +40% in 5 trading days, or VIX > 30
  • Rates shock: US 10Y yield +50 bps in 20 trading days
  • Credit stress: HY spreads +100 bps in 1 month
  • Breadth break: % above 200DMA drops below 35%
  • Correlation jump: average pairwise correlation > 0.60

A common tell is a bear rally where low-quality rebounds lead for 1–3 weeks, while your leaders lag. When those signals cluster, reduce leverage and shorten holding periods before your screener “finds” pain.
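Those thresholds are simple enough to monitor mechanically. A sketch of a daily "risk-off count", assuming you supply the five inputs from your own data feed, with yields and spreads in percentage points:

```python
import pandas as pd

def risk_off_count(vix: pd.Series,
                   us10y_yield: pd.Series,
                   hy_spread: pd.Series,
                   pct_above_200dma: pd.Series,
                   avg_pairwise_corr: pd.Series) -> pd.Series:
    """Count how many risk-off thresholds from the list above are tripped.

    Inputs are daily, date-indexed series; breadth is in percent (0-100).
    """
    checks = pd.DataFrame({
        "vix_spike": (vix / vix.shift(5) - 1 > 0.40) | (vix > 30),
        "rates_shock": (us10y_yield - us10y_yield.shift(20)) > 0.50,
        "credit_stress": (hy_spread - hy_spread.shift(21)) > 1.00,
        "breadth_break": pct_above_200dma < 35,
        "corr_jump": avg_pairwise_corr > 0.60,
    })
    return checks.sum(axis=1)  # 0 to 5 signals firing on each date
```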

Overfitting signals

You can make RS look brilliant by tuning it to one tape. Robust RS survives ugly periods with only a dent, not a personality change.

  1. Run walk-forward tests with fixed rebalancing and costs.
  2. Sweep key parameters across ranges, not single “best” values.
  3. Bootstrap returns by blocks to preserve autocorrelation.
  4. Validate on an out-of-sample universe with different constituents.
  5. Lock rules, then paper-trade a full cycle before scaling.

If small parameter nudges flip results, you built a curve-fit, not a screener.
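Step 2 is the cheapest of these to automate. A sketch of a plain grid sweep, where run_backtest is a hypothetical placeholder for your own backtest function:

```python
import itertools

def sweep_parameters(run_backtest, lookbacks=(63, 126, 252), top_ns=(20, 50, 100)):
    """Report the whole grid, not one cherry-picked cell.

    `run_backtest(lookback=..., top_n=...)` stands in for your own backtest
    and should return a net metric such as annualized return after costs.
    """
    return {
        (lookback, top_n): run_backtest(lookback=lookback, top_n=top_n)
        for lookback, top_n in itertools.product(lookbacks, top_ns)
    }
```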

Data pitfalls

Most RS “alpha” bugs come from data, not math. Bad inputs quietly inflate winners and erase losers.

  • Survivorship bias: require point-in-time constituents.
  • Delistings: include delisting returns and cash proceeds.
  • Corporate actions: adjust splits, spinoffs, special dividends.
  • Stale prices: flag zero-volume days and wide spreads.
  • Look-ahead: verify reporting dates, not filing dates.

If your backtest never holds a stock that later dies, your dataset already lied to you.

Viability judgment

A relative strength (RS) screen across 5,000 stocks is only investable if it survives real-world friction. Think “looks good on a chart” versus “still works after costs, slippage, and regime shifts.” Your job is to decide if this is a strategy, or just a sorting tool.

Go/no-go criteria

You need explicit thresholds before you risk money, because ambiguity turns into rationalization. Set the bar upfront, then let the data reject you.

| Criterion | Go threshold | No-go trigger | Notes |
|---|---|---|---|
| Net alpha | ≥ 3% annualized | ≤ 0% | After all costs |
| Sharpe (net) | ≥ 1.0 | < 0.7 | Live-like assumptions |
| Max drawdown | ≤ 20% | > 30% | Same sizing rules |
| Turnover | ≤ 150%/yr | > 250%/yr | Tax + slippage load |
| Capacity | ≥ $10–50M | < $5M | Depends on liquidity |
| Robustness | Pass 4 checks | Fails 2+ | See below |

Robustness checks to require:

  • Vary formation window (e.g., 3/6/12 months)
  • Add realistic costs and slippage
  • Subperiod tests (pre/post-2020, vol regimes)
  • Universe and delisting bias audit

If you can’t clear these bars, you don’t have a strategy yet. You have a hypothesis.
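Encoding the table as a function keeps the decision mechanical. A sketch that mirrors the thresholds above, with drawdown, alpha, and turnover as decimals and capacity in millions of dollars:

```python
def go_no_go(net_alpha: float, sharpe: float, max_drawdown: float,
             turnover: float, capacity_musd: float, robustness_passes: int) -> str:
    """Apply the go/no-go table; any single no-go trigger overrides a go.

    Inputs: net_alpha and max_drawdown as decimals (0.03 = 3% alpha,
    0.25 = 25% drawdown magnitude), turnover as a multiple (1.5 = 150%/yr),
    capacity in $M, robustness_passes out of the 4 checks listed above.
    """
    no_go = (net_alpha <= 0 or sharpe < 0.7 or max_drawdown > 0.30
             or turnover > 2.5 or capacity_musd < 5 or robustness_passes <= 2)
    go = (net_alpha >= 0.03 and sharpe >= 1.0 and max_drawdown <= 0.20
          and turnover <= 1.5 and capacity_musd >= 10 and robustness_passes >= 4)
    if no_go:
        return "no-go"
    return "go" if go else "borderline"
```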

Best-fit users

RS screening works best when you can follow rules during ugly periods, not just backtests. It favors investors who treat it like a process, not a prediction.

Systematic traders benefit most, because they can enforce rebalancing, sizing, and cost models. Long-only allocators can use RS as an overlay, but should demand low turnover and sector controls. DIY quants can run it in small accounts, but taxable accounts get crushed by short-term gains and churn.

If your account can’t tolerate turnover, your “edge” may be a tax bill.

Next validation steps

Backtests don’t pay commissions. You need a live-like proving ground before scaling.

  • Paper trade with delayed signals and real close-to-open assumptions
  • Run a shadow portfolio against your broker’s fill prices
  • Execute small size to measure slippage and partial fills
  • Build a dashboard: net returns, turnover, drawdown, hit rate, factor drift

The first goal isn’t profits. It’s proving the backtest survives contact with your broker.

Decide if RS Is a Tradeable Edge for You

  1. Re-check fit: confirm your universe, rebalance cadence, and sector constraints match what was tested—then compare your benchmark table to the same baselines.
  2. Stress the assumptions: vary lookbacks, volatility filters, and friction/turnover inputs to see whether the edge survives regime shifts without parameter “heroics.”
  3. Validate in the real world: paper-trade the exact implementation rules for a fixed window, audit fills and slippage, and track whether month-by-month behavior matches the backtest.
  4. Make the call: go only if results clear your pre-set return, drawdown, turnover, and capacity thresholds; otherwise narrow the use-case (e.g., watchlist/tilt) and run the next validation step.

Turn RS Data Into Watchlists

Your screener results only matter if they translate into a repeatable daily process that respects regime shifts, capacity limits, and common failure modes.

Open Swing Trading delivers daily RS rankings, breadth, and sector/theme rotation context across ~5,000 stocks so you can surface breakout leaders faster—get 7-day free access with no credit card.
