Can You Trust Backtest Results? 5 Ways They Lie to You

Every Backtest Is a Liar. The Question Is How Much.

I’ve built and tested over 100 trading strategies. The vast majority looked phenomenal in backtests and crumbled in live trading. A strategy showing 500% returns on historical data would deliver 15% — or negative returns — in real time. After years of this gap between backtest and reality, I’ve identified the five main ways backtests deceive you.

Understanding these traps won’t make backtests useless. But it will make you a much more skeptical consumer of trading strategies — including your own.

Lie #1: Overfitting (The Silent Killer)

Overfitting is when your strategy memorizes the past instead of identifying patterns that repeat. With enough parameters, you can “fit” any strategy to perfectly predict historical price action. The result looks incredible on past data and fails completely on new data.

How to Detect Overfitting

  • Parameter count: If your strategy has more than 5-6 adjustable parameters, you’re probably overfitting. Each parameter gives the model more room to bend to historical noise.
  • Sensitivity test: Change each parameter by 10%. If results change dramatically, the strategy is fragile and likely overfit.
  • Out-of-sample testing: Split your data in half. Optimize on the first half. Test on the second half without changes. If performance drops significantly, it’s overfit.
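The out-of-sample check above can be sketched in a few lines of Python. The toy moving-average rule, the price series, and the candidate parameter grid are hypothetical stand-ins for whatever strategy you are actually testing:

```python
def run_strategy(prices, lookback):
    """Toy momentum rule: earn the next bar's return when price > SMA."""
    total = 0.0
    for i in range(lookback, len(prices) - 1):
        sma = sum(prices[i - lookback:i]) / lookback
        if prices[i] > sma:
            total += (prices[i + 1] - prices[i]) / prices[i]
    return total

def out_of_sample_check(prices, candidate_lookbacks):
    """Optimize on the first half, then test the frozen parameter on the second."""
    mid = len(prices) // 2
    train, test = prices[:mid], prices[mid:]
    # Optimization sees ONLY the training half
    best = max(candidate_lookbacks, key=lambda lb: run_strategy(train, lb))
    # The test half is evaluated with no further changes
    return best, run_strategy(train, best), run_strategy(test, best)
```

If the test-half result collapses relative to the training-half result, the parameter was fit to noise, not signal.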

My Rule

I use 6 indicator filters. Each filter is conceptually motivated (trend, momentum, volatility) — not just statistically convenient. If I can’t explain WHY a parameter value should work (not just that it DID work), I don’t use it.

Lie #2: Survivorship Bias

Your backtest runs on today’s data. But some of the tokens you’re testing against didn’t exist 3 years ago, and some that existed 3 years ago are now dead. This creates a bias: you’re only testing against assets that survived, which are inherently the better-performing ones.

Example: backtesting a strategy on “Top 50 altcoins” in 2026 includes Solana, Avalanche, and Arbitrum. But in 2021, the top 50 included Luna, FTT, and LUNC — all of which collapsed. Your backtest never tests against these failures, making results look better than they would have been in real time.

Mitigation

  • Test on assets you’ve traded from the start, not retroactively selected winners
  • If backtesting on multiple assets, include delisted tokens in your test universe
  • Be skeptical of strategies that only work on one specific asset
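One way to enforce the delisted-tokens rule is a point-in-time universe: select test assets by what was tradable on a given date, not by what survives today. Everything below is illustrative; the `listings` table is a hypothetical record of listing and delisting dates:

```python
from datetime import date

# Hypothetical listing history: (symbol, listed, delisted-or-None)
listings = [
    ("BTC",  date(2013, 1, 1), None),
    ("LUNA", date(2019, 7, 1), date(2022, 5, 1)),   # collapsed
    ("FTT",  date(2019, 8, 1), date(2022, 11, 1)),  # collapsed
    ("ARB",  date(2023, 3, 1), None),
]

def universe_on(as_of):
    """Assets actually tradable on `as_of`, including later failures."""
    return [sym for sym, listed, delisted in listings
            if listed <= as_of and (delisted is None or as_of < delisted)]
```

A backtest iterating over 2021 would then be forced to include LUNA and FTT, exactly the failures a today-only universe silently excludes.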

Lie #3: Perfect Execution Assumption

Backtests assume you buy or sell at the exact price shown on the chart at the exact moment the signal fires. In reality:

  • Slippage: Market orders execute at the next available price, which can be significantly different from the signal price during fast moves.
  • Latency: Even automated systems have processing and network delay. In a 4H timeframe strategy, this matters less. In scalping, it’s everything.
  • Liquidity gaps: During crashes, order books thin out. Your “market sell” at $3,000 might fill at $2,900.

Mitigation

Add slippage and commission costs to your backtest. Most platforms (TradingView included) let you set commission rates. I use 0.044% per trade (Bybit taker fee). Also add 0.05-0.1% slippage for conservative estimates.
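If your platform doesn't model costs for you, the adjustment is simple to apply by hand. This sketch uses the 0.044% commission and 0.05% slippage figures above; `effective_price` and `net_trade_return` are illustrative helpers, not any platform's API:

```python
COMMISSION = 0.00044   # 0.044% per trade (Bybit taker fee, per the text)
SLIPPAGE   = 0.0005    # 0.05% adverse fill, the conservative estimate

def effective_price(signal_price, side):
    """Price actually paid/received: buys fill higher, sells fill lower."""
    slip = 1 + SLIPPAGE if side == "buy" else 1 - SLIPPAGE
    return signal_price * slip

def net_trade_return(entry, exit_):
    """Round-trip return after slippage on both fills plus two commissions."""
    buy = effective_price(entry, "buy")
    sell = effective_price(exit_, "sell")
    gross = sell / buy - 1
    return gross - 2 * COMMISSION
```

On a trade from $100 to $110, the raw 10% gain shrinks to roughly 9.8% after both fills and fees. Compounded over hundreds of trades, that gap is the difference between a winning and losing strategy.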

Lie #4: Look-Ahead Bias

This is a technical error where the strategy uses future data that wouldn’t be available at the time of the trade. Common in custom-coded strategies where indicators are calculated across the full dataset before trading logic runs.

Example: A strategy that uses today’s close price to make a decision before today’s candle is finished. The backtest shows you entering at the perfect price because it “knew” where the candle would close.

Mitigation

  • Only use confirmed (closed) candle data for signals
  • Pine Script strategies handle this correctly by default: orders are evaluated on confirmed bar closes, unless you opt into real-time tick calculation (calc_on_every_tick)
  • If coding your own backtest engine, be paranoid about data alignment
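The bug and the fix are easiest to see side by side. A minimal sketch with a hypothetical breakout-above-a-level rule:

```python
def signals_lookahead(closes, level):
    """BUG: the signal for bar i peeks at close[i], which isn't known
    until that bar finishes."""
    return [closes[i] > level for i in range(len(closes))]

def signals_safe(closes, level):
    """Correct: bar i's signal uses the last CONFIRMED close, close[i-1].
    The first bar has no prior candle, so no signal."""
    return [False] + [closes[i - 1] > level for i in range(1, len(closes))]
```

The two versions produce different signals on the same data, and the look-ahead version will always look better, because it is trading on information it could not have had.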

Lie #5: The “Best Period” Selection

Showing a backtest from June 2023 to March 2024 (a strong bull run) will make almost any strategy look good. Honest backtests cover full market cycles — at minimum one bull run AND one bear market, ideally two of each.

My strategy is tested across 5+ years covering the 2021 bull, the 2022 bear (Luna, FTX), the 2023 recovery, and the 2024-2025 cycle. The 29,898% return includes periods where the strategy lost money for months. That context matters.

Red Flags

  • Backtest period shorter than 2 years
  • Only showing results from obviously favorable market conditions
  • No drawdown information provided
  • Cherry-picked start/end dates that maximize returns

How I Validate My Own Strategy

  1. Full-cycle testing: 5+ years across multiple market regimes
  2. Walk-forward analysis: Optimize on 2020-2022 data, validate on 2023-2025 data
  3. Parameter stability: All key parameters can be shifted ±10% without destroying performance
  4. Realistic costs: 0.044% commission built into every trade
  5. Public verification: TradingView backtests are reproducible. Anyone can apply the same script to the same chart and verify the results.
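Step 2 generalizes to a rolling walk-forward loop: optimize on one window, evaluate on the next, then slide forward, so every reported result is out-of-sample. A minimal sketch where `evaluate` is a hypothetical stand-in for a full backtest run:

```python
def walk_forward(data, window, step, params, evaluate):
    """Rolling walk-forward: optimize on `window` bars, test on the
    next `step` bars, then advance by `step` and repeat."""
    results = []
    start = 0
    while start + window + step <= len(data):
        train = data[start:start + window]
        test = data[start + window:start + window + step]
        # Parameter chosen on training data only
        best = max(params, key=lambda p: evaluate(train, p))
        # Recorded performance comes only from unseen data
        results.append((best, evaluate(test, best)))
        start += step
    return results
```

Stitching together the test-window results gives an equity curve where no trade ever benefited from parameters fit to its own period.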

No backtest is a perfect predictor. But a backtest that accounts for these five lies is far more trustworthy than one that doesn’t. The goal isn’t certainty — it’s informed confidence.
