Backtesting and optimization are essential parts of algo trading.
This list collects resources on the process and on how to do it the right way.
Q: How many trades should I have in my backtest? #
The simplest rule of thumb is: the more trades, the better. 100 is OK, but you ideally want more (200+).
On M15 that will likely span close to a year of data. It's also important to avoid over-optimization, so a walk-forward process with in-sample vs. out-of-sample testing matters. The common recommendation is to use 60-70% of the data for optimization and the rest for validation; that way you can see whether your settings were over-optimized or whether they still hold up after optimization.
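As a minimal sketch of the split described above (in Python, with a made-up bar series standing in for real candles; nothing here is a real tester):

```python
# A minimal sketch of the 60-70% in-sample / out-of-sample split:
# optimize on the first chunk of data, validate on the rest.
def split_in_out_of_sample(bars, in_sample_frac=0.7):
    """Split chronologically ordered market data into an optimization
    part and a validation part."""
    cut = int(len(bars) * in_sample_frac)
    return bars[:cut], bars[cut:]

bars = list(range(1000))  # stand-in for 1000 M15 candles
in_sample, out_of_sample = split_in_out_of_sample(bars)
# Optimize parameters on `in_sample` only, then backtest the chosen settings
# once on `out_of_sample` to check they were not over-optimized.
```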
Q: How do you know when the market changes and you need to reoptimize or stop with a certain pair? #
In other words: when do we re-optimize, or stop working a symbol or pair altogether?
You can use one of two processes.
- You only act once the strategy breaks certain risk parameters.
  - e.g. the backtest max DD was 10% at 1% risk per trade, and you run it live at 1% risk per trade. If the live max DD breaks 12-15%, you drop the strategy and re-optimize.
- You run a constant optimization on a fixed schedule (multi-stage walk-forward).
  - e.g. you do a multi-stage walk-forward optimization: you optimize on the past 2 years of data and run the resulting settings for 6 months. So every 6 months you take the last 2 years of data and perform your optimization again.
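The first process can be sketched in Python. The 10% backtest max DD and the 12-15% kill zone come from the example above; everything else (function names, the equity numbers, the tolerance factor) is made up for illustration:

```python
# Sketch of process 1: flag a strategy for re-optimization once the live
# drawdown breaches a tolerance band above the backtested max DD.
def max_drawdown(equity_curve):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, max_dd = equity_curve[0], 0.0
    for value in equity_curve:
        peak = max(peak, value)
        max_dd = max(max_dd, (peak - value) / peak)
    return max_dd

def should_reoptimize(live_equity, backtest_max_dd=0.10, tolerance=1.2):
    """True once live max DD exceeds the backtest max DD by the tolerance
    factor (here 10% * 1.2 = 12%, the low end of the 12-15% kill zone)."""
    return max_drawdown(live_equity) > backtest_max_dd * tolerance

equity = [10_000, 10_400, 9_900, 9_100, 9_300]  # fabricated live equity curve
print(should_reoptimize(equity))  # 12.5% live DD > 12% threshold -> True
```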
Free Video Series on “Algorithmic Backtesting & Optimization” #
Martyn Tinsley has made a lot of tutorials in which he breaks these concepts down into really simple, visual explanations. This video and his “Algorithmic Backtesting & Optimization” playlist should give you more than enough material for 3 months of study into these more advanced topics, as well as some basic ones.
Key Takeaways from the “Algorithmic Backtesting & Optimization” playlist #
Written by member Mirza
I will try to sum up the most important points about algo trading and the optimization process covered in this educational series from the Darwinex YouTube channel:
- Optimization results can be very tricky: due to randomness and overfitting to specific conditions, the optimization report will very often present “best” parameter inputs that are in fact fitted to market noise.
- More sample data means more accurate optimization results (a 10-year period is optimal).
Increase the uncorrelated sample size to avoid overfitting by:
- Increasing the duration of the backtest/optimization (10 years etc.)
- Running multi-symbol & multi-timeframe optimizations (MT5 only)
The goal, if possible, is to have the same input parameters for all instruments.
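One way to push an optimizer toward shared parameters is to score each parameter set by its *worst* result across all symbols, so settings tuned to a single pair lose to settings that generalize. A toy sketch (the tester here is a deterministic stand-in, not a real backtest):

```python
# Toy stand-in for a real tester: deterministic fake profit per symbol.
def backtest(symbol, params):
    return params["period"] * {"EURUSD": 1.0, "GBPUSD": 0.8, "USDJPY": 0.5}[symbol]

def multi_symbol_score(symbols, params):
    """Score a parameter set by its worst result across all symbols,
    favoring inputs that work everywhere over inputs tuned to one pair."""
    return min(backtest(symbol, params) for symbol in symbols)

symbols = ["EURUSD", "GBPUSD", "USDJPY"]
candidates = [{"period": p} for p in (10, 20, 50)]
best = max(candidates, key=lambda params: multi_symbol_score(symbols, params))
```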
- Improve results by completely excluding from backtesting (and trading) the periods when price was impacted by news (high-impact news only; this could also be coded into the tester itself using a news API).
- Eliminate trades that drastically deviate from your average from the report (if possible, code this into the tester so those trades are filtered out of the optimization results report in the first place).
- Never optimize too many parameters (up to 3 per optimization run), to avoid results that are adapted and overfitted to noise and stochastic movements, which are not predictable models a trend-based strategy can rely on in the future.
- Example of a correct optimization process:
- Run an In-Sample optimization over a 5-year period (e.g. 2012.01.01 – 2017.01.01)
- Use these parameters to run an Out-of-Sample backtest on the next 1.25-year period (2017.01.01 – 2018.03.03)
- The In-Sample and Out-of-Sample periods have a 4:1 ratio (5 years to 1.25 years), which means that in every stage 80% of the data is In-Sample and 20% is Out-of-Sample. After that, we repeat the process (stages), shifting the whole optimization window forward by the length of the Out-of-Sample period. Repeat until your last optimization period ends at the end of your tick data (preferably today); that last optimization gives you the input parameters to use on your live trading account.
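One of the takeaways above is to eliminate trades that drastically deviate from the average. A minimal Python sketch of such a filter, using trade profits from a report (the 2-standard-deviation threshold is illustrative, not a recommendation):

```python
from statistics import mean, stdev

def drop_outlier_trades(profits, n_std=2.0):
    """Keep only trades whose profit lies within n_std standard
    deviations of the average trade profit."""
    if len(profits) < 2:
        return profits
    mu, sigma = mean(profits), stdev(profits)
    return [p for p in profits if abs(p - mu) <= n_std * sigma]

# One fabricated lucky trade (+500) dominates an otherwise modest report:
cleaned = drop_outlier_trades([10, -5, 12, 8, -4, 500], n_std=2.0)
```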
The real measure of your robot's efficiency and optimization quality is shown by the chart created by merging all your Out-of-Sample results into one report.
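The staged process above can be sketched as a window generator. The dates and lengths follow the example (456 days approximates 1.25 years, 1825 days approximates 5 years); the function names and the 2024 end date are hypothetical:

```python
from datetime import date, timedelta

def walk_forward_windows(start, end, in_sample_days=1825, out_sample_days=456):
    """Yield (is_start, is_end, oos_end) stages, each shifted forward by
    the Out-of-Sample length, until the data runs out."""
    stage_start = start
    while True:
        is_end = stage_start + timedelta(days=in_sample_days)
        oos_end = is_end + timedelta(days=out_sample_days)
        if oos_end > end:
            break
        yield stage_start, is_end, oos_end
        stage_start += timedelta(days=out_sample_days)  # shift by OOS length

for is_start, is_end, oos_end in walk_forward_windows(date(2012, 1, 1), date(2024, 1, 1)):
    # Optimize on [is_start, is_end), backtest the winner on [is_end, oos_end),
    # and append that Out-of-Sample equity segment to one merged report.
    print(is_start, is_end, oos_end)
```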