IMO, you need to observe the distribution of results. The average doesn't tell you anywhere near as much as how the results are distributed, skewed, etc.
Also, if you start both tests from the same date on every run-through, there is a fair chance you are introducing start-date bias. For example, the period where you start your test may have had historically good returns, but been poor at producing breakout triggers. The breakout system will always miss this period, and, as a kind of tipping point, that can account for anomalies between your results.
In all my random testing I randomise that start date by up to 3 months to try and avoid this as a factor.
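As a rough sketch of what I mean (Python, with a made-up helper name and a 90-day cap standing in for "up to 3 months"):

```python
import random
from datetime import date, timedelta

def randomised_start(base_start: date, max_offset_days: int = 90) -> date:
    """Shift the backtest start date forward by a random number of days
    (up to ~3 months) so each run begins at a different point,
    diluting any start-date bias."""
    return base_start + timedelta(days=random.randint(0, max_offset_days))

# Each Monte Carlo run gets its own start date:
run_start = randomised_start(date(2000, 1, 1))
```

Both systems then get tested from the same randomised date within each run, so the comparison stays fair while the start point varies across runs.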
In addition, you haven't stated the total number of trades each system is making. Assuming the holding times are comparable, this will tell you the relative exposure each system has to the market. In an upward-trending bull market, like the one we've just had, simply being in the market more often can lead to better returns. For this reason, when I do random testing I also randomise a wait period between trades, to simulate the time a real system would spend waiting for its next signal.
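A minimal sketch of that wait-period idea (the function name, bar counts and default wait/hold values are all illustrative assumptions, not from any real system):

```python
import random

def random_trade_dates(n_trades: int, total_bars: int,
                       max_wait: int = 20, hold: int = 10) -> list[int]:
    """Generate entry bars for a random system, inserting a random
    wait of 1..max_wait bars before each entry to mimic the idle
    time a real system spends waiting for its next signal."""
    entries = []
    bar = 0
    for _ in range(n_trades):
        bar += random.randint(1, max_wait)  # random wait for a 'signal'
        if bar + hold > total_bars:         # not enough data left to hold
            break
        entries.append(bar)
        bar += hold                         # fixed holding period
    return entries
```

The point is that the random system's market exposure then resembles the breakout system's, instead of being in the market almost continuously.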
The difference in those results is not conclusive. I've done a fair bit of testing with random systems and the truth, IMO, is that nothing really is conclusive... but I'd be well wary of drawing conclusions from those results.
You should also consider some MAE/MFE analysis if you want to measure the effectiveness (or prove the ineffectiveness) of an n-period entry.
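For anyone unfamiliar with the terms, MAE/MFE are the Maximum Adverse and Maximum Favourable Excursions of each trade. A bare-bones calculation for a long trade might look like this (hypothetical helper, prices as plain lists):

```python
def mae_mfe(entry_price: float,
            highs: list[float],
            lows: list[float]) -> tuple[float, float]:
    """Maximum Adverse / Favourable Excursion for a long trade,
    expressed as fractions of the entry price over the bars held.
    MAE is the worst open drawdown; MFE the best unrealised gain."""
    mae = min(lows) / entry_price - 1.0
    mfe = max(highs) / entry_price - 1.0
    return mae, mfe

# e.g. entry at 100, held for three bars:
mae_mfe(100.0, highs=[101, 105, 103], lows=[99, 100, 98])  # → (-0.02, 0.05)
```

Comparing the MAE/MFE distributions of the breakout entries against random entries shows whether the entry actually positions trades better, independently of the exit.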
Maybe also consider introducing a random function into the breakout test results to generate even more paths through the data than a simple Monte Carlo will manage.
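One simple way to inject that randomness is to jitter the breakout entry bars by a few days per run, so repeated runs trace different paths through the same data (function name and jitter size are illustrative assumptions):

```python
import random

def jitter_entries(entry_bars: list[int], total_bars: int,
                   max_jitter: int = 3) -> list[int]:
    """Perturb each breakout entry bar by a small random offset,
    clamped to the valid bar range, so every run through the data
    follows a slightly different path."""
    return [min(max(0, b + random.randint(-max_jitter, max_jitter)),
                total_bars - 1)
            for b in entry_bars]
```

Run the test many times over the jittered entries and you get a far richer result distribution than a single fixed pass will manage.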