IMO, the reason for performing optimisation is to get some sort of feel for what has performed best in the past. That is not to say one can expect the same level of performance going forward using the optimised parameter values, but it is nonetheless a start. There is absolutely nothing in any past data that indicates what future performance is likely to be. So for me, the only kind of "edge" (if you can even call it that) is to trade a system that I know has performed well in the past, rather than one with random parameter settings. In my view, extracting what has worked in the past is pretty much all that is up for grabs when looking at past data, and there is no better alternative.
The key really is to extract these optimised parameter values from an in-sample set of data and then verify them using out-of-sample data. By out-of-sample verification, I mean performing whatever Monte Carlo analysis you deem necessary, using the optimised parameter settings, on the out-of-sample data. If the out-of-sample testing shows good robustness and the figures are reasonably close to the optimised ones, then you may well have a very decent system. On the other hand, if the out-of-sample results are very poor, then there is a real problem with the system and it's back to the drawing board.
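To make that workflow concrete, here is a minimal sketch in Python. It assumes a hypothetical long-only moving-average-crossover rule on synthetic data; the 70/30 split, the grid search, and the bootstrap resampling are illustrative stand-ins for whatever optimiser and Monte Carlo analysis you actually use. None of it is a real system, just the shape of the in-sample/out-of-sample process.

```python
# Illustrative sketch only: optimise in-sample, verify out-of-sample.
# The strategy, parameter ranges, and bootstrap are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily returns standing in for real price history.
returns = rng.normal(0.0003, 0.01, 2500)
prices = 100 * np.cumprod(1 + returns)

split = int(len(prices) * 0.7)          # 70% in-sample, 30% out-of-sample
in_sample, out_sample = prices[:split], prices[split:]

def strategy_returns(prices, fast, slow):
    """Daily returns of a long-only MA-crossover rule (hypothetical)."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))
    # Use yesterday's signal against today's return to avoid lookahead.
    signal = (fast_ma[-n:] > slow_ma[-n:]).astype(float)[:-1]
    daily = np.diff(prices[-n:]) / prices[-n:-1]
    return signal * daily

# Step 1: optimise parameters on the in-sample data only (grid search).
best = max(
    ((f, s) for f in range(5, 30, 5) for s in range(40, 120, 20)),
    key=lambda p: strategy_returns(in_sample, *p).sum(),
)
print("in-sample optimum (fast, slow):", best)

# Step 2: verify on out-of-sample data with a bootstrap Monte Carlo:
# resample the out-of-sample strategy returns many times and look at
# the spread of outcomes, not just the single realised figure.
oos = strategy_returns(out_sample, *best)
sims = np.array([rng.choice(oos, size=len(oos), replace=True).sum()
                 for _ in range(1000)])
print(f"out-of-sample total return: {oos.sum():.3f}")
print(f"bootstrap 5th-95th percentile: "
      f"{np.percentile(sims, 5):.3f} to {np.percentile(sims, 95):.3f}")
```

The point of the bootstrap percentiles is exactly the robustness check described above: if the out-of-sample distribution sits well below the in-sample optimum, or the spread is wide enough that losses are routine, the optimised parameters have probably just fitted noise.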
That, in a nutshell, is my perception of the role optimisation plays: it is merely a starting step which will hopefully lead to a robust system that is better than random.