Quote Originally Posted by Pascal
You are right about the 2008-2009 period. I copied below the figure from the original GDX reference document.

In blue, you can see the returns of the previous version of the model, which were negative in 2011. This is more critical, because it means that the earlier version of the model was good for trend following but not that good in a choppy market such as last year's. Hence the new version, in which we introduced a sell-into-strength rule, is showing good results in both types of market.

You are right, however: we will not see the same sort of results as in 2008-2009 again.
Still, I am fairly confident that the model is now robust enough to sustain any type of market.

There are two other aspects of this model that are important (in my eyes):

1. At each stage, we can measure in real time the distance to the next signal and issue an e-mail alert (at least, we are/will be testing these features). This means freedom for the user: you will be able to walk around with an iPhone, receive alerts in real time, and act if necessary. The heavy work is done by the model and the computers. We would issue about two alerts per month per model, which are easy to handle.
2. I was able to extract from the model the parameters that are sector-dependent: stop level, overbought and oversold levels, ATR level, and porosity. These parameters depend only on the volatility of a specific sector. This means that it should be possible to test the same model on different sectors without redeveloping the whole logic: only the sector-specific parameters would need to be adapted. As a result, we will be able to easily run models on different ETFs.
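To illustrate the idea in point 2, here is a minimal sketch of how sector-specific parameters could be derived from a single volatility input. The function name and scaling factors below are hypothetical placeholders, not the model's actual rules or values:

```python
# Hypothetical sketch: deriving sector-specific parameters from volatility.
# The scaling factors are illustrative placeholders only; the real model's
# relationships between volatility and each parameter are not shown here.

def sector_parameters(atr_pct):
    """Derive stop, overbought/oversold, and porosity levels from a
    sector's average true range, expressed as a percent of price."""
    return {
        "stop_level": 2.0 * atr_pct,    # wider stops in more volatile sectors
        "overbought": 1.5 * atr_pct,    # extension above a reference level
        "oversold": -1.5 * atr_pct,     # symmetric extension below it
        "porosity": 0.25 * atr_pct,     # tolerance around support/resistance
    }

# Example: a GDX-like sector (ATR around 3% of price) vs. a quieter ETF.
print(sector_parameters(3.0))
print(sector_parameters(1.2))
```

The point is that the model's logic stays fixed; only the single volatility measurement changes from sector to sector, so porting the model to a new ETF means measuring its volatility rather than re-tuning every rule.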

In conclusion, what we will be offering here is:
- number-crunching capabilities that few hedge funds can match
- back-tested, safe trading models
- the freedom NOT to have to stay close to a computer.

In a broad sense, this is our goal.

Pascal
A big part of the challenge in developing quantitative trading models is reducing the number of parameters (the "curse of dimensionality") in order to avoid overfitting the model to available, and often limited, data while maximizing model accuracy. I find the notion of a single model that applies to multiple sectors particularly promising, since it provides a more diverse test data set while minimizing parameter inflation through a consistent approach to selecting parameter values for each sector.
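To make the parameter-inflation point concrete, here is a back-of-the-envelope count (the numbers are illustrative, not taken from the actual model):

```python
# Illustrative free-parameter count: tuning every sector independently vs.
# sharing one model whose sector parameters are derived from measured
# volatility (a data input, not a tuned parameter).

n_sectors = 10          # hypothetical number of sector ETFs
params_per_sector = 5   # e.g. stop, overbought, oversold, ATR, porosity

independent = n_sectors * params_per_sector  # each sector tuned separately
shared = params_per_sector                   # one rule set, volatility-scaled

print(independent, shared)  # 50 free parameters vs. 5
```

With ten sectors, independent tuning multiplies the parameter count tenfold, while the shared approach keeps it flat and lets all sectors' data jointly validate the same five rules.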

Trader D