
Thread: OPEX Friday - January 20, 2012

  #11
    Quote Originally Posted by TraderD View Post
    Indeed, it is the conditions of "selling into strength" that I am questioning with respect to the anecdotal evidence presented by the current GDX trade. I am not sure what would constitute a better exit condition; I am merely speculating that somehow factoring in the then-prevailing forward LT/ST stats may prove useful in testing.

    As a side note, Figure 3 in the GDX doc shows that the vast majority of the return (GDX equity curve on a log-Y scale) was produced between Aug 2008 and April 2009, a rather narrow and unique period of time, which may make judging the rest of the ~4-year test period somewhat challenging.
    You are right about the 2008-2009 period. I have copied below the figure from the original GDX reference document.

    In blue, you can see the returns of the previous version of the model. These returns were negative in 2011. This is more critical, because it means the earlier version of the model was good at trend following, but not that good in a choppy market such as last year's. Hence, the new version, in which we introduced a sell-into-strength rule, is showing good results in both types of markets.
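    To make the idea concrete, a sell-into-strength exit can look like the minimal sketch below. It assumes the exit fires when price closes above a volatility-scaled overbought band; the band multiple and the use of porosity as an overshoot tolerance are placeholders of my own choosing, not the exact rule of the model:

        # Minimal sketch of a sell-into-strength exit: sell while price is
        # pushing up through an overbought band instead of waiting for weakness.
        # The 2.0 ATR multiple and the porosity tolerance are placeholders.
        def sell_into_strength(close: float, reference: float, atr: float,
                               k: float = 2.0, porosity: float = 0.005) -> bool:
            """Return True when the close exceeds the overbought band."""
            overbought_level = reference + k * atr   # volatility-scaled band
            tolerance = overbought_level * porosity  # small overshoot allowed
            return close > overbought_level + tolerance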

    You are right: we will not see the same sort of results as in 2008-2009 again.
    However, I am fairly confident that the model is now robust enough to withstand any type of market.

    There are two other aspects of this model that are important (in my eyes):

    1. At each stage, we can measure in real time the distance to the next signal and issue an e-mail alert (at least, these are the features we are testing or will be testing). This means "user freedom": you will be able to walk around with an iPhone, receive alerts online in real time, and act if necessary. The heavy work is done by the model and the computers. We would issue about two alerts per month per model, which is easy to handle (see the sketch after this list).
    2. I was able to extract from the model the parameters that are sector-dependent: stop level, overbought and oversold levels, ATR level, and porosity. These parameters depend only on the volatility of a specific sector. This means it should be possible to test the same model on different sectors without redeveloping the whole logic: only the sector-specific parameters would be adapted. As a result, we will be able to easily run models on different ETFs (also illustrated in the sketch below).
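
    To illustrate how these two points fit together, here is a rough Python sketch. The scaling coefficients, thresholds, and prices below are placeholders only, not the production values of the model:

        # Sketch: derive sector-dependent parameters from volatility, then
        # measure the distance to the next signal for an alert.
        # All coefficients here are illustrative placeholders.
        from dataclasses import dataclass

        @dataclass
        class SectorParams:
            stop_level: float  # stop distance, as a fraction of price
            overbought: float  # overbought threshold
            oversold: float    # oversold threshold
            porosity: float    # tolerated overshoot of a level

        def params_from_volatility(atr_pct: float) -> SectorParams:
            """Map a sector's volatility (ATR as a fraction of price) to parameters."""
            return SectorParams(
                stop_level=2.0 * atr_pct,  # wider stops in more volatile sectors
                overbought=1.5 * atr_pct,
                oversold=-1.5 * atr_pct,
                porosity=0.1 * atr_pct,
            )

        def distance_to_next_signal(close: float, signal_level: float) -> float:
            """How far price is from triggering the next signal, in percent."""
            return 100.0 * (signal_level - close) / close

        # The alert logic would then e-mail subscribers when the distance
        # shrinks below a threshold (the e-mail plumbing is omitted here).
        gdx = params_from_volatility(atr_pct=0.03)    # hypothetical GDX volatility
        signal_level = 55.2 * (1.0 + gdx.overbought)  # overbought level above last close
        if abs(distance_to_next_signal(close=55.2, signal_level=signal_level)) < 5.0:
            print("ALERT: GDX is within 5% of its overbought signal")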

    In conclusion, what we will be offering here is:
    - number-crunching capabilities that few hedge funds can match
    - back-tested, safe trading models
    - the freedom NOT to have to stay close to the computer.

    In a broad sense, this is our goal.


    Pascal

    [Attachment: GDX.gif (figure comparing the returns of the previous and new versions of the GDX model)]

  #12
    Quote Originally Posted by Pascal View Post
    [...] I was able to extract from the model the parameters that are sector-dependent: stop level, overbought and oversold levels, ATR level, and porosity. These parameters depend only on the volatility of a specific sector. [...] This means that we will be able to easily run models on different ETFs.
    A big part of the challenge in developing quantitative trading models is reducing the number of parameters (the so-called curse of dimensionality) in order to avoid overfitting the model to the available (and often limited) data while maximizing its accuracy. I find the notion of a single model that applies to multiple sectors particularly promising, since it provides a more diverse test data set while limiting parameter inflation through a consistent approach to selecting parameter values for each sector.
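
    To put a rough number on it: with, say, 5 tunable parameters and 10 sectors, fitting each sector independently means 50 free parameters, whereas a single model that derives all 5 from each sector's volatility keeps only the 5 shared scaling rules (the per-sector volatility is measured, not fitted). A toy illustration, with counts that are illustrative rather than taken from the actual model:

        # Toy parameter-count comparison; numbers are illustrative only.
        n_sectors, n_params = 10, 5
        independent_fit = n_sectors * n_params  # 50 free parameters to fit
        shared_model = n_params                 # 5 shared scaling rules
        print(independent_fit, shared_model)    # -> 50 5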

    Trader D

