
Thread: Model discussion

  1. #31
    Join Date
    Dec 1969
    Location
    Kalmthout, Belgium
    Posts
    35
    Quote Originally Posted by pdp-brugge View Post
    Following the discussion in this thread, I have the feeling that I personally would like to pursue the approach of selecting a number of stocks from the best sectors that show the best AB/LER combination.
    Just my personal opinion, but I wouldn't complicate the robot by selecting individual stocks. Index ETFs are nice and easy: no worries about liquidity, earnings, diversification, etc.

  2. #32
    Join Date
    Oct 2011
    Location
    Brugge-Belgium
    Posts
    394
    Hi Rembert,

    It is not my intention to suggest that the robot would use individual stocks.
    I am considering, besides trading the robots, also trading a discretionary system.
    This discretionary system would use stock picking according to the EV data.

    Regards

    PdP

  3. #33
    Join Date
    Dec 1969
    Location
    Kalmthout, Belgium
    Posts
    35
    Ah OK, I understand. Besides a couple of ETF robots, I also trade individual stocks on a discretionary basis, but I don't use any EV concepts for those.

  4. #34
    Quote Originally Posted by Rembert View Post
    Regarding fuzzy logic ... instead of passing a long/short/neutral signal to the robot, the 20DMF could perhaps pass a parameter ranging from -100 to 100, with 0 being the most neutral. The robot could then use this parameter in its decision-making process. How that parameter is calculated and how the robot would use it is another matter, of course.
    This is what I was talking about:
    http://www.lotsofessays.com/viewpaper/1690480.html

    Googling "fuzzy logic and stock market" gives lots of results. Just an idea anyway.

  5. #35
    Join Date
    Dec 1969
    Location
    Vienna, Virginia
    Posts
    603
    Quote Originally Posted by Adriano View Post
    This is what I was talking about:
    http://www.lotsofessays.com/viewpaper/1690480.html

    Googling "fuzzy logic and stock market" gives lots of results. Just an idea anyway.
    I have the MATLAB fuzzy logic toolbox and have played with fuzzy math off and on for many years. I'm no expert, but I understand the fundamentals enough to poke around.

    The greatest challenge for me is backtesting a fuzzy system. I find it difficult to create a test harness (e.g., known stimulus as the input with predictable output). Without this, I have little confidence in what is considered "normal" behavior versus what is considered outside the normal distribution. While I think fuzzy logic can have a place, especially the "porosity" factors that we employ here, I've never been able to build a winning system based on fuzzy math alone.
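    To make the "test harness" idea concrete, here is a toy sketch in Python of the kind of known-stimulus/expected-output check I mean. The membership breakpoints and the -100..100 scale are made-up assumptions for illustration only, not anything taken from the MATLAB toolbox or from the 20DMF:

    # Toy fuzzy classifier over a -100..100 signal, plus a crude test
    # harness: known inputs with the dominant category we expect back.
    def tri(x, a, b, c):
        """Triangular membership: 0 outside (a, c), 1 at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def left_shoulder(x, b, c):   # 1 up to b, falling to 0 at c
        return 1.0 if x <= b else (0.0 if x >= c else (c - x) / (c - b))

    def right_shoulder(x, a, b):  # 0 up to a, rising to 1 at b
        return 0.0 if x <= a else (1.0 if x >= b else (x - a) / (b - a))

    def fuzzify(signal):
        return {"bearish": left_shoulder(signal, -60, -10),
                "neutral": tri(signal, -40, 0, 40),
                "bullish": right_shoulder(signal, 10, 60)}

    # Known stimulus -> expected dominant label; this is the part that is
    # hard to write convincingly for a full trading system.
    for signal, expected in [(-80, "bearish"), (0, "neutral"), (75, "bullish")]:
        degrees = fuzzify(signal)
        assert max(degrees, key=degrees.get) == expected, (signal, degrees)
    print("toy cases pass")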

    In discussing this with a math guru who uses fuzzy systems in control systems, the point came up that if we view our trading system as a self-contained entity, we have to have some confidence that the manipulations we perform on the data result in a stable system, i.e., one that won't take our equity to the ground (drawdown) on the expectation that we'll achieve higher gains. These constraints are valid, but they steer the system towards standard Euclidean logic and away from the fuzziness that we're intending.

    The correct answer is probably somewhere in the middle of both models, but again, without a robust way of testing, it's hard to make the jump with real monies.

    As an aside, GGT handles this situation in a different manner. While I maximize on equity to derive a set of coefficients that describe optimal moving averages and rates of change, I "lop off" the top of the equity mountain and try to maximize the area of the plateau where the outside conditions (market variables) do not dramatically change the optimal solution.

    Think of it this way ... you have two variables, EMA1 and EMA2. For a given stock price series over the past 2 years, there is a unique combination of EMA1 and EMA2 values which maximizes the equity of that system. We could pick EMA1 and EMA2 and use those values, but if the market moves just a tad against us, we could see our equity drop off FAST. This situation would exist if there were a gradual slope in the equity curve as EMA1 was held constant and EMA2 was varied to produce the maximum. If EMA2 goes too far, we could see a "drop off the equity cliff". This sensitivity is very dangerous to our portfolio, and it is why most systems do not work well with crossing MAs.

    Instead, ask yourself how much of the mountain top you can "lop off" flat so that a marble rolling around on this new plateau does not "fall off". Of course, you could "lop off" everything until the marble is on flat ground level with everything around it -- it will never "fall off" the plateau, but then again, you're not making money. But you could "lop off" enough of the mountain to keep you on a higher plateau than any surrounding peak -- and now you're more stable against market conditions if the "optimal" EMA1 and EMA2 are adjusted to the geometric center of this plateau.
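    If it helps, here is a rough sketch in Python/NumPy of that idea. The equity grid, the 95% cap and the simple centroid are my own illustrative assumptions, not GGT's actual procedure:

    # "Lop off" the top of the equity surface over a grid of (EMA1, EMA2)
    # pairs and return the geometric center of the resulting plateau,
    # instead of the single (possibly fragile) peak.
    import numpy as np

    def plateau_center(equity, ema1_values, ema2_values, cap=0.95):
        threshold = cap * equity.max()
        on_plateau = equity >= threshold          # cells that survive the lopping
        i_idx, j_idx = np.nonzero(on_plateau)
        # centroid of the plateau, snapped to the nearest grid cell
        # (a very irregular plateau may need a smarter choice of center)
        i_c, j_c = int(round(i_idx.mean())), int(round(j_idx.mean()))
        return ema1_values[i_c], ema2_values[j_c]

    # Stand-in usage with random numbers in place of real backtest equity:
    e1, e2 = np.arange(5, 30), np.arange(20, 120)
    equity = np.random.default_rng(0).random((len(e1), len(e2)))
    print(plateau_center(equity, e1, e2))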

    This is more or less what GGT attempts to do, and perhaps there is a lesson here for the model. Not all stocks/ETFs in the GGT system have a solution that is robust -- this is what the metrics on my sheet tell me -- but many behave very well.

    The GGT coefficients are updated 24/7, and every week about 15%-20% of the stock database receives updated numbers (sometimes they change, sometimes they do not), and about 25% of the ETFs get new values. This keeps the backtest data window sliding forward every week on a new basket of stocks, so that the optimization does not get too far from reality.

    Food for thought ...

    Regards,

    pgd

  6. #36
    Quote Originally Posted by grems8544 View Post
    I have the MATLAB fuzzy logic toolbox and have played with fuzzy math off and on for many years. I'm no expert, but I understand the fundamentals enough to poke around.

    The greatest challenge for me is backtesting a fuzzy system. I find it difficult to create a test harness (e.g., known stimulus as the input with predictable output). Without this, I have little confidence in what is considered "normal" behavior versus what is considered outside the normal distribution. While I think fuzzy logic can have a place, especially the "porosity" factors that we employ here, I've never been able to build a winning system based on fuzzy math alone.

    In discussing this with a math guru who uses fuzzy systems in control systems, the point came up that if we view our trading system as a self-contained entity, we have to have some confidence that the manipulations we perform on the data result in a stable system, i.e., one that won't take our equity to the ground (drawdown) on the expectation that we'll achieve higher gains. These constraints are valid, but they steer the system towards standard Euclidean logic and away from the fuzziness that we're intending.

    The correct answer is probably somewhere in the middle of both models, but again, without a robust way of testing, it's hard to make the jump with real monies.
    Interesting, thanks. I know MATLAB and I think it's fantastic, but honestly I have never used the fuzzy logic toolbox, only the image processing toolbox and a little bit of the neural nets stuff. With NNs I also faced a somewhat similar problem some years ago, but that was for an abstract animation/electronic sound piece, nothing to do with finance. I agree that the porosity issue should be handled well by fuzzy logic, and I wish I could offer a more concrete solution; I just don't have the math knowledge to do it.

    Quote Originally Posted by grems8544 View Post
    As an aside, GGT handles this situation in a different manner. While I maximize on equity to derive a set of coefficients that describe optimal moving averages and rates of change, I "lop off" the top of the equity mountain and try to maximize the area of the plateau where the outside conditions (market variables) do not dramatically change the optimal solution.

    Think of it this way ... you have two variables, EMA1 and EMA2. For a given stock price series over the past 2 years, there is a unique combination of EMA1 and EMA2 values which maximizes the equity of that system. We could pick EMA1 and EMA2 and use those values, but if the market moves just a tad against us, we could see our equity drop off FAST. This situation would exist if there were a gradual slope in the equity curve as EMA1 was held constant and EMA2 was varied to produce the maximum. If EMA2 goes too far, we could see a "drop off the equity cliff". This sensitivity is very dangerous to our portfolio, and it is why most systems do not work well with crossing MAs.

    Instead, ask yourself how much of the mountain top you can "lop off" flat so that a marble rolling around on this new plateau does not "fall off". Of course, you could "lop off" everything until the marble is on flat ground level with everything around it -- it will never "fall off" the plateau, but then again, you're not making money. But you could "lop off" enough of the mountain to keep you on a higher plateau than any surrounding peak -- and now you're more stable against market conditions if the "optimal" EMA1 and EMA2 are adjusted to the geometric center of this plateau.

    This is more or less what GGT attempts to do, and perhaps there is a lesson here for the model. Not all stocks/ETFs in the GGT system have a solution that is robust -- this is what the metrics on my sheet tell me -- but many behave very well.

    The GGT coefficients are updated 24/7, and every week about 15%-20% of the stock database receives updated numbers (sometimes they change, sometimes they do not), and about 25% of the ETFs get new values. This keeps the backtest data window sliding forward every week on a new basket of stocks, so that the optimization does not get too far from reality.

    Food for thought ...

    Regards,

    pgd
    Yes, I know the peak/plateau issue; peak values are certainly not reliable. I also update some parameters myself for the four trading systems I use, two of them being the VIT robots. I do that with AmiBroker before placing a new trade, to get the best position-sizing values. I don't trade stocks at the moment, so this makes it easier for me.

    Regards,
    Adriano

  7. #37

    XL Models

    Over the weekend and last week, I applied the GDX model to a few industry group ETFs (XLE, XLI, XLK, XLU) and to SPY.

    The results are below:

    [Attachment: Models1.gif]

    First, let me say that this is ongoing work. Further analysis remains to be done with regard to:
    - Drawdowns
    - Trade statistics
    - Correlation
    - Stock selection within each industry group
    - Completing these tables with XLB, XLY, XLP and XLV

    What I did was simply to take the OB/OS GDX MF model and apply it "as is" to the different industry groups.
    The OB/OS and porosity levels are automatically adapted, using only past data and giving more weight to recent data than to very old data.
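    For those curious what such an adaptation could look like, here is a small sketch in Python. The percentiles, window lengths and the 2:1 recent/old weighting are purely illustrative assumptions, not the actual model's parameters:

    # Adaptive OB/OS levels: percentiles of a money-flow style oscillator,
    # giving the recent window more weight than older data.
    import numpy as np

    def adaptive_levels(osc, recent=50, old=200, w_recent=2.0, w_old=1.0):
        """osc: 1-D array of oscillator values, most recent value last."""
        recent_part = osc[-recent:]
        old_part = osc[-old:-recent] if len(osc) > recent else osc[:0]

        def blended(pct):
            r = np.percentile(recent_part, pct)
            o = np.percentile(old_part, pct) if len(old_part) else r
            return (w_recent * r + w_old * o) / (w_recent + w_old)

        oversold, overbought = blended(10), blended(90)
        return oversold, overbought

    # Example with random data standing in for a real oscillator series:
    print(adaptive_levels(np.random.default_rng(1).normal(size=500)))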

    Let me already comment on the first results:

    1. In blue, I highlighted the positive 2008 returns for the 20DMF, SPY and GDX model groups, while the four other groups were negative. There is one reason for this: data! For the four XL groups, I took each group's composition/weights as of today and applied them back to 2008. However, there were 63 changes in the S&P 500 in 2008/2009, 16 in 2010 and 12 in 2011. With the weights also changing, the older the data, the less reliable the results will be. For the S&P 500, I manually kept track of all the past changes (I did not do that for the underlying groups). I also believe that the GDX index has been very stable for many years, with the larger stocks (ABX, GG, KGC, SLW, etc.) taking the "bulk" of the index.

    2. Because of this and also because 2008/2009 were really exceptional trending years, I prefer to concentrate on the results for the past two years, shown in yellow and green. The yellow represents the total of the past two years, while the green color highlights the difference between the model and the corresponding ETF return.

    We can see that:
    2.A. The model works well for XLK, XLI and XLE, and less so for SPY and XLU. I understand why it would not work well for SPY, which involves all the sectors, whereas each sub-group is more focused and hence the movements of money are easier to detect when we analyse each group separately. This, however, does not explain why the model does not do so well with XLU. It might be a lack of volatility in this sub-group, but I have not had time to study the trades in detail to confirm this.
    2.B. We should also note that the XLU ETF acted well in 2011, while all other sectors were poor; we indeed had a defensive market in 2011. However, the model could still take advantage of the groups' inherent volatility.
    2.C. GDX offered the strongest returns. This is also due to the higher volatility and the waves of change that are characteristic of this sector.

    3. For the past two years, the XLE model did better than the 20DMF, and the XLI model did almost as well as the 20DMF. This means there is something worth digging into here, probably with the possibility of rotating between industry groups independently of the 20DMF itself.


    Pascal

  8. #38
    Join Date
    May 2011
    Location
    South Florida
    Posts
    51
    Quote Originally Posted by Pascal View Post
    2. Because of this and also because 2008/2009 were really exceptional trending years, I prefer to concentrate on the results for the past two years, shown in yellow and green. The yellow represents the total of the past two years, while the green color highlights the difference between the model and the corresponding ETF return.
    Pascal
    Pascal, this looks like a great start, and it would be interesting to see what the detailed trade stats look like for the various ETFs. I believe that 2008-2011 represents a uniquely diverse combination of trending and range-bound market environments, and it would be important to look at all four years when scrutinizing performance. My crystal ball, which often acts as a reliable inverse indicator, says we won't see 2008/9 again, which makes me believe that we absolutely should keep an eye on how the models behaved during that time.

    Trader D

  9. #39

    XL Models, continuation

    These are the results of the same model applied to the SP500 components.

    You will note that the average return does better than the SPY return. This simple fact shows that it must be possible at any time to select the three or four best ETFs and trade in and out of a rotation among them.
    This means that managing a portfolio of these ETFs should produce better returns than trading SPY.
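    As a very simple sketch of what such a rotation could look like (the three-month lookback, monthly rebalance, equal weighting and the column-per-ETF price table are all my own assumptions, not the model's signals):

    # Each month, rank the sector ETFs on trailing return and hold the
    # top three, equal-weighted, until the next rebalance.
    import pandas as pd

    def rotation_returns(prices: pd.DataFrame, lookback=3, top_n=3):
        """prices: monthly closes, one column per ETF."""
        monthly = prices.pct_change()                    # one-month returns
        trailing = prices.pct_change(lookback)           # trailing ranking measure
        held = trailing.shift(1).rank(axis=1, ascending=False) <= top_n
        return monthly[held].mean(axis=1)                # equal-weight the held ETFs

    # Usage: total_return = (1 + rotation_returns(prices)).prod() - 1

    In the real case, the ranking would of course come from the model's signals rather than from trailing price return.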

    If someone volunteers to do the work, I can prepare a file of the signal evolution for all the ETFs starting in 2007 (just send me a private mail).

    I still need to analyse all the trades, which I am sure will point out some model weaknesses and further improvements. Once this analysis is completed, we will quickly have the RT graphs for these ETFs, with the corresponding trading signals/distance to the next signal.



    Pascal

    [Attachment: Models2.gif]

  10. #40
    Join Date
    May 2011
    Location
    South Florida
    Posts
    51
    Quote Originally Posted by Pascal View Post
    These are the results of the same model applied to the SP500 components.
    You will note that the average return does better than the SPY return. This simple fact shows that it must be possible at any time to select the three or four best ETFs and trade in and out of a rotation among them.
    This means that managing a portfolio of these ETFs should produce better returns than trading SPY.
    I still need to analyse all the trades, which I am sure will point out some model weaknesses and further improvements. Once this analysis is completed, we will quickly have the RT graphs for these ETFs, with the corresponding trading signals/distance to the next signal.
    Pascal
    The 2008 stats look particularly intriguing here. For example, assuming the model is roughly symmetric in its long/short settings, why would it perform much more poorly (on most ETFs) in 2008 (down-trending) than in 2009 (up-trending)? I would guess a closer look at the trades is needed to answer that. Some candidates for inspection: the volatility threshold (criterion for short entry), trade duration, etc.
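    If a trade log becomes available, something along these lines would be a quick first cut at that inspection. The column names (entry_date, exit_date, direction, ret) are just assumptions about how such a log might be laid out:

    # Group trades by calendar year and direction, then compare count,
    # win rate, average return and average holding time.
    import pandas as pd

    def trade_stats(trades: pd.DataFrame) -> pd.DataFrame:
        t = trades.copy()
        t["year"] = pd.to_datetime(t["entry_date"]).dt.year
        t["days_held"] = (pd.to_datetime(t["exit_date"])
                          - pd.to_datetime(t["entry_date"])).dt.days
        return t.groupby(["year", "direction"]).agg(
            trades=("ret", "size"),
            win_rate=("ret", lambda r: (r > 0).mean()),
            avg_ret=("ret", "mean"),
            avg_days=("days_held", "mean"),
        )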

    Trader D
