
Thread: Model discussion

  1. #1

    Model discussion

    I am opening this sticky thread to discuss the various aspects of the different models used in the EV system.

    If you have ideas or suggestions, this is where they should be posted for future reference.

    Below is a document describing the models, their strengths, weaknesses, and possible improvements.

    This is also where I will post the results of back-tests on the different models, whenever improvements are made.

    discussion on the EV models.doc


    Pascal

  2. #2
    Join Date
    May 2011
    Location
    South Florida
    Posts
    51
    Pascal, this is a great initiative to harness the collective skills and creativity of members toward model improvements.

    Establishing a robust oversold state for the lower-panel sector-based oscillator is tricky. Making the indicator more "adaptive" than it already is (e.g. by normalizing with respect to recent market volatility, MF volatility, or even oscillator range extremes) sounds sensible and is probably worth trying. However, any such normalization would be based on historical behavior and would not necessarily capture the true reason why the indicator turned up prior to crossing the -70 threshold. Specifically regarding the Dec 20 reversal, it feels as if a major intervention (stealth QE?) aborted what would likely have been a more common descent into deep oversold territory. If that is the case (however rare or frequent the occasion), making the indicator more adaptive is not likely to help, and it points to a more fundamental deficiency of using an oscillator plus threshold as a finite state machine for MDM decision making. In that view, it may be useful to look at alternative mechanisms to serve as OS state detectors.
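    One way to picture the normalization idea is the sketch below: it scales a fixed oversold threshold by recent volatility. Everything here is a hypothetical illustration (the function name, inputs, lookback, and clip bounds are assumptions, not the actual 20DMF logic):

```python
import numpy as np

def adaptive_os_threshold(osc, vol, base=-70.0, lookback=60):
    """Scale a fixed oversold threshold by recent volatility.

    osc : oscillator values (assumed range -100..+100)
    vol : a volatility proxy (e.g. ATR or MF volatility) -- illustrative
    Returns one threshold per bar: deeper when volatility is elevated
    relative to its recent median, shallower when it is depressed.
    """
    thresholds = np.full_like(osc, base, dtype=float)
    for t in range(lookback, len(osc)):
        window = vol[t - lookback:t]
        ratio = vol[t] / np.median(window)   # >1 means elevated volatility
        thresholds[t] = base * np.clip(ratio, 0.5, 1.5)
    return thresholds
```

    With constant volatility the threshold stays at the base level; a volatility spike pushes it deeper, so a shallow dip in a high-volatility regime would no longer trigger an OS state.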

    Trader D

  3. #3
    Join Date
    Dec 1969
    Location
    Montreal Quebec Canada
    Posts
    55
    Pascal,

    Although it might prove interesting to include evolving measure(s) of volatility to improve the market timing model, my intuition tells me that it won't make a great difference. Absolute, non-volatility-based measures of OB/OS do a very good job of estimating value (RSI & STO are absolute measures).

    In my humble opinion, the model needs to be confronted with a new independent variable, in what could become a multiple-factor model. Again, one of your friend Billy's indicators comes to mind: the slope of MA[$TICK] (600 minutes) would do a good job of measuring "the trend," and it is independent because it has nothing to do with volume analysis.
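    A hedged sketch of the slope idea follows; the function name, window lengths, and slope definition are illustrative assumptions, not Billy's actual indicator:

```python
import numpy as np

def ma_slope(series, ma_len=600, slope_len=30):
    """Slope of a moving average as a simple trend measure.

    series    : minute-resolution values (e.g. $TICK) -- illustrative
    ma_len    : moving-average window (600 minutes in the idea above)
    slope_len : bars over which to measure the MA's rate of change
    Returns (ma, slope); slope > 0 suggests an uptrend, < 0 a downtrend.
    """
    kernel = np.ones(ma_len) / ma_len
    ma = np.convolve(series, kernel, mode='valid')   # simple moving average
    slope = np.full(len(ma), np.nan)
    slope[slope_len:] = (ma[slope_len:] - ma[:-slope_len]) / slope_len
    return ma, slope
```

    The sign (or a dead-band around zero) of the slope could then serve as the independent trend variable in a multi-factor decision.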

    But the point here is not which variable to pick, given that you are in a better position than any of us to figure out the best one(s) to select; the point is the addition of other variables.

    Pierre Brodeur

  4. #4
    Join Date
    Dec 1969
    Location
    Long Island, New York
    Posts
    515

    Lower the target and widen the base

    Just a generic observation: models generally run into trouble when they fine tune TOO much. Lowering the target and increasing the number of Robots might solve the problem. That is, instead of trying to squeeze the most out of the generic market, why not try to squeeze a little out of the juiciest ETFs?

    The 9 sector ETFs I use in my own model are very highly traded, and I know you have a better model to tweak them with.

  5. #5
    Join Date
    May 2011
    Location
    South Florida
    Posts
    51
    Quote Originally Posted by Pierre Brodeur View Post
    Pascal,
    Although it might prove interesting to include evolving measure(s) of volatility to improve the market timing model, my intuition tells me that it won't make a great difference. Absolute, non-volatility-based measures of OB/OS do a very good job of estimating value (RSI & STO are absolute measures).
    Pierre Brodeur
    An absolute measure of an OB/OS oscillator requires an absolute numeric threshold. Since it's unrealistic (and often unwieldy) to expect the chosen threshold to always be right (i.e., a high Win%), two other goals are typically preferred:
    (1) Win a lot when you're right and lose a little when you're wrong (i.e., a high gain ratio, which leads to a high PF)
    (2) Make the threshold choice such that performance isn't overly sensitive to slight changes in the threshold value

    The problem with OS threshold misses is that they inevitably lead to a large loss in the form of a string of misdirected trades (repeated attempts to re-short an uptrend instead of being in buy mode). Only testing can check whether requirement #2 above holds with a choice of -70. My gut feeling is that this could be a problem without the use of a more relaxed direction determinant, possibly involving another independent indicator.
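    Requirement #2 can be checked with a simple sensitivity sweep. The sketch below is a toy illustration only: the entry rule (go long on an upward cross of the threshold) and all names are assumptions, not the actual 20DMF rules:

```python
import numpy as np

def sweep_threshold(osc, returns, thresholds):
    """For each candidate oversold threshold, record the mean next-day
    return after the oscillator crosses up through it.  If results vary
    wildly between nearby thresholds (say -65 vs -75), the system fails
    the robustness requirement."""
    results = {}
    for th in thresholds:
        crossed = (osc[:-1] < th) & (osc[1:] >= th)   # upward cross
        fwd = returns[1:][crossed]                    # next-day returns
        results[th] = float(fwd.mean()) if fwd.size else 0.0
    return results
```

    Plotting the resulting mean returns against the threshold grid would make a "performance cliff" around -70 immediately visible.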

    Trader D

  6. #6
    Join Date
    Dec 1969
    Location
    Montreal Quebec Canada
    Posts
    55
    Quote Originally Posted by TraderD View Post
    (2) Make the threshold choice such that performance isn't overly sensitive to slight changes of threshold value

    The problem with OS threshold misses is that they inevitably lead to a large loss in the form of a string of mis-directed trades (repeated attempts to re-short an uptrend instead of being in buy mode). Only testing can check whether requirement #2 above holds with a choice of -70.
    Trader D
    No indicator is perfect, of course, and as traders we have the luxury of being able to put any indicator in the context of current market dynamics, which many models have difficulty doing. That is why many modern models (especially risk models) have volatility regime adjustments to calibrate factor volatilities to current market levels. But that is a very difficult thing to do, if only because risk is non-linear. Others use non-stationary stochastic processes (ARCH & GARCH models) to deal with the fact that model coefficients change over time. However, RSI, for example, has the advantage of being bounded between 0% and 100%; if the OB/OS reading reflected the distribution (normal or otherwise) of historical values, it would be a major improvement over the absolute indicators currently available, or over the arbitrary hard-coded OB/OS numbers (i.e., 70) currently being used by Pascal.
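    The distribution-based idea could be sketched as a rolling percentile rank, so that "oversold" becomes a statement about the indicator's own history rather than a hard-coded level. This is an illustration only; the function name and lookback are assumptions:

```python
import numpy as np

def percentile_ob_os(osc, lookback=250):
    """Map raw oscillator readings to their percentile rank within a
    rolling historical window.  A reading below, say, the 5th percentile
    is then 'oversold' regardless of its absolute level."""
    ranks = np.full(len(osc), np.nan)
    for t in range(lookback, len(osc)):
        window = osc[t - lookback:t]
        ranks[t] = (window < osc[t]).mean() * 100.0   # 0..100 percentile
    return ranks
```

    This keeps the bounded 0-100 scale Pierre likes about RSI while adapting automatically to whatever range the oscillator has actually occupied.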

    Quote Originally Posted by TraderD View Post
    My gut feeling is that this could be a problem without use of a more relaxed direction determinant, possibly involving another independent indicator.
    Trader D
    I believe we are in agreement on the need for another independent indicator.
    Last edited by Pierre Brodeur; 02-06-2012 at 11:01 PM.

  7. #7
    Join Date
    May 2011
    Location
    New Zealand
    Posts
    45
    Here are my opinions:

    1. 20DMF
    --> I agree with Timothy: increase the number of ETFs the robots trade. For example, when most ETFs are in buy mode, the remaining neutral-mode ETFs cannot take short-term short entries (a fail-safe mechanism); conversely, when most ETFs are in short mode, the remaining neutral-mode ETFs cannot take short-term long entries.


    2. The GDX MF

    Model Weaknesses

    • The model acts very fast on a signal change but might be prone to whipsaws, mostly due to the underlying volatility.
    --> It happened in December last year. Was it really due to volatility, or to low volume because of the holidays? Or was the volatility itself due to the low volume?


    3. The Robots

    • Update the statistical trading tables for both robots
    --> Is it possible to automate the process of producing the statistical trading tables?


    4. RT On-going development work

    --> SMS alerts (can be done via Twitter)

    5. Sector Rotation (SR) trading model

    --> Can we run the GDX MF model on each of the different sectors? Then we wouldn't need to worry about individual stocks.

    --> I think a real-time system will be very useful for providing better entries and tight stops, but we need a real-time alert system.

  8. #8
    Join Date
    May 2011
    Location
    South Florida
    Posts
    51
    Quote Originally Posted by Pierre Brodeur View Post
    No indicator is perfect, of course, and as traders we have the luxury of being able to put any indicator in the context of current market dynamics, which many models have difficulty doing. That is why many modern models (especially risk models) have volatility regime adjustments to calibrate factor volatilities to current market levels. But that is a very difficult thing to do, if only because risk is non-linear. Others use non-stationary stochastic processes (ARCH & GARCH models) to deal with the fact that model coefficients change over time. However, RSI, for example, has the advantage of being bounded between 0% and 100%; if the OB/OS reading reflected the distribution (normal or otherwise) of historical values, it would be a major improvement over the absolute indicators currently available, or over the arbitrary hard-coded OB/OS numbers (i.e., 70) currently being used by Pascal.
    IIUC, portfolio factor calibration, unlike trading with stops, is prone to occasional black swans where the fat tail of the distribution isn't properly captured by the model. Add leverage to that, plus a quest for market-beating returns, and you have a prescription for a blow-up. I'm not sure I see the benefit of RSI over the current OB/OS oscillator, which is also bounded by a fixed range (-100, +100). Is it the non-linear mapping? RSI would also need threshold choices (typically 30/70).

    Trader D

  9. #9
    The biggest challenge we have is the very short history of data available for backtesting; any idea that resolves the few occurrences where the model broke down cannot be confirmed with statistical confidence. For this reason, the 20DMF had a couple of tweaks in the past. Were those optimal? We shall probably know only many years from now. In fact, we trust the model because it makes fundamental sense, not because of a thorough out-of-sample statistical validation; for that we do not have sufficient data points. Adaptive OB/OS determination makes sense, and it might be more robust than an arbitrary level. Maybe.
    So what can one do? On a conceptual level: (a) keep the model as simple as possible, since the fewer parameters, decisions, and 'knobs' there are, the less brittle it will be; (b) introduce additional data points of a somewhat different ilk by incorporating other indicators (for decision, confirmation, or vote); and (c) use several systems/robots for diversification. Well, all ideas mentioned earlier in this thread :-)
    On a practical level – there are a number of ‘breadth related’ indicators that could be used quite effectively to (i) identify bottoms - maybe combine with 20DMF in some voting mechanism, and (ii) identify a bullish state of the market – to get the 20DMF out of a ‘neutral’ state, and/or be used together either in a voting, or an allocation mechanism.
    By ‘breadth related’ indicators I mean things like: new highs/new lows, volume or issues advance/decline, TICK, TRIN (Arms), number of issues over or crossing a moving average. These days the data for these indicators can be easily accessed in real time - see my comment in the Tradestation thread.
    Breadth models try to measure the underlying happenings in the market as the MF does, though in a different way - and they can be used to create good timing models on their own. There is a good possibility that combining them with 20DMF will increase the model's robustness.
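    A minimal sketch of the voting mechanism mentioned above; the model names, signal encoding, and quorum are purely illustrative assumptions:

```python
def vote(signals, quorum=2):
    """Combine independent timing models by vote.

    signals : dict of model name -> +1 (buy), 0 (neutral), -1 (sell)
    quorum  : how many models must agree before acting
    Returns +1, -1, or 0 (stay/neutral) for the combined decision.
    """
    score = sum(signals.values())
    buys = sum(1 for s in signals.values() if s == +1)
    sells = sum(1 for s in signals.values() if s == -1)
    if buys >= quorum and score > 0:
        return +1
    if sells >= quorum and score < 0:
        return -1
    return 0
```

    The appeal of a quorum over a single indicator is exactly the robustness argument above: one model misreading a threshold cannot by itself flip the combined state.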

  10. #10
    While at it, a couple of general thoughts:

    - Whipsaws: If the losses on whipsaws are small, best to look at those just as the cost of doing business. Many systems whipsawed in the huge volatility of last year much more than in the last twenty years, simply bringing out the reality of the market... politicians and central banks flip-flopping.

    - When modifying / tweaking a model, it is a good idea to continue to maintain full data series (past and future) of both versions, not just discard the old one. In my experience one can still learn from systems abandoned many years ago.

    - We are very interested in the Maximum Drawdown of a backtest (and even more so, of a system we trade live). It is also important to understand that the MDD does not represent well the statistics of a trading system's output (it is the outcome of one specific path in time, out of many that could have happened). Therefore, MDD is not a good predictor of a system's future drawdown and is not a good measure of a system's risk. The saying "your worst drawdown has not happened yet" indeed has a theoretical basis. When comparing different versions of a system in development, it is much better to use measures with more statistical content, like a rolling-period downward deviation.
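    The two measures can be contrasted with a short sketch (function names, the window length, and the minimum-acceptable-return parameter are assumptions for illustration):

```python
import numpy as np

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a fraction
    of the running peak -- a single path-dependent number."""
    peaks = np.maximum.accumulate(equity)
    return float(((peaks - equity) / peaks).max())

def rolling_downside_deviation(returns, window=60, mar=0.0):
    """Rolling downside deviation: root-mean-square of returns falling
    below the minimum acceptable return `mar`, over each trailing
    window -- a measure with more statistical content than MDD."""
    out = np.full(len(returns), np.nan)
    for t in range(window, len(returns) + 1):
        r = returns[t - window:t]
        shortfall = np.minimum(r - mar, 0.0)   # only sub-MAR returns count
        out[t - 1] = float(np.sqrt((shortfall ** 2).mean()))
    return out
```

    MDD summarizes one realized path; the rolling downside deviation uses every sub-MAR return in each window, which is why it compares system versions more reliably.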
