The biggest challenge we have is the very short history of data available for backtesting; any tweak that resolves the few occurrences where the model broke down cannot be confirmed with statistical confidence. For this reason, the 20DMF had a couple of tweaks in the past. Were those optimal? We shall probably know only many years from now. (In fact, we trust the model because it makes fundamental sense, not because of a thorough out-of-sample statistical validation - for this we do not have sufficient data points.) Adaptive OB/OS determination makes sense, and it might be more robust than an arbitrary fixed level. Maybe.
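To illustrate the adaptive idea: instead of a fixed OB/OS level, the thresholds can be drawn from the oscillator's own recent distribution. A minimal Python/pandas sketch, assuming a daily oscillator series; the one-year window and the 5%/95% percentiles are arbitrary illustrations, not the 20DMF's actual parameters:

```python
import pandas as pd

def adaptive_ob_os(osc: pd.Series, window: int = 252,
                   lo_pct: float = 0.05, hi_pct: float = 0.95) -> pd.DataFrame:
    """Derive OB/OS levels from the oscillator's own rolling
    distribution instead of a fixed, arbitrary level."""
    os_level = osc.rolling(window).quantile(lo_pct)  # adaptive oversold line
    ob_level = osc.rolling(window).quantile(hi_pct)  # adaptive overbought line
    return pd.DataFrame({
        "oscillator": osc,
        "oversold": osc <= os_level,
        "overbought": osc >= ob_level,
    })
```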
So what can one do? On a conceptual level: (a) keep the model as simple as possible - the fewer parameters, decisions, and ‘knobs’ there are, the less brittle it will be; (b) introduce additional data points of a somewhat different ilk by incorporating other indicators (for decision, confirmation, or vote); and (c) use several systems/robots for diversification. Well, all ideas mentioned earlier in this thread :-)
On a practical level, there are a number of ‘breadth related’ indicators that could be used quite effectively to (i) identify bottoms - perhaps combined with the 20DMF in some voting mechanism, and (ii) identify a bullish state of the market - to get the 20DMF out of a ‘neutral’ state, and/or to be used together in either a voting or an allocation mechanism.
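A sketch of the voting idea - the indicator names here are hypothetical placeholders, not existing systems:

```python
import pandas as pd

def vote_bullish(signals: pd.DataFrame, quorum: int = 2) -> pd.Series:
    """Each column is one indicator's boolean 'bullish' vote
    (e.g. a 20DMF buy, a breadth thrust, NH/NL turning positive).
    The composite turns bullish only when at least `quorum` agree."""
    return signals.sum(axis=1) >= quorum

# Usage with three hypothetical daily vote columns:
# votes = pd.DataFrame({"dmf20": ..., "nh_nl": ..., "ad_volume": ...})
# bullish = vote_bullish(votes, quorum=2)
```

Requiring a quorum rather than unanimity is one way to keep a single misfiring indicator from blocking the whole model.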
By ‘breadth related’ indicators I mean things like: new highs/new lows, advance/decline by volume or by issues, TICK, TRIN (Arms Index), and the number of issues above or crossing a moving average. These days the data for these indicators can be accessed easily in real time - see my comment in the Tradestation thread.
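For reference, a few of the standard definitions sketched in pandas; the column names are illustrative and depend on your data feed:

```python
import pandas as pd

def breadth_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Compute common breadth series from raw NYSE-style daily data.
    Expected columns (illustrative names, match them to your feed):
    adv, dec, adv_vol, dec_vol, new_highs, new_lows."""
    out = pd.DataFrame(index=df.index)
    # Arms Index (TRIN): issue A/D ratio divided by volume A/D ratio
    out["trin"] = (df["adv"] / df["dec"]) / (df["adv_vol"] / df["dec_vol"])
    # Net new highs, usually smoothed before use
    out["nh_nl"] = (df["new_highs"] - df["new_lows"]).rolling(10).mean()
    # Cumulative advance/decline line by issues
    out["ad_line"] = (df["adv"] - df["dec"]).cumsum()
    return out
```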
Breadth models try to measure the underlying activity in the market, as the MF does, though in a different way - and they can be used to create good timing models on their own. There is a good chance that combining them with the 20DMF will increase the model's robustness.
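One simple form such a combination could take is the allocation mechanism mentioned above: each independent model controls part of the exposure, so a single model's failure costs less. A toy sketch, assuming two boolean daily long/flat signals:

```python
import pandas as pd

def blended_exposure(dmf_long: pd.Series, breadth_long: pd.Series) -> pd.Series:
    """Allocation-style combination: each model controls half the
    exposure (0.0, 0.5, or 1.0 long on any given day)."""
    return 0.5 * dmf_long.astype(float) + 0.5 * breadth_long.astype(float)
```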