Model discussion



Pascal
02-06-2012, 04:35 AM
I am opening this sticky thread to discuss the various aspects of the different models used in the EV system.

If you have ideas or suggestions, this is where they should be posted for future reference.

Below is a document describing the models, their strengths, weaknesses, and possible improvements.

This is also where I will post the results of back-tests on the different models whenever improvements are made.

[Attachment 12640]


Pascal

TraderD
02-06-2012, 08:42 AM
Pascal, this is a great initiative to harness the collective skills and creativity of members toward model improvements.

Establishing a robust oversold state for the lower-panel sector-based oscillator is tricky. Making the indicator more "adaptive" than it already is (e.g. by normalizing with respect to recent market volatility, MF volatility, or even oscillator range extremes) sounds sensible and is probably worth trying. However, any such normalization would be based on historical behavior and would not necessarily capture the true reason why the indicator turned up prior to crossing the -70 threshold. Regarding the Dec 20 reversal specifically, it feels as if a major intervention (stealth QE?) aborted what would likely have been a more common descent into deep oversold territory. If that is the case (however rare or frequent the occasion), making the indicator more adaptive is not likely to help, and it points to a more fundamental deficiency of using an oscillator plus threshold as a finite state machine for MDM decision making. In that view, it may be useful to look at alternative mechanisms to serve as OS state detectors.

Trader D

Pierre Brodeur
02-06-2012, 03:06 PM
Pascal,

Although it might prove interesting to include evolving measure(s) of volatility to improve the market timing model, my intuition tells me that it won't make a great difference. Absolute, non-volatility measures of OB/OS do a very good job of estimating value (RSI and STO are absolute measures).

In my humble opinion, the Model needs to be confronted with a new independent variable in what could be a multiple-factor model. Again, an indicator from your friend Billy comes to mind: the slope of a 600-minute MA[$TICK] would do a good job of measuring "the trend", and it is independent because it has nothing to do with volume analysis.

But the point here is not which variable to pick, since you are in a better position than any of us to figure out the best one(s) to select; the point is the addition of other variables.

Pierre Brodeur

Timothy Clontz
02-06-2012, 04:00 PM
Just a generic observation: models generally run into trouble when they are fine-tuned TOO much. Lowering the target and increasing the number of Robots might solve the problem. That is, instead of trying to squeeze the most out of the generic market, why not try to squeeze a little out of the juiciest ETFs?

The 9 sector ETFs I use in my own model are very highly traded, and I know you have a better model to tweak them with.

TraderD
02-06-2012, 05:18 PM
Pascal,
Although it might prove interesting to include evolving measure(s) of volatility to improve the market timing model, my intuition tells me that it won't make a great difference. Absolute, non-volatility measures of OB/OS do a very good job of estimating value (RSI and STO are absolute measures).
Pierre Brodeur

An absolute measure of an OB/OS oscillator requires an absolute numeric threshold. Since it's unrealistic (and often unwieldy) to expect the chosen threshold to always be right (i.e. a high Win%), two other goals are typically preferred:
(1) Win a lot when you're right and lose a little when you're wrong (i.e. a high gain ratio, which leads to a high PF)
(2) Make the threshold choice such that performance isn't overly sensitive to slight changes of threshold value

The problem with OS threshold misses is that they inevitably lead to a large loss in the form of a string of mis-directed trades (repeated attempts to re-short an uptrend instead of being in buy mode). Only testing can check whether requirement #2 above holds with a choice of -70. My gut feeling is that this could be a problem without the use of a more relaxed direction determinant, possibly involving another independent indicator.
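
As an illustration, here is a minimal Python sketch of such a sensitivity check (hypothetical inputs and a deliberately crude stand-in entry rule, not the actual model logic): sweep the OS level and see whether the trade stats sit on a stable plateau around -70 or fall off a cliff.

import numpy as np
import pandas as pd

def sweep_os_levels(osc, next_ret, levels=range(-85, -49, 5)):
    # osc: oscillator series; next_ret: aligned next-day returns.
    rows = []
    for lvl in levels:
        # Stand-in rule: buy when the oscillator crosses back up through
        # the oversold level; grade each cross by the next day's return.
        crossed_up = (osc.shift(1) <= lvl) & (osc > lvl)
        trades = next_ret[crossed_up]
        wins, losses = trades[trades > 0], trades[trades < 0]
        pf = wins.sum() / abs(losses.sum()) if len(losses) else np.inf
        rows.append({"os_level": lvl, "n_trades": len(trades),
                     "win_pct": 100 * len(wins) / max(len(trades), 1),
                     "profit_factor": pf})
    return pd.DataFrame(rows)

If the win% and profit factor change gently from -85 to -50, requirement #2 holds; a sharp break right around -70 would be the warning sign.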

Trader D

Pierre Brodeur
02-06-2012, 10:50 PM
(2) Make the threshold choice such that performance isn't overly sensitive to slight changes of threshold value

The problem with OS threshold misses is that they inevitably lead to a large loss in the form of a string of mis-directed trades (repeated attempts to re-short an uptrend instead of being in buy mode). Only testing can check whether requirement #2 above holds with a choice of -70.
Trader D

No indicator is perfect, of course, and as traders we have the luxury of being able to put any indicator in the context of current market dynamics, which many models have difficulty doing. That is why many modern models (especially risk models) have volatility-regime adjustments to calibrate factor volatilities to current market levels. But that is a very difficult thing to do, if only because risk is non-linear. Others use non-stationary stochastic processes (ARCH and GARCH models) to deal with the fact that model coefficients change over time. However, RSI, for example, has the advantage of being bounded between 0% and 100%, and if it were made to reflect the distribution (normal or otherwise) of historical values (OB/OS), that would be a major improvement over the absolute indicators currently available, or over the current arbitrary hard-coded OB/OS numbers (i.e. 70) used by Pascal.



My gut feeling is that this could be a problem without the use of a more relaxed direction determinant, possibly involving another independent indicator.
Trader D

I believe we are in agreement on the need for another independent indicator.

mingpan.lam
02-07-2012, 07:04 AM
Here are my opinions:

1. 20DMF
--> agreed with Timothy: increase the number of ETFs the robots trade. For example, when most ETFs are in buy mode, the remaining neutral-mode ETFs cannot take a short-term short entry (a fail-safe mechanism); conversely, when most ETFs are in short mode, the remaining neutral-mode ETFs cannot take a short-term long entry.


2. The GDX MF

Model Weaknesses

• The model acts very fast on a signal change but might be prone to whipsaws, mostly due to the underlying volatility.
--> this happened in December last year; was it really due to volatility, or to the low volume over the holidays? Or was the volatility itself due to the low volume?


3. The Robots

• Update the statistical trading tables for both robots
--> is it possible to automate the process of producing the statistical trading tables?


4. RT On-going development work

--> SMS alerts (can be done via Twitter)

5. Sector Rotation (SR) trading model

--> can we run the GDX MF model on each of the different sectors? Then we don't need to worry about individual stocks.

--> I think a real-time system would be very useful to provide better entries and tight stops, but we need a real-time alert system.

TraderD
02-07-2012, 07:58 AM
No indicator is perfect, of course, and as traders we have the luxury of being able to put any indicator in the context of current market dynamics, which many models have difficulty doing. That is why many modern models (especially risk models) have volatility-regime adjustments to calibrate factor volatilities to current market levels. But that is a very difficult thing to do, if only because risk is non-linear. Others use non-stationary stochastic processes (ARCH and GARCH models) to deal with the fact that model coefficients change over time. However, RSI, for example, has the advantage of being bounded between 0% and 100%, and if it were made to reflect the distribution (normal or otherwise) of historical values (OB/OS), that would be a major improvement over the absolute indicators currently available, or over the current arbitrary hard-coded OB/OS numbers (i.e. 70) used by Pascal.

IIUC, portfolio factor calibration, unlike trading with stops, is prone to occasional black swans where the fat tail of the distribution isn't properly captured by the model. Add leverage to that, plus a quest for market-beating returns, and you've got a prescription for a blow-up. I'm not sure I see the benefit of RSI over the current OB/OS oscillator, which is also bounded by a fixed range (-100,+100). Is it the non-linear mapping? RSI would also need threshold choices (typically 30/70).

Trader D

senco
02-07-2012, 08:21 AM
The biggest challenge we have is the very short history of data available for backtesting; any idea that would resolve the few occurrences where the model broke down cannot be confirmed with statistical confidence. For this reason, the 20DMF had a couple of tweaks in the past. Were those optimal? We shall probably know only many years from now. (In fact, we trust the model because it makes fundamental sense, not because of a thorough out-of-sample statistical validation; for this we do not have sufficient data points.) Adaptive OB/OS determination makes sense, and it might be more robust than an arbitrary level. Maybe.
So what can one do? On a conceptual level: (a) keep the model as simple as possible – the fewer parameters, decisions and ‘knobs’ there are, the less brittle it will be; (b) introduce additional data points of a somewhat different ilk by incorporating other indicators (for decision, confirmation, or vote); and (c) use several systems/robots for diversification. Well, all ideas mentioned earlier in this thread :-)
On a practical level – there are a number of ‘breadth related’ indicators that could be used quite effectively to (i) identify bottoms – maybe combined with the 20DMF in some voting mechanism, and (ii) identify a bullish state of the market – to get the 20DMF out of a ‘neutral’ state – and/or be used together in either a voting or an allocation mechanism.
By ‘breadth related’ indicators I mean things like: new highs/new lows, volume or issues advance/decline, TICK, TRIN (Arms), number of issues over or crossing a moving average. These days the data for these indicators can be easily accessed in real time - see my comment in the Tradestation thread.
Breadth models try to measure the underlying happenings in the market as the MF does, though in a different way - and they can be used to create good timing models on their own. There is a good possibility that combining them with 20DMF will increase the model's robustness.
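
For reference, a minimal Python sketch of two of these breadth measures (the daily-internals column names are hypothetical; TRIN is the standard Arms formula):

import pandas as pd

def breadth_panel(internals):
    # internals: daily DataFrame with columns 'advances', 'declines',
    # 'adv_volume', 'dec_volume', 'new_highs', 'new_lows'.
    out = pd.DataFrame(index=internals.index)
    # TRIN = (advancers/decliners) / (advancing volume/declining volume);
    # readings below 1.0 are conventionally read as bullish breadth.
    out["trin"] = ((internals["advances"] / internals["declines"]) /
                   (internals["adv_volume"] / internals["dec_volume"]))
    # Cumulative new-highs-minus-new-lows: a slow gauge of market health.
    out["nh_nl_cum"] = (internals["new_highs"] - internals["new_lows"]).cumsum()
    return out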

senco
02-07-2012, 08:38 AM
While at it, a couple of general thoughts:

- Whipsaws: If the losses on whipsaws are small, it is best to look at them as just the cost of doing business. Many systems whipsawed in last year's huge volatility much more than at any time in the last twenty years, simply reflecting the reality of the market... politicians and central banks flip-flopping.

- When modifying / tweaking a model, it is a good idea to keep maintaining full data series (past and future) for both versions, not just discard the old one. In my experience one can still learn from systems abandoned many years ago.

- We are very interested in the Maximum Drawdown of a backtest (and even more so in that of a system we trade live). It is also important to understand that the MDD does not represent the statistics of a trading system's output well (it is the outcome of one specific path in time, out of the many that could have happened). Therefore, MDD is not a good predictor of a system's future drawdown and is not a good measure of a system's risk. The saying “your worst drawdown has not happened yet” indeed has a theoretical basis. When comparing different versions of a system in development, it is much better to use measures with more statistical content, like a rolling-period downward deviation.
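
A minimal sketch of such a rolling downward (downside) deviation in Python (the 63-day window and 0% minimum acceptable return are arbitrary choices here):

import numpy as np
import pandas as pd

def rolling_downside_deviation(daily_ret, window=63, mar=0.0):
    # Root-mean-square of the returns falling below the minimum acceptable
    # return (MAR), over a rolling window; unlike MDD, it uses every data
    # point rather than one worst path.
    downside = np.minimum(daily_ret - mar, 0.0)
    return np.sqrt((downside ** 2).rolling(window).mean())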

TraderD
02-07-2012, 11:29 AM
While at it, a couple of general thoughts:
- Whipsaws: If the losses on whipsaws are small, it is best to look at them as just the cost of doing business. Many systems whipsawed in last year's huge volatility much more than at any time in the last twenty years, simply reflecting the reality of the market... politicians and central banks flip-flopping.


AFAICT, EV as a follower of large money can only be as good as they are. The 2011 performance of large funds was rather dismal (an understatement), so it's reasonable to expect EV's predictive ability to deteriorate. Paul, in a recent GGT post, shows a bunch of examples where, contrary to the LEV divergence, price keeps going in the other direction, which argues for using this technique with other indicator(s) as you've suggested, or at least for a much more precise understanding of where EV does work.



- We are very interested in the Maximum Drawdown of a backtest (and even more so in that of a system we trade live). It is also important to understand that the MDD does not represent the statistics of a trading system's output well (it is the outcome of one specific path in time, out of the many that could have happened). Therefore, MDD is not a good predictor of a system's future drawdown and is not a good measure of a system's risk. The saying “your worst drawdown has not happened yet” indeed has a theoretical basis. When comparing different versions of a system in development, it is much better to use measures with more statistical content, like a rolling-period downward deviation.


Fully agree with MDD comment. Do you have a link to RPDD stat calculations?

Pierre Brodeur
02-07-2012, 11:51 AM
I'm not sure I see the benefit of RSI over the current OB/OS oscillator, which is also bounded by a fixed range (-100,+100).
Trader D

I am not suggesting the use of RSI at all. I am using it as an example in this discussion.

Pascal
02-07-2012, 11:59 AM
AFAICT, EV as a follower of large money can only be as good as they are. The 2011 performance of large funds was rather dismal (an understatement), so it's reasonable to expect EV's predictive ability to deteriorate. Paul, in a recent GGT post, shows a bunch of examples where, contrary to the LEV divergence, price keeps going in the other direction, which argues for using this technique with other indicator(s) as you've suggested, or at least for a much more precise understanding of where EV does work.

Fully agree with MDD comment. Do you have a link to RPDD stat calculations?

EV detects an equilibrium, not a force. It statistically detects when money comes in or out. For a single stock, the movements of money will often be opposite to the price moves, because large players use available liquidity to buy/sell. However, when the price is in a trading range or at a turning point, EV will often show what is happening below the surface and what the next move will be. This is why I always use AB/LER as a combination.

The money flow is, however, more predictive when it is collected by sector or industry group.

Also, large money does not necessarily mean large funds. It could be some artificial FED liquidity injection. Therefore, your statement that EV's predictive ability will deteriorate might or might not be true. I do believe that EV still gives early warnings of what large money is doing. A few years ago, I noted that it could give up to one day of advance warning. Today, because the playing field is pretty level and everybody is running fast computers, the warning time is probably less than a day.

Anyway, I still prefer to know where the money is going than not to know it.


Pascal

Pascal
02-07-2012, 01:35 PM
I am attaching here a general figure of the different models that are in use, together with the results of a back-test campaign run last week using these models.

We can see that the one level that cannot be traded is the sectors level, simply because these are sectors that I have defined myself and for which there is no instrument. These sectors are mainly used either for the 20DMF or for the stock filters.

[Attachment 12687]

Below are the yearly returns of the S&P and the 20DMF. These are returns compounded within one year. If a 20DMF trade overlaps two years, I split it, counting one part in the first year and the rest in the following year. This way, we can have a better comparison.

[Attachment 12685]

The objective of this back-test work was to measure the relevance of trading sector information.
Is it good to buy a sector when it issues a buy signal and to short it when it issues a short signal?

The results are below. We can see that, indeed, sector trading is better than B/H on the S&P 500.
However, sector trading in sync with the 20DMF is better still. Unfortunately, "in sync" does not give better results than the 20DMF itself.

These results are not surprising: they are in line with what I found when I ran a similar back-test two years ago.
My conclusion at that time was that even when a sector is flashing a buy signal, the entry still needs to be as close as possible to a 20DMF buy signal. The later we are after that signal, the worse the returns.

[Attachment 12682]

Therefore, the next idea was to select only five sectors when the 20DMF issued a signal and then, from these sectors, get the AB/LER data for all their stocks and select the five stocks showing the best AB/LER combination. The results of such a test are shown below. These are also in line with the results I had two years ago. The table below shows us that whatever effort we make to select specific stocks, it will be hard to beat a 2x leveraged ETF that trades the 20DMF signals. Of course, specific stock trading might lead to lower DD (I did not calculate such DD.)

[Attachment 12686]

I did, however, another test that gives interesting results.
I selected one date in the past and decided to trade either long or short in intervals of 20 days from that date.
This means that on day one, I buy the five best sectors and sell them on day 20.
The five best sectors are those that show the weakest price RS.
On day 20, I again select the five best sectors and buy them.

On exactly the same day, I also short the five sectors that are the most overbought, and I cover these positions 20 days later.
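
For what it's worth, a minimal Python sketch of this 20-day rotation test (assuming a DataFrame of daily sector closes, and using the trailing 20-day return as a crude stand-in for both the price RS and the overbought rankings):

import pandas as pd

def rotation_test(closes, hold=20, n=5):
    trailing = closes.pct_change(hold)            # trailing 20-day return
    long_rets, short_rets = [], []
    for i in range(hold, len(closes) - hold, hold):
        ranked = trailing.iloc[i].dropna().sort_values()
        fwd = closes.iloc[i + hold] / closes.iloc[i] - 1   # next 20 days
        long_rets.append(fwd[ranked.index[:n]].mean())     # weakest: buy
        short_rets.append(-fwd[ranked.index[-n:]].mean())  # most extended: short
    return pd.Series(long_rets), pd.Series(short_rets)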

The results are shown in the table below.

We can see that this dumb strategy worked well for longs in 2009 and for shorts in 2008.
In 2010 and 2011, it did not work that well.

However, this strategy shows something important: the rotational aspect of the market. It shows that it makes sense to rotate money. This is of course obvious! I still prefer to see it in the data rather than not, because it means that it will probably make sense to develop a set of ETF MFs by industry group and rotate between them.


[Attachment 12683]

I will now start working on these industry group MF models.


Pascal

TraderD
02-07-2012, 01:58 PM
...
I will now start working on these industry group MF models.
Pascal

Pascal, how do you explain the (admittedly anecdotal) observation that test results of models that use 20DMF invariably show 2010/2011 performance to be lower (typically much lower) than 2008/2009 performance? Is there a reason to suspect the market is becoming more "efficient" in arbitraging away the edge attributed to money flow rotation?

Trader D

Pascal
02-07-2012, 02:18 PM
Pascal, how do you explain the (admittedly anecdotal) observation that test results of models that use 20DMF invariably show 2010/2011 performance to be lower (typically much lower) than 2008/2009 performance? Is there a reason to suspect the market is becoming more "efficient" in arbitraging away the edge attributed to money flow rotation?

Trader D

Simple: 2008 and 2009 were trending markets. Crash and reversal from a deep bottom.


Pascal

mingpan.lam
02-07-2012, 04:07 PM
Hi Pascal,

Which model is the GDX robot using in the above table?
Cheers,

Ellis

Pascal
02-07-2012, 05:09 PM
Hi Pascal,

Which model is the GDX robot using in the above table?
Cheers,

Ellis

The second level: Industry group

pascal

xr-3609
02-07-2012, 05:34 PM
WOW! Great work! It looks like we need a new instrument to trade the EV Sectors......maybe a "synthetic EV ETF" based on the best five sectors/stocks when a 20DMF signal comes down (or up)!

Greg

Pascal
02-08-2012, 12:39 AM
While at it, a couple of general thoughts:

- Whipsaws: If the losses on whipsaws are small, it is best to look at them as just the cost of doing business. Many systems whipsawed in last year's huge volatility much more than at any time in the last twenty years, simply reflecting the reality of the market... politicians and central banks flip-flopping.



Whipsaws are a clear limitation of the EV-based models and must be handled with great care.
Indeed, we have often seen EV move in a direction opposite to price on single stocks, because large players take advantage of higher liquidity to buy/sell positions contrary to the price move. However, when a stock/sector is at its lower boundary or oversold, a bounce in EV might indicate real accumulation, because the stock/sector is "cheap".

Hence, when I applied the OB/OS MF model to the 96 sectors that I defined, I noticed that the return was lower than when using the usual, simpler sector model to buy and short. The reason is simply that the sectors behave in a "hectic" manner when in OB/OS. They will switch up/down until they stabilize and move definitively in a new direction. I believe that this is a factor "inherent" to EV, especially on single stocks or on a basket of a few stocks.

However, industry-group and total-market level measures are less prone to EV whipsaws, because their components compensate for each other. For whipsaws to appear at the industry level, we would need the majority of the stocks in the sector to be bought and then sold the next day. This is less likely to occur; hence, OB/OS works better the more stocks there are in the basket.


Pascal

senco
02-08-2012, 02:47 AM
Therefore, the next idea was to select only five sectors when the 20DMF issued a signal and then, from these sectors, get the AB/LER data for all their stocks and select the five stocks showing the best AB/LER combination. The results of such a test are shown below. These are also in line with the results I had two years ago. The table below shows us that whatever effort we make to select specific stocks, it will be hard to beat a 2x leveraged ETF that trades the 20DMF signals. Of course, specific stock trading might lead to lower DD (I did not calculate such DD.)

- Pascal, could you please clarify: Was it five stocks per sector, or a total of five from all sectors? Was it buying when a 20DMF signal is issued and holding until the next short signal, or something else?

- At first blush, I am not sure I would dismiss this based on a comparison to trading leveraged ETFs. Since we can tailor the amount of leverage we take, it is all a matter of risk-reward; depending on the downward volatility, the results could be just ho-hum, very good, or spectacular.

In 2011 many sector rotation systems did not work that well, and numbers like those in the table are not to be sneezed at (especially if it was 25 stocks total). Also, seeing better relative performance in 2011 than in 2010 is intriguing. If it were me, I would check further whether it is just a matter of the beta of the stocks, or whether there is a significant edge here. If you do check the risk (e.g. downward volatility) and it is not higher than the market's, it might be worthwhile to look at hedged results, and also at results obtained with a different timing signal gating entry and exit. For diversification, it would be great to identify added value that is not fully correlated to the 20DMF.



.... The five best sectors are those that show the weakest price RS.
... I also short the five sectors that are the most overbought and I sell these positions 20 days later.

- Could you please clarify the specific selection criteria: For longs, is it weak RS only, or do you look at money flow as well? The timeframe for RS - is it 20 days? For shorts, what is the definition of 'overbought' in this context? ... I am trying to understand how EV is used here, and whether we are looking at simple mean reversion at the sector level.

I have encountered in the past added value for mean reversion of individual stocks within a strong sector, and for longer timeframe sector momentum; this seems to be quite different and intriguing.

Pascal
02-08-2012, 04:11 AM
Pascal,

Although it might prove interesting to include evolving measure(s) of volatility to improve the market timing model, my intuition tells me that it won't make a great difference. Absolute, non-volatility measures of OB/OS do a very good job of estimating value (RSI and STO are absolute measures).

In my humble opinion, the Model needs to be confronted with a new independent variable in what could be a multiple-factor model. Again, an indicator from your friend Billy comes to mind: the slope of a 600-minute MA[$TICK] would do a good job of measuring "the trend", and it is independent because it has nothing to do with volume analysis.

But the point here is not which variable to pick, since you are in a better position than any of us to figure out the best one(s) to select; the point is the addition of other variables.

Pierre Brodeur

Thank you for this interesting discussion on the OB/OS and what other indicator might "fill the gap".
It is good to have experienced traders here and read these suggestions.

I understand that we are focusing on the OB/OS indicator because it missed signaling a bounce by a hair.
But is it the culprit? When I started my trading education, I first read Alexander Elder's book (Come Into My Trading Room), where I found the reference that has helped me the most: the "Hound of the Baskervilles" signal. It occurs when the market moves in an unexpected way, contrary to your indicators. In such a situation you might conclude that something that should have been measured was not.

Will a breadth indicator solve the problem? In the first place, what IS the problem?
Is it the fact that we did not detect a new uptrend, or the fact that we were in a short mode?

First, we need to remember that we are dealing with oscillators for both the upper and the lower panel. This means that signals can be generated either by a natural move of the oscillator below or above the 0 level (for the upper panel) or by a bounce above an oversold level - or by real buying/selling moves.

The real issue with the 20DMF is that there are states from which the indicator is in a "non-recoverable" mode. In such situations, the 20DMF waits in cash until the next cycle starts. For example, today the 20DMF is in cash, waiting for a short signal.

What is really lacking in the 20DMF is a fear measure and its counterpart - the liquidity injection that compensates for the fear. Cumulative ticks could be a good measure of liquidity injection, as liquidity would push all the stocks evenly.
Another measure that is available but not used in the 20DMF is the ATR, the Average True Range. This is a good measure of market fear, as it captures the evolution of the maximum distance between the high, the low, and the previous day's close. When liquidity comes in, the ATR is lowered, as any intraday drop in price is compensated by HFTs pushing prices higher.

This reminds me that two other models use the ATR:

- The GDX OB/OS model always measures the ATR level and inhibits short signals when the ATR is too low.
- The IWM Robot measures the ATR "direction" when the 20DMF is in a "cover your shorts" mode and decides whether the previous trend should be continued or not depending on that specific measure.
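
For reference, a minimal Python sketch of the ATR expressed as a percentage of price, the kind of fear gauge meant here (Wilder's 14-day smoothing is an assumption, not necessarily the models' actual setting):

import pandas as pd

def atr_pct(high, low, close, period=14):
    prev_close = close.shift(1)
    # True range: the max distance between high, low and previous close.
    tr = pd.concat([high - low,
                    (high - prev_close).abs(),
                    (low - prev_close).abs()], axis=1).max(axis=1)
    atr = tr.ewm(alpha=1.0 / period, adjust=False).mean()  # Wilder smoothing
    return 100.0 * atr / close  # e.g. "decreased fear" when this dips below 3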

As we can see in the figure below, we had a short signal on December 15. This was due to the fact that we reverted to a cash position on a fail-safe mechanism, which was followed by a new short signal - on an MF negative state.
However, this short signal occurred in a context of decreased fear (the ATR for IWM falling below 3%.)

This is a path that I'd like to explore.


Pascal




[Attachment 12699]

Pascal
02-08-2012, 05:04 AM
- Pascal, could you please clarify: Was it five stocks per sector, or a total of five from all sectors? Was it buying when a 20DMF signal is issued and holding until the next short signal, or something else?

In sync with the 20DMF means that you buy when the 20DMF issues a buy signal and keep the position until the 20DMF signal changes. The selection process is to first select the five weakest sectors in terms of 20D price RS, take all the stocks in these five sectors, and sort them by AB/LER (take the five closest to LB that show a strong accumulation pattern in terms of LER.) The idea here is to ride the short-covering phase.

On the short side, the selection was also to short the weakest sectors in terms of price RS, and within these sectors, to select the stocks whose AB is closest to UB and whose LER is weakest. This is basically because shorts first go after the weakest stocks that have bounced to their resistance level.
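
A minimal Python sketch of this ranking step (the snapshot columns 'sector', 'rs20', 'ab', 'lb', 'ub' and 'ler' are hypothetical names, and collapsing a boundary distance and LER into one score is a crude simplification):

import pandas as pd

def pick_stocks(snapshot, side, n_sectors=5, n_stocks=5):
    # Keep only the n weakest sectors by average 20-day price RS.
    weakest = snapshot.groupby("sector")["rs20"].mean().nsmallest(n_sectors).index
    pool = snapshot[snapshot["sector"].isin(weakest)].copy()
    if side == "long":   # near the lower boundary, strong LER
        pool["score"] = (pool["ab"] - pool["lb"]).abs() - pool["ler"]
    else:                # near the upper boundary, weak LER
        pool["score"] = (pool["ub"] - pool["ab"]).abs() + pool["ler"]
    return pool.nsmallest(n_stocks, "score")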

This is very different from a CANSLIM approach to stock selection, but the "weakest sectors" selection process is rather risky, because it all depends on the MDM. If the MDM is wrong, you will be wrong-footed in a big way. Also, I believe that after the initial short-covering phase of the first 5 to 10 days after a buy signal, the advantage of targeting the weakest sector might disappear. Therefore, execution timing is rather important, which I do not believe is a strong point of a human trader. A human trader will enter slowly, weigh risk against profits, etc. A large fund will be even more prudent, I believe. However, the market is now mostly traded by machines. These machines trade momentum, volatility, and liquidity, and tend to forget fundamentals (especially over the past few years.)




- Could you please clarify the specific selection criteria: For longs, is it weak RS only, or do you look at money flow as well? The timeframe for RS - is it 20 days? For shorts, what is the definition of 'overbought' in this context? ... I am trying to understand how EV is used here, and whether we are looking at simple mean reversion at the sector level.

I have encountered in the past added value for mean reversion of individual stocks within a strong sector, and for longer timeframe sector momentum; this seems to be quite different and intriguing.

The overbought selection criterion was used only in the context of trading sectors not in sync with the market.
Hence, I showed that buying the weakest sectors every 20 days produced good results in a strong uptrend (2009), and that selling the most overbought sectors every 20 days worked well in a continuously down market (2008). Both strategies worked miserably in 2010 and 2011. This means that we need to work in sync with the market.

This is also the reason why the stock filters must be used in sync with the market direction.



Pascal

adam ali
02-08-2012, 08:25 AM
Pascal,

Does the RT 20DMF signal information go back to 2007, i.e., inception? If so, how many instances were there of the 20DMF exceeding the -70 mark intra-day but closing above it?

Pascal
02-08-2012, 09:53 AM
Pascal,

Does the RT 20DMF signal information go back to 2007, i.e., inception? If so, how many instances were there of the 20DMF exceeding the -70 mark intra-day but closing above it?

I do not know, because the lower panel is an EOD calculation process.
I do not have intraday calculations for that indicator.


Pascal

Adriano
02-08-2012, 09:57 PM
As far as I know, fuzzy logic is well suited to tackling threshold problems, but I don't know how to go any further than this.
http://en.wikipedia.org/wiki/Fuzzy_logic

asomani
02-09-2012, 02:00 AM
A few ideas I'll throw out there to encourage more brainstorming (I apologize if they are impractical or don't seem reasonable, but they are at least worth thinking about, I hope):

-Instead of taking -70 as the 20DMF oversold level, normalize the 20DMF historical values. In other words, take all the 20DMF historical values and split them into 100 buckets or percentiles. This can be done in Excel using the PercentRank function. Then, test what percentile of 20DMF values works best as an oversold level (could be the 15th percentile of values arranged in descending order, for example). This way, you're not looking at 20DMF values on an absolute basis, but, instead on a normalized / relative basis - which self-adapts to the market as more data is collected and the computer puts the data into the buckets.
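
A minimal point-in-time Python equivalent of that PercentRank bucketing (the 252-day warm-up and the 15th-percentile choice are assumptions):

import pandas as pd

def adaptive_os_level(dmf, pct=0.15):
    # At each date, the oversold level is the pct-quantile of all 20DMF
    # history up to that date, so it self-adapts as data accumulates.
    return dmf.expanding(min_periods=252).quantile(pct)

# Usage: oversold today when dmf.iloc[-1] <= adaptive_os_level(dmf).iloc[-1]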

-Consider employing regime-switching. In other words, categorize the market environment into one of six categories, for example:
-low volatility uptrend
-high volatility uptrend
-low volatility downtrend
-high volatility downtrend
-low volatility sideways or not trending
-high volatility sideways or not trending

Detecting when the market is in one of the above environments is the hard part, but, this can be thought about. For example, to assess the volatility profile of the market, one could use a PercentRank of 21-day historical volatility values for the past year (based on IWM or SPY, for instance). To assess whether the market is trending, one could use something like the ADX indicator or Trend Strength Index (search online for the latter). And so on...
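
A minimal Python sketch of such a six-bucket classifier, using the suggested PercentRank of 21-day historical volatility plus a simple 200-day momentum test as a stand-in for ADX / Trend Strength Index (the 0.5 rank split and the +/-5% trend bands are arbitrary assumptions):

import numpy as np
import pandas as pd

def classify_regime(close):
    ret = close.pct_change()
    hv21 = ret.rolling(21).std() * np.sqrt(252)    # 21-day historical vol
    vol_rank = hv21.rolling(252).rank(pct=True)    # PercentRank over ~1 year
    vol = np.where(vol_rank > 0.5, "high_vol", "low_vol")
    mom = close.pct_change(200)                    # crude trend stand-in
    trend = np.where(mom > 0.05, "uptrend",
                     np.where(mom < -0.05, "downtrend", "sideways"))
    return pd.Series([f"{v}_{t}" for v, t in zip(vol, trend)],
                     index=close.index)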

Once you're able to detect the regime of the market effectively, determine which settings for the 20DMF and Robot are best in each regime - so as to optimize risk-adjusted returns within each regime. An obvious problem here will be the lack of a historical data set to work with for each regime (and in general), as I think 20DMF values have only been available since sometime in 2007 (?).

-A nice complementary or potentially confirming indicator to the 20DMF in terms of detecting oversold levels may be the S&P Oscillator. I keep track of the S&P Oscillator for the S&P 500 via MarketTells and have found, at least in my experience, that a reading of -6.5 or below (yes, this is an absolute rather than normalized level...I know) normally coincides with an oversold condition in the 20DMF. The last -6.5 or below reading was on Dec. 19, 2011, when the Oscillator just barely triggered oversold by hitting -6.6 (I believe this was the oversold period that the 20DMF just barely missed seeing as "oversold"). A spreadsheet of historical values can be downloaded from the MarketTells website should you wish to look into this further. Also, the S&P Oscillator can be calculated for other indices (NYSE common-stock-only, Nasdaq Composite, etc....even Russell 2000, providing one has the requisite advance/decline and up/down volume data for that index). MarketTells has it calculated for the S&P 500 and NYSE-common stock-only, I believe. I'm most comfortable using the S&P 500 version, as I find the -6.5 threshold on it particularly useful for detecting oversold conditions.

-Consider keeping track of POMO operations or using Bob's liquidity indicator, so that the 20DMF and/or Robot is able to get an idea if there is a Fed-supported put underneath the market, and thereby perhaps modify how it operates (it may operate more conservatively on the short side and more aggressively on the long side when liquidity is thought to be more than ample, for example). I know this idea has already perhaps been suggested, along with incorporating the $TICK indicator into the Robot somehow. But, I'm repeating it here nonetheless.

-Consider incorporating seasonality into the 20DMF model and/or Robot. I know seasonality is not thought to be a strong indicator, but, it has stood the test of time in some cases - like the end-of-month / beginning-of-month window dressing (last 4 trading days of month ending and first 2-3 days of month beginning) along with the Oct-Apr or Nov-Apr seasonally strong period - for example. An oversold condition that occurs in the early part of the window dressing period or right before the window dressing period often turns out to be a good at least short-term buying opportunity, for instance. Meanwhile, the biggest drops in the market tend to happen between May-Oct/Nov, I believe. Selloffs that start during these months should typically be taken more seriously than selloffs that start in the remainder of the year.

-One would think that the top and bottom of the month can happen at anytime in the month, close to equally. But, I think Michael Stokes at MarketSci did some research showing that the top or bottom of the month happens in the first 7 trading days of the month about 80% of the time. Perhaps this fact (although it needs to be confirmed) would be useful to keep in mind in programming the 20DMF and/or Robot. Maybe there is some good way to take advantage of it.

There is lots more that could be said, but, I must stop here due to time constraints. I hope many more will join in sincerely contributing to this thread.

Rembert
02-09-2012, 05:02 AM
As far as I know, fuzzy logic is well suited to tackling threshold problems, but I don't know how to go any further than this.
http://en.wikipedia.org/wiki/Fuzzy_logic

Regarding fuzzy logic ... instead of passing a long/short/neutral signal to the robot, the 20DMF could perhaps pass a parameter ranging from -100 to 100, with 0 being the most neutral. The robot could then use this parameter in its decision-making process. How that parameter is calculated and how the robot would use it is another matter, of course.
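
One minimal way such a graded parameter could be produced (the tanh shape, the -70 center, and the width of 10 are all assumptions):

import numpy as np

def graded_signal(osc_value, center=-70.0, width=10.0):
    # ~+100 deep below the OS level, ~-100 far above it, and a smooth
    # transition in between instead of a hard flip at the threshold.
    return 100.0 * float(np.tanh((center - osc_value) / width))

The robot could, for instance, scale its position size by this value rather than switching modes all at once.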

Pascal
02-09-2012, 05:21 AM
-Instead of taking -70 as the 20DMF oversold level, normalize the 20DMF historical values. In other words, take all the 20DMF historical values and split them into 100 buckets or percentiles. This can be done in Excel using the PercentRank function. Then, test what percentile of 20DMF values works best as an oversold level (could be the 15th percentile of values arranged in descending order, for example). This way, you're not looking at 20DMF values on an absolute basis, but, instead on a normalized / relative basis - which self-adapts to the market as more data is collected and the computer puts the data into the buckets.

The lower panel of the 20DMF is already normalized between -100 and +100.
Your suggestion is, however, something that I already use to evaluate the OB/OS levels of the GDX_MF. One issue I have, though, is that during 2008/2009, the OB/OS levels were much more extended than those of 2010/2011. Something to study, related to the evolution of long-term volatility.






-Consider employing regime-switching. In other words, categorize the market environment into one of six categories, for example:
-low volatility uptrend
-high volatility uptrend
-low volatility downtrend
-high volatility downtrend
-low volatility sideways or not trending
-high volatility sideways or not trending

Detecting when the market is in one of the above environments is the hard part, but, this can be thought about. For example, to assess the volatility profile of the market, one could use a PercentRank of 21-day historical volatility values for the past year (based on IWM or SPY, for instance). To assess whether the market is trending, one could use something like the ADX indicator or Trend Strength Index (search online for the latter). And so on...

Once you're able to detect the regime of the market effectively, determine which settings for the 20DMF and Robot are best in each regime - so as to optimize risk-adjusted returns within each regime. An obvious problem here will be the lack of a historical data set to work with for each regime (and in general), as I think 20DMF values have only been available since sometime in 2007 (?).



I do have 20DMF data prior to 2007, but the uptick rule for shorts was abolished in July 2007, I believe. Before that date, MF data was skewed to the long side.






-A nice complementary or potentially confirming indicator to the 20DMF in terms of detecting oversold levels may be the S&P Oscillator. I keep track of the S&P Oscillator for the S&P 500 via MarketTells and have found, at least in my experience, that a reading of -6.5 or below (yes, this is an absolute rather than normalized level...I know) normally coincides with an oversold condition in the 20DMF. The last -6.5 or below reading was on Dec. 19, 2011, when the Oscillator just barely triggered oversold by hitting -6.6 (I believe this was the oversold period that the 20DMF just barely missed seeing as "oversold"). A spreadsheet of historical values can be downloaded from the MarketTells website should you wish to look into this further. Also, the S&P Oscillator can be calculated for other indices (NYSE common-stock-only, Nasdaq Composite, etc....even Russell 2000, providing one has the requisite advance/decline and up/down volume data for that index). MarketTells has it calculated for the S&P 500 and NYSE-common stock-only, I believe. I'm most comfortable using the S&P 500 version, as I find the -6.5 threshold on it particularly useful for detecting oversold conditions.

-Consider keeping track of POMO operations or using Bob's liquidity indicator, so that the 20DMF and/or Robot is able to get an idea if there is a Fed-supported put underneath the market, and thereby perhaps modify how it operates (it may operate more conservatively on the short side and more aggressively on the long side when liquidity is thought to be more than ample, for example). I know this idea has already perhaps been suggested, along with incorporating the $TICK indicator into the Robot somehow. But, I'm repeating it here nonetheless.

-Consider incorporating seasonality into the 20DMF model and/or Robot. I know seasonality is not thought to be a strong indicator, but, it has stood the test of time in some cases - like the end-of-month / beginning-of-month window dressing (last 4 trading days of month ending and first 2-3 days of month beginning) along with the Oct-Apr or Nov-Apr seasonally strong period - for example. An oversold condition that occurs in the early part of the window dressing period or right before the window dressing period often turns out to be a good at least short-term buying opportunity, for instance. Meanwhile, the biggest drops in the market tend to happen between May-Oct/Nov, I believe. Selloffs that start during these months should typically be taken more seriously than selloffs that start in the remainder of the year.

-One would think that the top and bottom of the month can happen at anytime in the month, close to equally. But, I think Michael Stokes at MarketSci did some research showing that the top or bottom of the month happens in the first 7 trading days of the month about 80% of the time. Perhaps this fact (although it needs to be confirmed) would be useful to keep in mind in programming the 20DMF and/or Robot. Maybe there is some good way to take advantage of it.

There is lots more that could be said, but, I must stop here due to time constraints. I hope many more will join in sincerely contributing to this thread.

Thank you for all these ideas. I am now working on the ETF-linked MFs, for deployment within February, together with their related RT figures.

Once this is done, I'll come back to reworking the 20DMF.


Pascal

pdp-brugge
02-09-2012, 05:47 AM
Following the discussion in this thread, I feel that I would personally like to pursue the approach of selecting a number of stocks from the best sectors that show the best AB/LER combination.

I would like to backtest that idea.

I only have the EV data since I joined this forum (October 2011).
For my back-test to be reasonably reliable, I would like to get more data than these 4 months.

Is it possible to obtain a file with all the stocks of the PascalA_List with “AB Buy signal”, “Extension Tot EV”, “LER” & “Rating”, and this for the longest period possible?

PdP

Rembert
02-09-2012, 06:00 AM
Following the discussion in this thread, I feel that I would personally like to pursue the approach of selecting a number of stocks from the best sectors that show the best AB/LER combination.

Just my personal opinion, but I wouldn't complicate the robot by selecting individual stocks. Index ETFs are nice and easy. No worries about liquidity, earnings, diversification, etc.

pdp-brugge
02-09-2012, 06:43 AM
Hi Rembert,

It is not my intention to suggest that the robot would use individual stocks.
I am considering, besides trading the robots, also trading a discretionary system.
This discretionary system would use stock picking according to the EV data.

Regards

PdP

Rembert
02-09-2012, 07:21 AM
Ah ok, I understand. Besides a couple of ETF robots, I also trade individual stocks on a discretionary basis, but I don't use any EV concepts for those.

Adriano
02-10-2012, 07:14 PM
Regarding fuzzy logic ... instead of passing a long/short/neutral signal to the robot, the 20DMF could perhaps pass a parameter ranging from -100 to 100, with 0 being the most neutral. The robot could then use this parameter in its decision-making process. How that parameter is calculated and how the robot would use it is another matter, of course.

This is what I was talking about:
http://www.lotsofessays.com/viewpaper/1690480.html

Googling "fuzzy logic and stock market" gives lots of results. Just an idea anyway.

grems8544
02-11-2012, 12:04 PM
This is what I was talking about:
http://www.lotsofessays.com/viewpaper/1690480.html

Googling "fuzzy logic and stock market" gives lots of results. Just an idea anyway.

I have the MATLAB fuzzy logic toolbox and have played with fuzzy math off and on for many years. I'm no expert, but I understand the fundamentals enough to poke around.

The greatest challenge for me is backtesting a fuzzy system. I find it difficult to create a test harness (e.g., known stimulus as the input with predictable output). Without this, I have little confidence in what is considered "normal" behavior versus what is considered outside the normal distribution. While I think fuzzy logic can have a place, especially the "porosity" factors that we employ here, I've never been able to build a winning system based on fuzzy math alone.

In discussing this with a math guru who uses fuzzy systems in control systems: if we view our trading system as a self-contained entity, we have to have some confidence that the manipulations we perform on the data result in a stable system, e.g., one that won't take our equity to the ground (drawdown) in the expectation of achieving higher gains. These constraints are valid, but they steer the system towards standard Euclidean logic and away from the fuzziness that we're intending.

The correct answer is probably somewhere in the middle of both models, but again, without a robust way of testing, it's hard to make the jump with real monies.

As an aside, GGT handles this situation in a different manner. While I maximize on equity to derive a set of coefficients that describe optimal moving averages and rates of change, I "lop off" the top of the equity mountain and try to maximize the area of the plateau where the outside conditions (market variables) do not dramatically change the optimal solution.

Think of it this way ... you have two variables, EMA1 and EMA2. For a given stock price series over the past 2 years, there is a unique combination of EMA1 and EMA2 values which maximize the equity of that system. We could pick EMA1 and EMA2 and use those values, but if the market moves just a tad against us, we could see our equity drop off FAST. This situation would exist if there was a gradual slope in the equity curve as EMA1 was held constant and EMA2 was varied to produce the maximum. If EMA2 goes too far, we could see a "drop off the equity cliff". This sensitivity is very dangerous to our portfolio, and it is why most systems do not work well with crossing MAs.

Instead, ask yourself how much of the mountain top can you "lop off" flat so that a marble rolling around on this new plateau does not "fall off". Of course, you could "lop off" everything until the marble is on flat ground with everything around -- it will never "fall off" the plateau, but then again, you're not making money. But you could "lop off" enough of the mountain to keep you on a higher plateau than any surrounding peak -- and now you're more stable to market conditions if the "optimal" EMA1 and EMA2 are adjusted to the geometric center of this plateau.

This is more or less what GGT attempts to do, and perhaps there is a lesson here for the model here. Not all stocks/ETFs in the GGT system have a solution that is robust -- this is what the metrics on my sheet tell me, but for many, they behave very well.

The GGT coefficients are updated 24/7: every week about 15%-20% of the stock database receives updated numbers (sometimes they change, sometimes they do not), and about 25% of the ETFs get new values. This keeps the backtest data window sliding forward every week on a new basket of stocks, so that the optimization does not get too far from reality.
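
A minimal Python sketch of the "lop off the mountain top" idea (the 10% tolerance is an arbitrary assumption; equity_grid is a hypothetical 2-D array of backtest equities over the EMA1/EMA2 grid):

import numpy as np

def plateau_center(equity_grid, tolerance=0.10):
    # Keep every (EMA1, EMA2) cell within `tolerance` of the peak equity,
    # then return the geometric center of that plateau instead of the
    # single highest (and potentially fragile) cell.
    plateau = equity_grid >= equity_grid.max() * (1.0 - tolerance)
    rows, cols = np.nonzero(plateau)
    return int(round(rows.mean())), int(round(cols.mean()))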

Food for thought ...

Regards,

pgd

Adriano
02-11-2012, 04:33 PM
I have the MATLAB fuzzy logic toolbox and have played with fuzzy math off and on for many years. I'm no expert, but I understand the fundamentals enough to poke around.

The greatest challenge for me is backtesting a fuzzy system. I find it difficult to create a test harness (e.g., known stimulus as the input with predictable output). Without this, I have little confidence in what is considered "normal" behavior versus what is considered outside the normal distribution. While I think fuzzy logic can have a place, especially the "porosity" factors that we employ here, I've never been able to build a winning system based on fuzzy math alone.

In discussion of this with a math guru who uses fuzzy systems with control systems, if we view our trading system as a self-contained entity, we have to have some confidence that the manipulations that we do on the data result in a stable system, e.g., one that won't take our equity to the ground (drawdown) with an expectation that we'll achieve higher gains. These constraints are valid, but they steer the system towards standard Euclidean logic and away from the fuzziness that we're intending.

The correct answer is probably somewhere in the middle of both models, but again, without a robust way of testing, it's hard to make the jump with real monies.


Interesting, thanks. I know MATLAB and I think it's fantastic, but honestly I have never used the fuzzy logic toolbox, only the image processing toolbox and a little bit of the neural nets stuff. With NNs I faced a somewhat similar problem some years ago, but that was for an abstract animation/electronic sound piece, nothing to do with financial stuff. I agree that the porosity issue should be handled well by fuzzy logic, and I wish I could offer a more concrete solution; I just don't have the math knowledge to do it.



As an aside, GGT handles this situation in a different manner. While I maximize on equity to derive a set of coefficients that describe optimal moving averages and rates of change, I "lop off" the top of the equity mountain and try to maximize the area of the plateau where the outside conditions (market variables) do not dramatically change the optimal solution.

Think of it this way ... you have two variables, EMA1 and EMA2. For a given stock price series over the past 2 years, there is a unique combination of EMA1 and EMA2 values which maximize the equity of that system. We could pick EMA1 and EMA2 and use those values, but if the market moves just a tad against us, we could see our equity drop off FAST. This situation would exist if there was a gradual slope in the equity curve as EMA1 was held constant and EMA2 was varied to produce the maximum. If EMA2 goes too far, we could see a "drop off the equity cliff". This sensitivity is very dangerous to our portfolio, and it is why most systems do not work well with crossing MAs.

Instead, ask yourself how much of the mountain top can you "lop off" flat so that a marble rolling around on this new plateau does not "fall off". Of course, you could "lop off" everything until the marble is on flat ground with everything around -- it will never "fall off" the plateau, but then again, you're not making money. But you could "lop off" enough of the mountain to keep you on a higher plateau than any surrounding peak -- and now you're more stable to market conditions if the "optimal" EMA1 and EMA2 are adjusted to the geometric center of this plateau.

This is more or less what GGT attempts to do, and perhaps there is a lesson here for the model here. Not all stocks/ETFs in the GGT system have a solution that is robust -- this is what the metrics on my sheet tell me, but for many, they behave very well.

The GGT coefficients are updated 24/7: every week about 15%-20% of the stock database receives updated numbers (sometimes they change, sometimes they do not), and about 25% of the ETFs get new values. This keeps the backtest data window sliding forward every week on a new basket of stocks, so that the optimization does not get too far from reality.

Food for thought ...

Regards,

pgd

Yes, I know the peak/plateau issue; certainly peak values are not reliable. I also update some parameters of the four trading systems I use myself, two of them being the VIT robots. I do that with AmiBroker before placing a new trade, to get the best position-sizing values. I don't trade stocks at the moment, so this makes it easier for me.

Regards,
Adriano

Pascal
02-13-2012, 04:23 AM
Over the weekend and last week, I applied the GDX model to a few industry groups: XLE, XLI, XLK, XLU and SPY.

The results are below:

[Attachment 12795]

First, let me say that this is "on-going work". There is still further analysis to be done with regard to:
- Draw downs
- Trade statistics
- Correlation
- Stock selection within each industry group
- Complete these tables with XLB, XLY, XLP, XLV

What I did was simply take the OB/OS GDX MF model and apply it "as is" to the different industry groups.
The OB/OS and porosity levels are automatically adapted (using only past data, with more weight on recent history than on very old history).
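
A minimal Python sketch of one way such recency-weighted adaptation could work (the exponential weighting and the one-year half-life are assumptions, not the model's actual scheme):

import numpy as np

def recency_weighted_quantile(values, q, halflife=252):
    # Quantile of `values` (oldest first) with exponentially decaying
    # weights, so recent history counts more than very old history.
    w = 0.5 ** (np.arange(len(values))[::-1] / halflife)  # newest -> weight 1
    order = np.argsort(values)
    cum = np.cumsum(w[order]) / w.sum()
    idx = min(int(np.searchsorted(cum, q)), len(values) - 1)
    return float(values[order][idx])

# e.g. OS level = recency_weighted_quantile(mf_history, 0.15)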

Let me already comment on the first results:

1. In blue, I highlighted the positive returns for 2008 on the 20DMF, SPY, and GDX model groups, while the four other groups were negative. There is one reason for this: data! For the four XL groups, I took the group's composition/weights as of today and applied them back to 2008. However, in 2008/2009 there were 63 changes in the S&P 500, 16 in 2010, and 12 in 2011. With the weights also changing, this means that the older the data, the less reliable the results. For the S&P 500, I manually kept track of all the past changes (I did not do that for the underlying groups.) I also think that for GDX, the index has been very stable for many years, with the larger stocks taking the "bulk" of the index: ABX, GG, KGC, SLW, etc.

2. Because of this and also because 2008/2009 were really exceptional trending years, I prefer to concentrate on the results for the past two years, shown in yellow and green. The yellow represents the total of the past two years, while the green color highlights the difference between the model and the corresponding ETF return.

We can see that:
2.A. The model works well for XLK, XLI, and XLE, and less so for SPY and XLU. I understand why it would not work as well for SPY, which involves all the sectors, while each sub-group is more focused; the movements of money are easier to detect when we analyse each group separately. This, however, does not explain why the model does not act so well with XLU. It might be a lack of volatility in this sub-group, but I have had no time to study the trades in detail to pin this down.
2.B. We should also note that the XLU ETF acted well in 2011, while all other sectors were poor. We indeed had a defensive market in 2011. However, the model could take advantage of the groups' inherent volatility.
2.C. GDX offered the strongest returns. This is also due to the higher volatility and the waves of change that are characteristic of this sector.

3. For the past two years, the XLE model did better than the 20DMF, and the XLI model did almost as well as the 20DMF. This means that there is something to dig into here, with probably the possibility of rotating between industry groups independently of the 20DMF itself.


Pascal

TraderD
02-13-2012, 07:51 AM
2. Because of this and also because 2008/2009 were really exceptional trending years, I prefer to concentrate on the results for the past two years, shown in yellow and green. The yellow represents the total of the past two years, while the green color highlights the difference between the model and the corresponding ETF return.
Pascal

Pascal, this looks like a great start, and it would be interesting to see what the detailed trade stats look like for the various ETFs. I believe that 2008-2011 is a uniquely diverse combination of trending and range-bound market environments, and it would be important to look at all 4 years when scrutinizing performance. My crystal ball, which often acts as a reliable inverse indicator, says we won't see 2008/9 again, which makes me believe that we absolutely should keep an eye on how the models behaved during that time.

Trader D

Pascal
02-14-2012, 05:24 AM
These are the results of the same model applied to the SP500 components.

You will note that the average return is doing better than the SPY return. This simple fact shows that it must be possible at any time to select the three or four best ETFs and trade in/out of a rotation of the best ETFs.
This means: managing a portfolio on these ETFs should produce better returns than trading SPY.

If someone volunteers to do the work, I can prepare a file of the signals evolution for all the ETFs starting in 2007 (Just send me a private mail.)

I still need to analyse all the trades, which I am sure will point to some model weaknesses and further improvements. Once this analysis is completed, we will quickly have the RT graphs for these ETFs with the corresponding trading signals/distance to the next signal.



Pascal

12840

TraderD
02-14-2012, 09:26 AM
These are the results of the same model applied to the SP500 components.
You will note that the average return is doing better than the SPY return. This simple fact shows that it must be possible at any time to select the three or four best ETFs and trade in/out of a rotation of the best ETFs.
This means: managing a portfolio on these ETFs should produce better returns than trading SPY.
I still need to analyse all the trades, which I am sure will point to some model weaknesses and further improvements. Once this analysis is completed, we will quickly have the RT graphs for these ETFs with the corresponding trading signals/distance to the next signal.
Pascal


The 2008 stats look particularly intriguing here. For example, assuming the model is roughly symmetric in its long/short settings, why would it perform much more poorly (on most ETFs) in 2008 (down-trending) than in 2009 (up-trending)? I would guess a closer look at the trades is needed to answer that. Some candidates for inspection: the volatility threshold (criterion for short entry), trade duration, etc.

Trader D

Pascal
02-14-2012, 10:04 AM
The 2008 stats look particularly intriguing here. For example, assuming the model is roughly symmetric in its long/short settings, why would it perform much more poorly (on most ETFs) in 2008 (down-trending) than in 2009 (up-trending)? I would guess a closer look at the trades is needed to answer that. Some candidates for inspection: the volatility threshold (criterion for short entry), trade duration, etc.

Trader D

In fact, you are not totally correct:

First, the model is not symmetric, because the MF is not symmetric and the model follows the MF.
Then, I would say that the model performed better in 2008 than in 2009: it outperformed the market in 2008 and slightly underperformed it in 2009. Both were strongly trending markets in which overbought became more overbought and oversold became more oversold.

This means that with a model built on OB/OS triggers, in a strongly trending market the model will stay on the sidelines after having sold at overbought or covered at oversold, only to see the market continue in the same direction.

Finally, as I wrote yesterday, in 2008/2009 the data was 20% different, both in terms of tickers and in terms of weights. The question is: is a 20% difference in data important or not? I do not know, but I do know that when I switch from a fixed-weight to a capitalization-weighted calculation, the results degrade drastically, to the point that the model becomes impossible to use. The MF calculation method must follow the ETF price calculation method, but in 2008/2009 we had a 20% difference.

That being said, I'll write some code to automatically study all the trades, year by year and ETF by ETF.
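For those who want to attempt the same exercise, a minimal sketch of such a per-year, per-ETF study (the trade-file column names are assumptions and may differ from the real files):

import pandas as pd

def trade_summary(trades: pd.DataFrame) -> pd.DataFrame:
    # Assumed columns: 'etf', 'entry_date', 'return_pct' (percent per trade).
    trades = trades.copy()
    trades["year"] = pd.to_datetime(trades["entry_date"]).dt.year
    return trades.groupby(["etf", "year"])["return_pct"].agg(
        n_trades="count",
        win_rate=lambda r: (r > 0).mean(),
        avg_return="mean",
        total_return=lambda r: ((1 + r / 100).prod() - 1) * 100,
    )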


Pascal

TraderD
02-14-2012, 11:52 AM
In fact, you are not totally correct:
First, the model is not symmetric, because the MF is not symmetric and the model follows the MF.
Then, I would say that the model performed better in 2008 than in 2009: it outperformed the market in 2008 and slightly underperformed it in 2009. Both were strongly trending markets in which overbought became more overbought and oversold became more oversold. This means that with a model built on OB/OS triggers, in a strongly trending market the model will stay on the sidelines after having sold at overbought or covered at oversold, only to see the market continue in the same direction.


Looks like I got it the other way around, if one accepts the notion that an MF-based model should be benchmarked against the general market, since both 2008 and 2009 were one-sided (up or down) in all ETFs during the respective year. So what explains the model's ability to beat all ETFs in 2008 but its failure to outperform in 2009? Is it the point where the OB/OS oscillator decides to give up and jump to the sidelines? Why would it do so earlier in 2009 than in 2008? Devil in the details...



Finally, as I wrote yesterday, in 2008/2009 the data was 20% different, both in terms of tickers and in terms of weights. The question is: is a 20% difference in data important or not? I do not know, but I do know that when I switch from a fixed-weight to a capitalization-weighted calculation, the results degrade drastically, to the point that the model becomes impossible to use. The MF calculation method must follow the ETF price calculation method, but in 2008/2009 we had a 20% difference.

That being said, I'll write some code to automatically study all the trades, year by year and ETF by ETF.
Pascal

I can see how a difference in composition (as well as in market-cap weighting, where it applies) could be the culprit (it's a form of lookahead bias, too). Overall, this looks promising; I'm sure detailed testing will unearth interesting findings.

Trader D

TraderD
02-15-2012, 12:59 PM
This simple fact shows that it must be possible at any time to select the three or four best ETFs and trade in/out of a rotation of the best ETFs.

Pascal, do you mean that the MF-based indicator would drive the rotation in/out of the best ETFs, or a more standard price-based (e.g. momentum, relative strength) indicator? Also, without a rolling time view into the ETF trade stats, I don't see how it is possible to tell at what frequency the rotation would have to be applied to achieve the desired effect.

Trader D

Pascal
02-15-2012, 02:38 PM
Pascal, do you mean that the MF-based indicator would drive the rotation in/out of the best ETFs, or a more standard price-based (e.g. momentum, relative strength) indicator? Also, without a rolling time view into the ETF trade stats, I don't see how it is possible to tell at what frequency the rotation would have to be applied to achieve the desired effect.

Trader D

This is work in progress as of now.
When I get results I will publish them.


Pascal

TraderD
02-15-2012, 04:11 PM
This is work in progress as of now.
When I get results I will publish them.
Pascal
That's understood, looking forward to it.

Trader D

Pascal
02-20-2012, 12:07 PM
Over the past few days, I reworked the ETF MF model so that it automatically detects the OB/OS levels and adapts to changing market conditions.

The table below does not show a notable improvement, but the model itself is now simpler, as there are basically no manually adjustable "knobs" anymore. I also attach the trade data for a three-ETF portfolio. These portfolio data files were built with the help of our fellow member Ellis. I also want to thank everyone for the numerous offers of assistance that I received.

We are still working on a method to select the best ETFs to trade at any time.

One interesting finding: in order to avoid a continuous cash position, there should be no more than three positions traded simultaneously. This means each position should be no less than 1/3 of the portfolio.
This is because the method does not generate enough signals to stay fully invested across more than three positions.
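A minimal sketch of that slot-filling logic, under the assumption that signals arrive as simple per-day buy/sell labels (this is an illustration, not the actual simulator):

def rotate(daily_signals, max_positions=3):
    # daily_signals: iterable of (date, {etf: 'buy'|'sell'|'hold'}) pairs.
    # Free slots on sells first, then fill up to `max_positions` on buys,
    # each position sized at 1/max_positions of the portfolio.
    book = {}  # etf -> portfolio weight
    for date, signals in daily_signals:
        for etf, sig in signals.items():
            if sig == "sell" and etf in book:
                del book[etf]
        for etf, sig in signals.items():
            if sig == "buy" and etf not in book and len(book) < max_positions:
                book[etf] = 1.0 / max_positions
        yield date, dict(book)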



Pascal

12960

Last 2 years Equity curve for 3 ETFs portfolio

12961

Last 2 years Drawdown for 3 ETFs portfolio

12958

Last 4 years Equity curve for 3 ETFs portfolio

12959

Last 4 years Drawdown for 3 ETFs portfolio

12963
4 years Summary Table for 3 ETFs portfolio

12962

TraderD
02-20-2012, 01:42 PM
One interesting finding: in order to avoid a continuous cash position, there should be no more than three positions traded simultaneously. This means each position should be no less than 1/3 of the portfolio.
This is because the method does not generate enough signals to stay fully invested across more than three positions.


Pascal, if I understand correctly, the model average stats refer to a theoretical fully-invested average across all 9 ETFs, which cannot possibly be replicated in reality and may deviate considerably from the stats of a portfolio that holds 1/9th of each ETF during the test period(s).

Question: What selection criteria were used for the 3-ETF test shown?

Thanks,

Trader D

Pascal
02-20-2012, 02:16 PM
Pascal, if I understand correctly, the model average stats refer to a theoretical fully-invested average across all 9 ETFs, which cannot possibly be replicated in reality and may deviate considerably from the stats of a portfolio that holds 1/9th of each ETF during the test period(s).

Question: What selection criteria were used for the 3-ETF test shown?

Thanks,

Trader D

The average return is the return as if you invested 1/9 of your portfolio in each ETF.
In reality, you can easily invest 1/9 of your portfolio in each ETF.

There were no selection criteria for the 3 ETFs. We just took the first three available ETFs that issued a signal.
It is purely random as of now. This random-selection portfolio performs better simply because more money is at work in the market when rotating only 3 ETFs than when investing 1/9 in each of 9 ETFs.

We are now working on a selection process of these ETFs.


Pascal

TraderD
02-20-2012, 05:21 PM
The average return is the return as if you invested 1/9 of your portfolio in each ETF.
In reality, you can easily invest 1/9 of your portfolio in each ETF.


Very well, it's more realistic than I thought it was. It'd be useful to know stats regarding the 9-ETF rotation, e.g. average leverage throughout the 4-year period, longest period(s) when 5 or more (out of 9) ETFs are in cash, etc.



There were no selection criteria for the 3 ETFs. We just took the first three available ETFs that issued a signal.
It is purely random as of now. This random-selection portfolio performs better simply because more money is at work in the market when rotating only 3 ETFs than when investing 1/9 in each of 9 ETFs. We are now working on a selection process of these ETFs.
Pascal

A Monte Carlo series of runs with random 3-ETF selection would be a good way to verify robustness.
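A minimal sketch of such a check, assuming the backtest can be wrapped in a callable that accepts a tie-break function (everything here is illustrative):

import random
import statistics

def monte_carlo(run_simulation, n_runs=1000, seed=1):
    # run_simulation(choice_fn) -> total return of one backtest pass,
    # with choice_fn picking among same-day candidate ETFs at random.
    rng = random.Random(seed)
    results = [run_simulation(lambda cands: rng.choice(list(cands)))
               for _ in range(n_runs)]
    return statistics.mean(results), statistics.stdev(results)

A tight spread of outcomes across the runs would show the 3-ETF result is not an artifact of one lucky selection order.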

$.02,

Trader D

Harry
02-22-2012, 06:49 AM
I have always wondered how the +/-1000 stocks/ETFs that comprise the 20DMF model were/are selected. The reason I ask is: would removing and replacing some of them (say even 10, for a 1% change in the composition) impact the calculations significantly? I know we don't want to backfit, but I wonder about the impact of a different set on the overall 20DMF model. The same question goes for the 4 inverse ETFs: why choose these 4 versus other inverse ETFs?

I am sure these questions were answered long ago when you were building the model.

Pascal
02-22-2012, 11:07 AM
I have always wondered how the +/-1000 stocks/ETFs that comprise the 20DMF model were/are selected. The reason I ask is: would removing and replacing some of them (say even 10, for a 1% change in the composition) impact the calculations significantly? I know we don't want to backfit, but I wonder about the impact of a different set on the overall 20DMF model. The same question goes for the 4 inverse ETFs: why choose these 4 versus other inverse ETFs?

I am sure these questions were answered long ago when you were building the model.

The 20DMF is a sector-based model. When I started working with it, I had about 60 sectors. The selection process was straightforward: whenever I could find at least 5 stocks in the same sector (searching through IBD and Yahoo, and by reading annual reports), I created a sector. Adding or removing a stock from a given sector has almost no bearing on the indicator, because each of the 96 sectors has an equal weight. Therefore, if there are, let's say, 10 stocks in one sector, removing 10 stocks from 10 different sectors would have much less than a 1% impact on the general model. It is only if I started to completely remove or add many sectors that there would be an impact.

For example, yesterday I added about 6 stocks in different sectors (leisure equipment, drugs, etc.). I do that only for sectors that include only five or six stocks, so that no single stock has a strong influence on the sector MF.

With this approach, even though a stock like AAPL moves its own sector MF by almost 80%, its influence on the 20DMF is no more than 1/96.
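A minimal sketch of that equal-weight aggregation (the per-sector MF DataFrame is an assumption):

import pandas as pd

def composite_mf(sector_mf: pd.DataFrame) -> pd.Series:
    # Equal-weight composite: with 96 sector columns, each sector
    # contributes exactly 1/96, no matter how many stocks it holds.
    return sector_mf.mean(axis=1)

# A stock driving 80% of its own sector's MF therefore moves the
# composite by at most 0.80 * (1/96), i.e. under 1% of its scale.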

Regarding the four inverse ETFs, I just took those with the strongest volume. I selected double-inverse ETFs because traders close positions much more quickly on this type of ETF, and hence moves are detected much more quickly than on non-leveraged ETFs.


Pascal

Harry
02-22-2012, 11:59 AM
Thank you for the detailed answer. It also helps explain why the indices can run away from the 20DMF model, since the former can be steered primarily by just a few securities due to weighting.

Pascal
02-24-2012, 03:21 PM
In the past two days, I have been working on finding out why, using the same trading model, GDX performs much better than most other ETFs.

I believe that the answer emerges from the comparison of the two attached figures.

The first one shows the correlation between each ETF's 20D price gain and the weighted price gain calculated using the latest weight/composition of each ETF. Because the weights and compositions were different in the past, we can see that the further back we go, the less correlated the calculated price gain is with the ETF's price gain.

You will note that there are spikes, which I believe correspond to the re-weighting of the ETFs. XLF, in yellow, shows that the managers have been working hard to re-weight an ETF that has been hit by negative news since 2008.

You will note that for GDX there is basically no spike, which means an almost perfect correlation between the data and the measure, and hence a good outcome for the model (at least, that is my explanation).

As a reminder, the EV method measures a fine supply/demand equilibrium. Usually, the overbought level is hit when the imbalance reaches 1.5 to 2% of the total volume. A 10% correlation error for 2008 and 5% for 2009 means that the data error is of the same order as the signal that this type of EV indicator measures.

This also shows that an EV-based indicator is "fragile", as it depends on the accuracy of the measurements! So we'll need to take great care with the data and the weights.
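For reference, a minimal sketch of the diagnostic behind the first figure (all names and windows are assumptions): rebuild the 20D gain from today's component weights and track its rolling correlation with the actual ETF gain:

import pandas as pd

def weight_drift(etf_close: pd.Series, comp_close: pd.DataFrame,
                 weights: pd.Series, gain_window: int = 20,
                 corr_window: int = 60) -> pd.Series:
    # A correlation that decays further back in time flags
    # composition/weight drift between then and today.
    etf_gain = etf_close.pct_change(gain_window)
    rebuilt = (comp_close.pct_change(gain_window) * weights).sum(axis=1)
    return etf_gain.rolling(corr_window).corr(rebuilt)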



Pascal


13039
13038

Pascal
02-26-2012, 10:25 AM
In the past two days, I have been working on finding out why, using the same trading model, GDX performs much better than most other ETFs.

I believe that the answer emerges from the comparison of the two attached figures.

The first one shows the correlation between each ETF's 20D price gain and the weighted price gain calculated using the latest weight/composition of each ETF. Because the weights and compositions were different in the past, we can see that the further back we go, the less correlated the calculated price gain is with the ETF's price gain.

You will note that there are spikes, which I believe correspond to the re-weighting of the ETFs. XLF, in yellow, shows that the managers have been working hard to re-weight an ETF that has been hit by negative news since 2008.

You will note that for GDX there is basically no spike, which means an almost perfect correlation between the data and the measure, and hence a good outcome for the model (at least, that is my explanation).

As a reminder, the EV method measures a fine supply/demand equilibrium. Usually, the overbought level is hit when the imbalance reaches 1.5 to 2% of the total volume. A 10% correlation error for 2008 and 5% for 2009 means that the data error is of the same order as the signal that this type of EV indicator measures.

This also shows that an EV-based indicator is "fragile", as it depends on the accuracy of the measurements! So we'll need to take great care with the data and the weights.



Pascal


13039
13038

You can find below a summary table of the trades for 2010/2011, executed using an identical model applied to all 9 XL(i) ETFs. The XLS file includes all the details of these trades (except for DD). The GDX trade details are also included in the XLS file.

It is interesting to see the following:
- There are more Buy than Short trades, because there is an ATR filter set on short trades.
- Short trades have a winning/losing days ratio lower than 1, even though the trade outcome is in general positive. This means that short trades are more volatile than long ones (see the sketch just below). We might also expect that most of the drawdowns will occur during missed short trades.
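A minimal sketch of that ratio (a per-day P&L series for one trade is assumed):

import pandas as pd

def win_loss_day_ratio(daily_pnl: pd.Series) -> float:
    # Winning days divided by losing days over one trade's life. A ratio
    # below 1 with a positive total P&L means the gains came on fewer,
    # larger days, i.e. a choppier ride.
    wins = int((daily_pnl > 0).sum())
    losses = int((daily_pnl < 0).sum())
    return wins / losses if losses else float("inf")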

You will note that the 2008 and sometimes the 2009 results underperform, but the ETF component weighting data for those two years is not reliable.

All in all, these results point to the possibility of a mainly long ETF rotation trading strategy.
I'll need to check whether the signals overlap much.




Pascal


13051

13052

Rembert
02-28-2012, 06:58 AM
Pascal,

As I understand it, the 20DMF has a failsafe mechanism that makes it go neutral when the two overbought/oversold levels are breached after a signal miss. Have you considered testing what the impact would be on the IWM robot if the 20DMF didn't just go neutral in that scenario but actually switched its signal?

It's just an idea, but maybe this could be a simple solution to the robot getting stuck in the wrong mode.

Regards,
Rembert

Pascal
02-28-2012, 08:30 AM
Pascal,

As I understand it, the 20DMF has a failsafe mechanism that makes it go neutral when the two overbought/oversold levels are breached after a signal miss. Have you considered testing what the impact would be on the IWM robot if the 20DMF didn't just go neutral in that scenario but actually switched its signal?

It's just an idea, but maybe this could be a simple solution to the robot getting stuck in the wrong mode.

Regards,
Rembert

On the long side, this failsafe mechanism on the 20DMF has been triggered only once.
In hindsight, we could certainly say that switching to a Buy signal on January 4, instead of just reverting to neutral, would have put the IWM Robot in Buy mode, and it would still be in that mode, looking to sell on a signal change or a stop-loss breach.


Pascal

Pascal
03-01-2012, 12:54 PM
Over the past few days, our fellow member Ellis has worked on the portfolio simulator using the 9 ETFs (XLI, XLE, XLF, etc.)
The summary table is below.

We tested portfolios of two and three positions. We also tested six trade combinations for each possibility.
B = Buy
B_OS = Buy in Oversold
S = Short
S_OB = Short in Overbought

When, for a given day, there is the possibility to choose one ETF instead of another, the selection is made on the 20D price RS. The first table shows a selection on weak price RS; the second shows a selection on strong price RS. There is not much difference, although weak ETFs produce better returns.
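A minimal sketch of that tie-break, assuming RS is measured as the plain 20-day price change (the actual definition may differ):

import pandas as pd

def pick_by_rs(candidates, close: pd.DataFrame, date, weakest=True):
    # close: daily closes, one column per ETF. The last available 20D
    # change up to `date` ranks the same-day candidates; weakest=True
    # returns the lowest-RS one, mirroring the weak-RS table above.
    rs = close.loc[:date].iloc[-21:].pct_change(20).iloc[-1]
    ranked = rs[list(candidates)].sort_values(ascending=weakest)
    return ranked.index[0]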

You will note that three positions usually produce lower system DD than two positions (this is pretty obvious.)

The annual returns of about 20% are in line with what we can expect of non-leveraged ETFs in sideways markets such as 2010-2011.

A Sharpe ratio above 1 is acceptable for a profitable trading strategy, but it is not exceptional.

The exposure is also important, together with the return/exposure ratio, as this gives you the sort of return you might expect whenever you are invested. It also means that when not invested, you can use the funds for other strategies... or preferably, if you have no edge in a GDX/IWM robot, you might use your funds to trade this S&P500 ETFs based strategy. My preference would go to the two Pink/Green highlighted strategies.

Since the model is now operational, at least on an EOD basis, I will soon publish all the related figures and a summary table that will be updated daily. We might also publish the RT patterns for all these ETFs, but be aware that these could be heavy on the browser.


Pascal

13147

nickola.pazderic
03-01-2012, 02:28 PM
Thanks Pascal, Ellis, and all.

I find it very, very encouraging that so many people have put effort into a sector rotation model.

Since my fingers have been coated with butter for some time now, please make clear the selection/purchase/sell rules wherever the rotation suggestions are published.

Let me know if I can help write such a section.

Best,

Neil Stoloff
03-01-2012, 10:46 PM
All in all, these results point to the possibility of a mainly long ETF rotation trading strategy.

Pascal




Hi Pascal,

If the testing of different sector ETFs is not yet complete, I must ask: Would the robot's performance likely be enhanced if it traded ETFs that are expected to produce long-term excess returns in their own right? If not, then I guess you can skip the lengthy post below. But if so, I'll ask you to suspend your disbelief and consider an idea that occurred to me when I reflected on this fascinating thread. If you can backtest the idea readily, that would be the way to either refute it or validate it.

The overlay that I have in mind would add a contrarian, mean-reversion element to the robot's momentum-based approach. The combined methods would consider investor behavior across time -- from minutes to days to years -- but would require no change in the robot's design. The only change would be in the vehicles used.

Called SweetSpot, the overlay's real-time track record can be found here (http://sweetspotinvestments.com/?page_id=7). SweetSpot's premise and rationale are discussed in detail elsewhere at the same site, and in a paper that was published last year (http://www.naaim.org/files/2011/L2011_The_Abandonment_Metric_NeilStoloff.pdf) (see the paper's abstract for a quick summary).

Like EV, SweetSpot looks at MF, but measures it annually instead of minute by minute. While EV's universe is constructed from the bottom up, SweetSpot's is top down, defined by the non-diversified funds that are available to retail investors. EV trades with the large players who move prices in the short term, while SweetSpot trades against all (mostly retail) investors at a time when they are likely to be making bad long-term trades.

An ideal backtest would look something like this:

1) The universe would include every sector that offers a representative, liquid ETF, and for which sufficient data are available.

2) Looking separately at EV data for 2007, 2008, 2009, 2010, and 2011, sum up each sector's calendar-year TEV, and rank the sectors for each year in ascending order (from most-negative annual MF to most-positive).

3) Beginning with the 2007 rankings, select the top five or six sectors -- the ones that investors essentially abandoned.

4) Adopt a positive long-term view of the selected sectors (defining "long term" as three years).

5) Generate robot signals for the 2007 selections in 2008, 2009, and 2010.

6) Go long on buy signals; stay long on neutral signals; go to cash on sell signals; go long on neutral signals. (Variations would be worth exploring, but don't go short under any circumstances.)

7) Repeat these steps for 2008 (the only other year when returns can be seen for the entire three-year period).

8) Repeat for 2009 (looking at returns in 2010, 2011, and YTD 2012); 2010 (looking at returns in 2011 and YTD 2012); and 2011 (looking at returns YTD 2012).

The trading strategy described in item #6 mimics one that options traders and others employ when they are long-term bullish on an investment while trading it using a short-term timing strategy that generates both buy and sell signals. They act on the buys and ignore the sells. Ernst Tanaka (among others, not including myself) can probably label this strategy and provide some insight.
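A minimal sketch of that signal mapping, reading item #6 as "long on buy, hold through neutral, cash on sell, re-enter on the next neutral" (the signal labels are assumptions):

def long_only_positions(signals):
    position = 0  # 0 = cash, 1 = long
    for sig in signals:
        if sig == "buy":
            position = 1
        elif sig == "sell":
            position = 0              # never short
        elif sig == "neutral" and position == 0:
            position = 1              # re-enter long on neutral
        yield position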

If my thesis is correct:

- The backtest will show absolute and risk-adjusted robot returns that exceed those of the robot using any other vehicles you have tested. This result would be explained by the excess buy-and-hold returns of the selected sectors relative to buying and holding a broad-market-index ETF.

- The average buy-and-hold performance of the selected vehicles will become more robust over time. That is, Year Three will outperform Year Two; and Year Two will outperform Year One. This dynamic may be relevant when deciding which ETFs to trade. For example, if you wanted to limit 2012's trading vehicles to six ETFs, you would give preference to the ones added in 2010[!]. (Overlapping test periods would produce a "portfolio" of about 18 candidate ETFs at any given time. Portfolio changes would occur once a year when new sectors are added and old sectors from three years prior are dropped.)

The Short Side

If the long strategy shows promise, it would be worthwhile to test the short side as well. Short candidates would be the sectors with the strongest annual TEV, found at the bottom of each year's rankings. The strategy would be to go short on sell signals and ignore buys.

Unlike the long side, the short side hasn't been tested in real time. Previous backtesting (of short three-year SweetSpot trades) yielded negative returns, probably due to the market's long-term upward bias. That could change, however, when the short-term robot steps in.

My hope is that you can easily test these ideas using the EV universe and database. If they pass muster, I look forward to a fun thread.

Best,

Neil

Disclosure: I registered as an investment adviser in 2008 after trading SweetSpot privately for a small family office from 1998 to 2007. I don't actively market the program, but even if I did, I would not try to market a hands-off strategy like SweetSpot to this group. On the other hand, I did almost contact you about a year ago when the funny money was driving everyone batty. Do you remember posting that you would walk away from active trading if you could find a reliable yield of 5-7 percent above inflation? SweetSpot's long-term numbers are double that, and you would have heard from me except for the "lumpiness" of the returns. SweetSpot is not a coupon, but I do feel it is worth considering as a "Plan B" for any active trader who may decide that the time has come to move on.

Pascal
03-02-2012, 03:35 AM
Hi Pascal,

If the testing of different sector ETFs is not yet complete, I must ask: Would the robot's performance likely be enhanced if it traded ETFs that are expected to produce long-term excess returns in their own right? If not, then I guess you can skip the lengthy post below. But if so, I'll ask you to suspend your disbelief and consider an idea that occurred to me when I reflected on this fascinating thread. If you can backtest the idea readily, that would be the way to either refute it or validate it.

The overlay that I have in mind would add a contrarian, mean-reversion element to the robot's momentum-based approach. The combined methods would consider investor behavior across time -- from minutes to days to years -- but would require no change in the robot's design. The only change would be in the vehicles used.

Called SweetSpot, the overlay's real-time track record can be found here (http://sweetspotinvestments.com/?page_id=7). SweetSpot's premise and rationale are discussed in detail elsewhere at the same site, and in a paper that was published last year (http://www.naaim.org/files/2011/L2011_The_Abandonment_Metric_NeilStoloff.pdf) (see the paper's abstract for a quick summary).

Like EV, SweetSpot looks at MF, but measures it annually instead of minute by minute. While EV's universe is constructed from the bottom up, SweetSpot's is top down, defined by the non-diversified funds that are available to retail investors. EV trades with the large players who move prices in the short term, while SweetSpot trades against all (mostly retail) investors at a time when they are likely to be making bad long-term trades.

An ideal backtest would look something like this:

1) The universe would include every sector that offers a representative, liquid ETF, and for which sufficient data are available.

2) Looking separately at EV data for 2007, 2008, 2009, 2010, and 2011, sum up each sector's calendar-year TEV, and rank the sectors for each year in ascending order (from most-negative annual MF to most-positive).

3) Beginning with the 2007 rankings, select the top five or six sectors -- the ones that investors essentially abandoned.

4) Adopt a positive long-term view of the selected sectors (defining "long term" as three years).

5) Generate robot signals for the 2007 selections in 2008, 2009, and 2010.

6) Go long on buy signals; stay long on neutral signals; go to cash on sell signals; go long on neutral signals. (Variations would be worth exploring, but don't go short under any circumstances.)

7) Repeat these steps for 2008 (the only other year when returns can be seen for the entire three-year period).

8) Repeat for 2009 (looking at returns in 2010, 2011, and YTD 2012); 2010 (looking at returns in 2011 and YTD 2012); and 2011 (looking at returns YTD 2012).

The trading strategy described in item #6 mimics one that options traders and others employ when they are long-term bullish on an investment while trading it using a short-term timing strategy that generates both buy and sell signals. They act on the buys and ignore the sells. Ernst Tanaka (among others, not including myself) can probably label this strategy and provide some insight.

If my thesis is correct:

- The backtest will show absolute and risk-adjusted robot returns that exceed those of the robot using any other vehicles you have tested. This result would be explained by the excess buy-and-hold returns of the selected sectors relative to buying and holding a broad-market-index ETF.

- The average buy-and-hold performance of the selected vehicles will become more robust over time. That is, Year Three will outperform Year Two; and Year Two will outperform Year One. This dynamic may be relevant when deciding which ETFs to trade. For example, if you wanted to limit 2012's trading vehicles to six ETFs, you would give preference to the ones added in 2010[!]. (Overlapping test periods would produce a "portfolio" of about 18 candidate ETFs at any given time. Portfolio changes would occur once a year when new sectors are added and old sectors from three years prior are dropped.)

The Short Side

If the long strategy shows promise, it would be worthwhile to test the short side as well. Short candidates would be the sectors with the strongest annual TEV, found at the bottom of each year's rankings. The strategy would be to go short on sell signals and ignore buys.

Unlike the long side, the short side hasn't been tested in real time. Previous backtesting (of short three-year SweetSpot trades) yielded negative returns, probably due to the market's long-term upward bias. That could change, however, when the short-term robot steps in.

My hope is that you can easily test these ideas using the EV universe and database. If they pass muster, I look forward to a fun thread.

Best,

Neil

Disclosure: I registered as an investment adviser in 2008 after trading SweetSpot privately for a small family office from 1998 to 2007. I don't actively market the program, but even if I did, I would not try to market a hands-off strategy like SweetSpot to this group. On the other hand, I did almost contact you about a year ago when the funny money was driving everyone batty. Do you remember posting that you would walk away from active trading if you could find a reliable yield of 5-7 percent above inflation? SweetSpot's long-term numbers are double that, and you would have heard from me except for the "lumpiness" of the returns. SweetSpot is not a coupon, but I do feel it is worth considering as a "Plan B" for any active trader who may decide that the time has come to move on.

This idea might be interesting, but it is impossible for me to backtest, as I'd have to build models for all these ETFs, and many of them are illiquid and hence unusable for the EV method.

Pascal

Pascal
03-02-2012, 03:36 AM
Thanks Pascal, Ellis, and all.

I find it very, very encouraging that so many people have put effort into a sector rotation model.

Since my fingers have been coated with butter for some time now, please make clear the selection/purchase/sell rules wherever the rotation suggestions are published.

Let me know if I can help write such a section.

Best,

You probably could: could you think of a simple one-page information layout for the 9 ETFs? What should be included so that everyone has easy access and decisions can be taken quickly?

Thanks



Pascal

nickola.pazderic
03-02-2012, 12:50 PM
Keeping it super simple, I suggest a basic design of this sort:

13176


Actions could be colored: Green/Buy; Yellow/Cash; Red/Sell.

My idea is to make it simple enough so that professional simpletons, like me, can catch the message without a scratch of the scalp.

The wonderful engineering developments, percentages, philosophies, etc., should be provided as links.

I would be more than happy to help write any prose section that describes, for example, the EV logic behind the models.

This is my suggestion; others, of course, may well be superior.

nickola.pazderic
03-02-2012, 02:36 PM
Looking at the suggestion above, I see where some confusion could emerge.

Date/Time and Price-- These should all refer to the time and price recorded when the signal changed.

Neil Stoloff
03-03-2012, 01:29 AM
This idea might be interesting, but it is impossible for me to backtest, as I'd have to build models for all these ETFs, and many of them are illiquid and hence unusable for the EV method.

Pascal

Pascal,

In item #1 of my proposal I recommended limiting the universe to liquid ETFs. Wouldn't that still leave a sizable universe?

For purposes of testing the concept, what if you were given each year's picks and only had to test their performance as robot vehicles? This could be accomplished by using the liquid ETFs that were available as SweetSpot picks in 2007-2012. You would have to make a leap of faith that the abandoned sectors identified by SweetSpot's method for calculating fund flows would be similar in character to those identified by looking at year-by-year MF for EV sectors. To give you a feel for whether such a leap would be reasonable, here's SweetSpot's simple method:

1) The data points are beginning-of-year (BOY) and end-of-year (EOY) sector assets, and sector returns for the year-just-ended.

2) Adjust BOY assets for returns (+% gain or -% loss) to calculate hypothetical EOY assets as if there were no fund flows (MF).

3) Calculate MF by subtracting these hypothetical EOY assets from actual EOY assets.

4) Calculate percentage MF by dividing MF by BOY assets.

5) Enter long-term positions in the sectors with the highest-percentage negative MF in the year-just-ended. (A small numeric sketch of steps 1-4 follows.)
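A small numeric sketch of steps 1-4, with invented values for illustration:

def pct_money_flow(boy_assets, eoy_assets, year_return):
    # year_return as a decimal, e.g. -0.12 for a 12% loss (step 1)
    hypothetical_eoy = boy_assets * (1 + year_return)   # step 2
    mf = eoy_assets - hypothetical_eoy                  # step 3
    return mf / boy_assets                              # step 4

# Example: $100M at BOY, a -12% year, $80M actual at EOY:
# pct_money_flow(100, 80, -0.12) -> (80 - 88) / 100 = -8% (a candidate
# "abandoned" sector if that figure ranks among the most negative).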

I share the view expressed here by Billy and others (most recently Nickola) that we should keep things simple. SweetSpot is nothing if not simple, and its historical excess returns offer the potential to significantly enhance the robot's performance. Wouldn't it be worthwhile to try to find a way to test this potential? For my part, in addition to the publicly available completed SweetSpot trades entered in 2007, 2008, and 2009, I am willing to share proprietary open positions that were entered in 2010, 2011, and 2012.

Neil

Pascal
03-03-2012, 02:00 AM
Pascal,

In item #1 of my proposal I recommended limiting the universe to liquid ETFs. Wouldn't that still leave a sizable universe?

For purposes of testing the concept, what if you were given each year's picks and only had to test their performance as robot vehicles? This could be accomplished by using the liquid ETFs that were available as SweetSpot picks in 2007-2012. You would have to make a leap of faith that the abandoned sectors identified by SweetSpot's method for calculating fund flows would be similar in character to those identified by looking at year-by-year MF for EV sectors. To give you a feel for whether such a leap would be reasonable, here's SweetSpot's simple method:

1) The data points are beginning-of-year (BOY) and end-of-year (EOY) sector assets, and sector returns for the year-just-ended.

2) Adjust BOY assets for returns (+% gain or -% loss) to calculate hypothetical EOY assets as if there were no fund flows (MF).

3) Calculate MF by subtracting these hypothetical EOY assets from actual EOY assets.

4) Calculate percentage MF by dividing MF by BOY assets.

5) Enter long-term positions in the sectors with the highest-percentage negative MF in the year-just-ended.

I share the view expressed here by Billy and others (most recently Nickola) that we should keep things simple. SweetSpot is nothing if not simple, and its historical excess returns offer the potential to significantly enhance the robot's performance. Wouldn't it be worthwhile to try to find a way to test this potential? For my part, in addition to the publicly available completed SweetSpot trades entered in 2007, 2008, and 2009, I am willing to share proprietary open positions that were entered in 2010, 2011, and 2012.

Neil

Neil, let's put this the other way: what sort of data would you need from me in order to test this concept?
I only have a limited set of ETFs with available MF models.

Pascal

Neil Stoloff
03-03-2012, 07:45 AM
Neil, let's put this the other way: what sort of data would you need from me in order to test this concept?
I only have a limited set of ETFs with available MF models.

Pascal

Pascal,

For now I would need a list of the ETFs for which you have available MF models. Then, if there's enough overlap between your list and mine, I would need robot signals by ticker and year for the ETFs that I identify. I would calculate returns to enable an "apples to apples" comparison to other sector-rotation models.

Neil

Pascal
03-03-2012, 07:48 AM
Pascal,

For now I would need a list of the ETFs for which you have available MF models. Then, if there's enough overlap between your list and mine, I would need robot signals by ticker and year for the ETFs that I identify. I would calculate returns to enable an "apples to apples" comparison to other sector-rotation models.

Neil

I only have the 9 XL ETFs (XLI, XLY, etc.) and GDX. That makes 10 ETFs.
What do you mean by "signals by year"? I work on a 20D time frame.


Pascal

Neil Stoloff
03-03-2012, 08:10 AM
I only have the 9 XL ETFs (XLI, XLY, etc.) and GDX. That makes 10 ETFs.
What do you mean by "signals by year"? I work on a 20D time frame.


Pascal


On one hand, ten may not be enough for a meaningful test. On the other, it would be easy to test such a small number. (We may want to look at IWM as #11, even if it's not exactly a sector fund.)

As for "signals by year:" At the beginning of each calendar year, new ETFs would be added and old ones dropped. I suppose a dropped fund could continue to be held until the robot issues a signal to go to cash (if it's not already in cash).

I hope I'm making sense.

Neil

Neil Stoloff
03-04-2012, 09:22 PM
I only have the 9 XL ETFs (XLI, XLY, etc.) and GDX. That makes 10 ETFs.
What do you mean by "signals by year"? I work on a 20D time frame.


Pascal


Good news, Pascal. I believe that I was able to prove my thesis. Before I report, I'd like to clarify my initial request that you (and others) suspend your disbelief.

In your book you stated more than once that when you refer to the assessment of value, you are speaking in terms of trading opportunity. You distinguish this from the fundamental or intrinsic value of a stock, which your book does not address. Similarly, Billy has stated more than once that it is simply not possible to trade on fundamentals.

I'm sure that you are both right in that fundamentals are useless as criteria for entering and exiting short-term trades. The robot cannot consider fundamentals. But the robot CAN trade the vehicles that you choose for it. Please consider the possibility that fundamental factors can inform your choice of trading vehicles in a way that is likely to improve the robot's overall returns without requiring any change in the model.

Refer to your post of 2/14 (subject: "XL Models, continuation"), which is the second-to-last post on p. 4 of the Model discussion. You published back-tested model returns for each of nine sector ETFs, along with corresponding benchmark sector returns. Your focus was on the 2010-11 period.

I ranked the model returns by sector for 2010-11, and then compared that to a ranking of the sectors by their own returns. I found:

- The top-ranked model sector (XLY) was also the top-ranked sector.

- The bottom-ranked model sector (XLF) was also the bottom-ranked sector (suggesting shorting potential).

- The three top-ranked model sectors (XLY, XLE, and XLI) were all among the four top-ranked sectors.


http://sweetspotinvestments.com/wp-content/uploads/model-vs-sectors-2010-11.bmp

Even with such a small sample size, these findings seem to answer my original question: "Would the robot's performance likely be enhanced if it traded ETFs that are expected to produce long-term excess returns in their own right?" The answer is yes.

But how do you identify sectors that are likely to outperform for a period of years? I have suggested a well-supported method with a solid real-time track record (http://sweetspotinvestments.com/?page_id=7) (better than a backtest). Moreover, the method was not my own invention, but was independently suggested by Morningstar and Lipper in a 1998 WSJ article reporting on their research:



Buying selected lagging categories and lightening up on the leading categories is essentially a form of buying low and selling high. While it sounds smart, though, it is tough psychologically. Indeed, investors feel far more comfortable jumping into categories that have done well and bailing out of the laggards. [Ed. note: When it comes to investing, comfort is overrated.]

The result: “People tend to buy particular segments of the market as they are topping out, and they tend to pull out of sectors of the market as they are bottoming,” says Susan Dziubinski, editor of Morningstar’s monthly Morningstar Fund Investor publication. “Investors tend not to have great timing.”

Intrepid bargain hunters might want to look not at funds with big losses, but rather at those categories that have seen the biggest outflows of investor dollars, Ms. Dziubinski suggests. That has been a winning strategy over the years, Morningstar has found…

(See Damato, Karen, "Emerging Markets Trail Rally but May Be Bargains for the Intrepid," Wall Street Journal, New York, Dec. 7, 1998, p. A11; ISSN 0099-9660.)

Sometimes you get lucky and the backtesting is done for you...

Implementing this idea may require tracking -- and entering trades based on -- MF going forward for an assortment of liquid ETFs drawn from more than just these nine sectors. Do you have that capability? That is, are you able to track MF in real time for all sectors in your universe? [The number to trade in any given year will be small, but the universe from which each year's candidates are drawn should be large. SweetSpot's universe is ~100 sectors, many or most of which include at least one highly liquid ETF (depending on how you define "highly liquid").]

Cheers,

Neil

Pascal
03-05-2012, 05:08 AM
Good news, Pascal. I believe that I was able to prove my thesis. Before I report, I'd like to clarify my initial request that you (and others) suspend your disbelief.

In your book you stated more than once that when you refer to the assessment of value, you are speaking in terms of trading opportunity. You distinguish this from the fundamental or intrinsic value of a stock, which your book does not address. Similarly, Billy has stated more than once that it is simply not possible to trade on fundamentals.

I'm sure that you are both right in that fundamentals are useless as criteria for entering and exiting short-term trades. The robot cannot consider fundamentals. But the robot CAN trade the vehicles that you choose for it. Please consider the possibility that fundamental factors can inform your choice of trading vehicles in a way that is likely to improve the robot's overall returns without requiring any change in the model.

Refer to your post of 2/14 (subject: "XL Models, continuation"), which is the second-to-last post on p. 4 of the Model discussion. You published back-tested model returns for each of nine sector ETFs, along with corresponding benchmark sector returns. Your focus was on the 2010-11 period.

I ranked the model returns by sector for 2010-11, and then compared that to a ranking of the sectors by their own returns. I found:

- The top-ranked model sector (XLY) was also the top-ranked sector.

- The bottom-ranked model sector (XLF) was also the bottom-ranked sector (suggesting shorting potential).

- The three top-ranked model sectors (XLY, XLE, and XLI) were all among the four top-ranked sectors.


http://sweetspotinvestments.com/wp-content/uploads/model-vs-sectors-2010-11.bmp

Even with such a small sample size, these findings seem to answer my original question: "Would the robot's performance likely be enhanced if it traded ETFs that are expected to produce long-term excess returns in their own right?" The answer is yes.

But how do you identify sectors that are likely to outperform for a period of years? I have suggested a well-supported method with a solid real-time track record (http://sweetspotinvestments.com/?page_id=7) (better than a backtest). Moreover, the method was not my own invention, but was independently suggested by Morningstar and Lipper in a 1998 WSJ article reporting on their research:



Buying selected lagging categories and lightening up on the leading categories is essentially a form of buying low and selling high. While it sounds smart, though, it is tough psychologically. Indeed, investors feel far more comfortable jumping into categories that have done well and bailing out of the laggards. [Ed. note: When it comes to investing, comfort is overrated.]

The result: “People tend to buy particular segments of the market as they are topping out, and they tend to pull out of sectors of the market as they are bottoming,” says Susan Dziubinski, editor of Morningstar’s monthly Morningstar Fund Investor publication. “Investors tend not to have great timing.”

Intrepid bargain hunters might want to look not at funds with big losses, but rather at those categories that have seen the biggest outflows of investor dollars, Ms. Dziubinski suggests. That has been a winning strategy over the years, Morningstar has found…

(See Damato, Karen, "Emerging Markets Trail Rally but May Be Bargains for the Intrepid," Wall Street Journal, New York, Dec. 7, 1998, p. A11; ISSN 0099-9660.)

Sometimes you get lucky and the backtesting is done for you...

Implementing this idea may require tracking -- and entering trades based on -- MF going forward for an assortment of liquid ETFs drawn from more than just these nine sectors. Do you have that capability? That is, are you able to track MF in real time for all sectors in your universe? [The number to trade in any given year will be small, but the universe from which each year's candidates are drawn should be large. SweetSpot's universe is ~100 sectors, many or most of which include at least one highly liquid ETF (depending on how you define "highly liquid").]

Cheers,

Neil

Neil,


Thank you for your work.

I do not doubt the SweetSpot theory. What I do see, though, is how impractical its implementation would be with the MF method, because the MF method is calculation intensive and requires much preparation time for each ETF.

To build a trading model on one ETF, I need a few years of EV data for each component of that ETF (this means that I cannot do anything for ETFs that are pure derivative products, such as leveraged ETFs or ETFs that track the price of a commodity future, such as GLD). I then need to find the weight of each component and the weighting method of the ETF. After that, it is a matter of number crunching, applying the existing trading model.

Compared to the potential benefits, the work that I'd have to carry out to bring this idea to life does not make this project one of the most attractive.



Pascal

Neil Stoloff
03-05-2012, 10:13 PM
Compared to the potential benefits, the work that I'd have to carry out to bring this idea to life does not make this project one of the most attractive.

Pascal


This is disappointing on more than one level, Pascal. At the beginning of my first post in this thread I asked if using ETFs with a long-term edge would be worth considering for the robot. Your post above would have answered that question, sparing me the time and energy that I put into my later posts.

This must be how Lyndon Johnson felt during the Vietnam war when he kept sending in more soldiers with the hope that those who had died did not die in vain. (Alas, they did.) I won't belabor things, but at least let me see if I can gain a better understanding of this outcome. Perhaps others can pipe up to relieve you as the sole respondent.

Regarding the potential benefits of my idea, let's use your nine sectors as a proxy. The potential (or best possible) benefit would be seen when the best-performing model sectors and the best-performing sectors are the same. Looking at your 2010-11 numbers, the model's average two-year return for all nine sectors is 48.3%, whereas the average of the model's top-three returns is 72.1%. That's a difference of 23.8 percentage points, or a 49.3% greater return (24.6% annualized). Factor in the effects of compounding, and your conclusion makes no sense to me unless you understated the impracticality of implementing the idea (meaning that it's virtually impossible). Honestly, Pascal, I have a feeling -- from all of your replies -- that you gave little or no thought to possible ways that my idea might work, but only to the reasons why it wouldn't. For example:

If the limitations that you cited above do not apply to your nine sector ETFs -- the ones that you have been able to evaluate in detail and are considering for trading -- then my idea might be applied to them in a limited way. Although none of those ETFs are in sectors that are current SweetSpot picks, some of them have been in the past and any of them could be in the future. If and when they are, they might be worth considering for robot trading. Or even better, maybe someone knows of a proven method -- SweetSpot aside -- for ranking the nine sectors at any given time in terms of their long-term prospects. How could it not be worthwhile to explore such a potentially powerful overlay?

Respectfully,

Neil

Pascal
03-06-2012, 01:13 AM
This is disappointing on more than one level, Pascal. At the beginning of my first post in this thread I asked if using ETFs with a long-term edge would be worth considering for the robot. Your post above would have answered that question, sparing me the time and energy that I put into my later posts.


Neil,


I am sorry that you are disappointed. If you go back over our discussion, you will note that I tried to understand what you needed to make your idea work, and I explained that I only had 10 ETFs for which an MF model existed. It is only in your last post that you implied the need for 100 ETFs (because of 100 sectors). Only then did I realize that this was not practical: I simply could not do such work.



This must be how Lyndon Johnson felt during the Vietnam war when he kept sending in more soldiers with the hope that those who had died did not die in vain. (Alas, they did.) I won't belabor things, but at least let me see if I can gain a better understanding of this outcome. Perhaps others can pipe up to relieve you as the sole respondent.

Regarding the potential benefits of my idea, let's use your nine sectors as a proxy. The potential (or best possible) benefit would be seen when the best-performing model sectors and the best-performing sectors are the same. Looking at your 2010-11 numbers, the model's average two-year return for all nine sectors is 48.3%, whereas the average of the model's top-three returns is 72.1%. That's a difference of 23.8 percentage points, or a 49.3% greater return (24.6% annualized). Factor in the effects of compounding, and your conclusion makes no sense to me unless you understated the impracticality of implementing the idea (meaning that it's virtually impossible). Honestly, Pascal, I have a feeling -- from all of your replies -- that you gave little or no thought to possible ways that my idea might work, but only to the reasons why it wouldn't.




Your idea might work, as the two-year example suggests. But in reality you do not need me to apply it to the 10 ETFs. I believe that you have probably gained an edge by merging two ideas into one trading system. We will be supplying Buy/Sell signals and RT data on the 9 ETFs, and it will be up to each trader to select the ETFs he prefers to trade. So I do believe that your selection through the SweetSpot method is just fine.

We cannot incorporate SweetSpot into a trading system ourselves here because of intellectual property rights issues. However, any individual can do that, and I greatly encourage it.

If I come up with some other selection criteria for each ETF, I will post them. For example, we tested a 20D price RS selection criterion, but we might simply trade the three ETFs whose price RS has been the worst for the past 6 months. I did not test that idea, but it would be worth doing.





For example:

If the limitations that you cited above do not apply to your nine sector ETFs -- the ones that you have been able to evaluate in detail and are considering for trading -- then my idea might be applied to them in a limited way. Although none of those ETFs are in sectors that are current SweetSpot picks, some of them have been in the past and any of them could be in the future. If and when they are, they might be worth considering for robot trading. Or even better, maybe someone knows of a proven method -- SweetSpot aside -- for ranking the nine sectors at any given time in terms of their long-term prospects. How could it not be worthwhile to explore such a potentially powerful overlay?

Respectfully,

Neil


I think that it is a good idea. You did the work for the past two years, which is the period for which reliable data is available on the 9 ETFs. So, if anyone wants to check another ranking/selection method for the 9 ETFs, I can supply the set of signals generated for them over the past two years.


Pascal

Pascal
03-06-2012, 01:17 AM
Looking at the suggestion above, I see where some confusion could emerge.

Date/Time and Price-- These should all refer to the time and price recorded when the signal changed.

Thank you Nickola.

I am working on your suggestion.



Pascal

senco
03-08-2012, 12:15 AM
Pascal,
For clarification on the post about portfolio simulation of the 9 ETFs from 3-1-2012:
- >>When for a given day there is the possibility to choose one ETF instead of another, the selection is made on the 20D Price RS.
Does this statement relate to the day of position entry only? My interpretation is "yes": when a position is entered, it is held until a state change, and one does not switch ETFs in the middle of a trade due to 20D RS.

- Two Positions, Three Positions - does this relate to the total number of positions, or to the total number of positions in the same direction?

Intuitively, one could obtain higher gains as well as a better reward/risk by dividing the capital allocated to the strategy by three and allowing up to three positions in each direction. For example, following the 3/6/12 signals, we would have 3 short and 2 long positions. Portfolio-margin accounts will usually make this easy to handle.

Pascal
03-08-2012, 12:25 AM
Pascal,
For clarification on the post about portfolio simulation of the 9 ETFs from 3-1-2012:
- >>When for a given day there is the possibility to choose one ETF instead of another, the selection is made on the 20D Price RS.
Does this statement relate to the day of position entry only? My interpretation is "yes": when a position is entered, it is held until a state change, and one does not switch ETFs in the middle of a trade due to 20D RS.
.

This relates to the day of position entry only.



- Two Positions, Three Positions - does this relate to the total number of positions, or to the total number of positions in the same direction?

This relates to the total number of positions.



Intuitively, one could obtain higher gains as well as a better reward/risk by dividing the capital allocated to the strategy by three and allowing up to three positions in each direction. For example, following the 3/6/12 signals, we would have 3 short and 2 long positions. Portfolio-margin accounts will usually make this easy to handle.

In fact, after Neil's discussion, I noted that there can be many ways to select the best ETFs among the 9, and many possible ways to organise a portfolio.

In such conditions, I believe that the best approach is to provide the trade signals on each ETF independently and let people select those that they prefer.


Pascal

pdp-brugge
03-19-2012, 03:19 AM
It is interesting to see the following:
- There are more buy than short trades, because an ATR filter is set on short trades.
- Short trades have a winning/losing days ratio lower than 1, even though the trade outcome is in general positive. This means that short trades are more volatile than long ones. We might also expect that most of the drawdowns will occur during missed short trades.


Pascal,

This quote comes from a post made on February 26.

Could you disclose the ATR filter you used on the short trades?
Did you consider using an ATR filter on the long trades too?
If yes, what was your conclusion about an ATR filter for the long side?

PdP

Pascal
03-19-2012, 04:46 AM
Pascal,

This quote comes from a post made on February 26.

Could you disclose the ATR filter you used on the short trades?
Did you consider using an ATR filter on the long trades too?
If yes, what was your conclusion about an ATR filter for the long side?

PdP

Sorry, but I need to hold off from responding to model "background" questions for now.
We are working to improve the RT system (bringing in more RT) and it is difficult to go back and forth between the two types of activities (board management and R&D).


Pascal

pdp-brugge
03-19-2012, 05:08 AM
No problem.

Take all the time that you need and make the RT system as good as you can.
This will be of greater advantage to the group!

PdP

pdp-brugge
03-23-2012, 06:04 AM
It is just an idea...
One of the issues with viewing the RT graphs is the impact on browser performance.
Would a solution not be to create a separate webpage for each model?
Now the 20DMF RT and the GDX RT are on one page.
If one (sunny?) day the 9 other RT models for the different S&P sectors are added, that will mean 11 RT graphs.
Maybe a separate page for each RT model...
Just a thought...

PdP

Pascal
03-23-2012, 06:08 AM
It is just an idea...

PdP

Sure!
Each ETF will have its own RT page under the S&P500 tab.
We will also split the 20DMF from the GDX RT.


Pascal

pdp-brugge
03-23-2012, 06:10 AM
Great!
Now the billion dollar question: when???
Just teasing...

TraderD
03-23-2012, 08:05 AM
Great!
Now the billion dollar question: when???
Just teasing...

Why not now?

http://www.effectivevolume.com/content.php?1054-20DMF-Real-Time-View

http://www.effectivevolume.com/content.php?1055-gdx-rt

Trader D

pdp-brugge
03-23-2012, 09:09 AM
Wow!
Is this official?

TraderD
03-23-2012, 11:48 AM
Hi Pascal,

GDX/RT MF dipped to -1.75% on 3/14 (intraday low price was $49.69). Since then MF is making higher highs with price making lower lows. MF just crossed 0% upward and price is slightly over the 3/14 intraday low. Would you interpret the entire period from 3/14 to now as accumulation by large players?

Trader D

Pascal
03-23-2012, 12:01 PM
Hi Pascal,

GDX/RT MF dipped to -1.75% on 3/14 (intraday low price was $49.69). Since then MF is making higher highs with price making lower lows. MF just crossed 0% upward and price is slightly over the 3/14 intraday low. Would you interpret the entire period from 3/14 to now as accumulation by large players?

Trader D

Yes. I believe that these are deep pockets accumulating for a longer term than most of us.

Pascal

Billy
03-23-2012, 12:31 PM
Yes. I believe that these are deep pockets accumulating for a longer term than most of us.

Pascal

I would also interpret the MF as detecting "smart" large players starting to cover short positions from the 3/14 intraday oversold level, and accumulating LT positions yesterday and today with the cross over the average MF.
Billy

TraderD
03-23-2012, 12:38 PM
Yes. I believe that these are deep pockets accumulating for a longer term than most of us.
Pascal

Thanks, Pascal. That immediately begs the question - if the large players are gearing up their positions for a longer-term uptrend than our swing trading intent, wouldn't that subject us to more whipsaws and being knocked out of positions while the large players enjoy the luxury of optimizing their positions' average costs? What else could we possibly do to circumvent that?

Trader D

Billy
03-23-2012, 01:03 PM
Thanks, Pascal. That immediately begs the question - if the large players are gearing up their positions for a longer-term uptrend than our swing trading intent, wouldn't that subject us to more whipsaws and being knocked out of positions while the large players enjoy the luxury of optimizing their positions' average costs? What else could we possibly do to circumvent that?

Trader D

Doron,
Why do you think it would be different this time? It always happens that way, and the models have been developed with backtesting and statistical observation of how it happened in the past. No model can be successful all of the time, and some disappointing losing trades will always come along the way. But objectivity, for optimal LT performance, forces us to follow the model rules rather than guess at, or fear, subjective and emotional interpretations.
Billy

TraderD
03-23-2012, 01:12 PM
Doron,
Why do you think it would be different this time? It always happens that way, and the models have been developed with backtesting and statistical observation of how it happened in the past. No model can be successful all of the time, and some disappointing losing trades will always come along the way. But objectivity, for optimal LT performance, forces us to follow the model rules rather than guess at, or fear, subjective and emotional interpretations.
Billy

Billy, I'm with you on that one, of course. My question merely highlights the observation (made by Pascal) that the underlying theme the EV models follow has a longer-term perspective, and that simply "reacting faster" can backfire in the form of excess whipsaws. I'm not suggesting that there is a better solution, just that what's being done now is sub-optimal from a timeframe perspective. I hope that makes sense.

Trader D

Billy
03-23-2012, 01:20 PM
Billy, I'm with you on that one, of course. My question merely highlights the observation (made by Pascal) that the underlying theme the EV models follow has a longer-term perspective, and that simply "reacting faster" can backfire in the form of excess whipsaws. I'm not suggesting that there is a better solution, just that what's being done now is sub-optimal from a timeframe perspective. I hope that makes sense.

Trader D

If you aim at a longer-term perspective, it's probably better to focus on the EOD robot and not to worry about its occasional divergences from the RT model, which can be more "noisy" and prone to whipsaws.
Billy

Rembert
05-11-2012, 02:02 AM
For the official trade records of the IWM/GDX models ... do you have a record of the initial stop for each trade?
If so, could you share a list of the trades including buy/short price, sell/cover price and initial stop?

I would like to do some calculations to get a better idea of how much of my account I should risk on each trade.
I'll share my findings with the group, as the question of position sizing has come up a few times lately.

Thanks,
Rembert

Pascal
05-11-2012, 04:22 AM
Rembert,


I do not keep track of the initial stops that were used in the trades. These stops depend on the volatility.


Pascal

Rembert
05-11-2012, 04:43 AM
Ok. If by chance there's anyone who has kept track of it ... I would appreciate it if you could share this info.

pdp-brugge
05-11-2012, 05:13 AM
Rembert,

I attach an Excel file that I once received from Shawn Molodow.
I have simplified it and have kept track of the robot settings each day since their start.
A few days are missing; those were days when I was not able to record the data.

Good luck! (I have a feeling that you, like me, are a Dutch speaker.)

PdP

14190

Rembert
05-11-2012, 06:00 AM
Thanks!

Rembert
05-11-2012, 02:23 PM
Here's the IWM data ...

Some notes :


The first few trades are missing because I don't have the initial stop-loss data.
Transaction costs are not taken into account.
Data is collected from various sources; I cannot guarantee there are no mistakes in it.


http://alfa.ddns.net:1084/images/iwm.jpg

For each trade I've calculated the R value. R is a measure of risk versus reward: it indicates how much you gained or lost on the trade compared to how much initial risk you took. For example, a trade with an end result of 2R made twice as much as you risked on the trade. More info on this concept here : http://www.iitm.com/sm-risk-and-r-multiples.htm

Next I've calculated some KPIs based on these R values that give an overview of the system.

Expectancy shows us the average R value of all trades. In other words what to expect from a new trade based on the history. More info on this concept here : http://www.iitm.com/sm-Expectancy.htm

The standard deviation shows us how much a trade on average deviates from the expectancy. The lower this number, the more consistent a system is considered to be.

SQN (System Quality Number) is a proprietary measure of the quality of a trading system as developed by Dr. Van Tharp. SQN measures the relationship between the mean (expectancy) and the standard deviation of the R-multiple distribution generated by a trading system. The better the SQN, the easier it is to use various position sizing strategies to meet one’s objectives.

http://alfa.ddns.net:1084/images/sqn.jpg
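For anyone who wants to reproduce these statistics, here is a minimal sketch in Python. The SQN formula used, sqrt(N) * expectancy / stdev(R), is the commonly published form of Tharp's proprietary measure, and the sample trades at the bottom are made up for illustration:

import math

def r_multiple(entry, exit_price, stop, short=False):
    # Profit of one trade expressed in units of the initial risk (R).
    risk = abs(entry - stop)                       # initial risk per share
    pnl = (entry - exit_price) if short else (exit_price - entry)
    return pnl / risk

def expectancy(rs):
    return sum(rs) / len(rs)                       # average R per trade

def stdev(rs):
    m = expectancy(rs)
    return math.sqrt(sum((r - m) ** 2 for r in rs) / (len(rs) - 1))

def sqn(rs):
    return math.sqrt(len(rs)) * expectancy(rs) / stdev(rs)

# Three made-up long trades: (entry, exit, initial stop)
trades = [(80.0, 84.0, 78.0), (81.0, 80.0, 79.0), (79.0, 77.5, 77.0)]
rs = [r_multiple(e, x, s) for e, x, s in trades]
print(rs)                  # [2.0, -0.5, -0.75]
print(expectancy(rs))      # 0.25
print(sqn(rs))             # ~0.28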

I started this exercise to look at what position sizing strategy would be appropriate for the IWM robot. Based on the trade history, the reality is that the expectancy is negative, which also translates into a negative SQN. Tharp advises against trading systems with an SQN below 1, so no position sizing strategies or guidelines apply.

I sincerely hope the IWM robot evolves into a great system. For the moment I have to consider either stopping trading the robot or using minimal position sizing. I'll continue to monitor the statistics going forward and will re-evaluate when there is an improvement.

Pascal
05-11-2012, 02:37 PM
Here's the IWM data ...

Thank you for this analysis, Rembert.


We have all noted for some time that the IWM Robot was not performing well.
No need to make a sophisticated analysis.

I have adapted the 20DMF direction model and the LT/ST edges in order to see if this situation can somehow improve.

We will give it a few more months to decide on what to do. By July/August we will have one year of experience, and then we will see whether it is worthwhile to continue or not. We might switch to the simpler MF RT models, with no specific entries/exits, but nothing has been decided.


Pascal

Rembert
05-11-2012, 02:56 PM
We have all noted for some time that the IWM Robot was not performing well. No need to make a sophisticated analysis.

I'm doing this analysis for all the systems I have in use, including my discretionary trading, to have a better understanding of them in general, and for position sizing.


I have adapted the 20DMF direction model and the LT/ST edges in order to see if this situation can somehow improve.

I appreciate your efforts to improve the robot. Let's hope it makes an impact. No doubt the 20DMF protection, which keeps it from getting stuck in the wrong mode, is an important one.

Harry
05-12-2012, 05:33 AM
Rembert, thank you for posting, very informative.

I am wondering if you performed a similar analysis for the GDX Robot?

Rembert
05-12-2012, 08:39 AM
Rembert, thank you for posting, very informative.

I am wondering if you performed a similar analysis for the GDX Robot?

Not yet for GDX. Next week probably.

Pascal
05-12-2012, 10:11 AM
Not yet for GDX. Next week probably.

For your convenience, I attach a file with the GDX trade records.
Since inception, the EOD GDX Robot produced a return of 13.98%.
The EOD combined with the RT GDX MF with all the signals produced 25.10%.
The EOD combined with the RT GDX MF with only the strong signals produced 32.33%.

These results are not exceptional, but they are not catastrophic either (buy and hold produced a return of -28.06%).

This trading environment is very challenging to say the least.



Pascal

14197

Harry
05-14-2012, 07:51 AM
Dear Pascal,

Thank you for the updated trades and thoughts. Speaking for myself, an RT model is not much use unless an RT email alert system is developed. Even so, it may still be of no use, as I spend a large part of my day away from the desk.

If time determines the RT system to be the way to go, I wonder if you have considered using IB or Tradestation and setting up an account where others can link to and mirror your account's trades? With IB I believe you can earn a % of profits (not sure about Tradestation). Anyway, this may be an option for those not near the PC during market hours.

Rembert
05-16-2012, 12:16 PM
Let's have a look at the GDX robot. For an explanation of some of the terms used here, I refer to my IWM robot analysis on the previous page. For some EOD trades I don't have the initial stop data, so I replaced those with an average stop as calculated from PdP's Excel file. For the real-time models I did the same but used the average over all trades. This means the results will not be entirely accurate, so take them with a grain of salt. I think the general picture will not be too far off regardless.

Let's start with the EOD model ...

http://alfa.ddns.net:1084/images/GDX_EOD.JPG

Overall the numbers are an improvement compared to IWM. Based on the trade history, GDX seems to have a positive (albeit small) expectancy and a low standard deviation, resulting in an SQN of about 1.65 (see previous post for the SQN interpretation table).

The GDX model rules ensure that a position is often closed before the full R1 stop is hit or a high R-multiple of profit is reached. This translates into a low standard deviation, helping the model be more consistent. However, the large initial stops make it hard to get a high R profit, resulting in a lower expectancy.

What about position sizing? Well, that's up to each individual. As a guideline, one can have a look at these tables using the SQN number. Keep in mind these drawdowns are for the entire account and come on top of any other mechanical/discretionary trading you might be doing.

http://alfa.ddns.net:1084/images/DD.JPG

The compound return of the EOD GDX is 9.75% when putting one's entire account, unleveraged, into each trade. Of course, in real life that's not how it works, because taking that much risk on each trade carries a huge risk of ruin and can be called gambling at best.

Let's say you take a reasonable 1% account risk per trade. Since the model made 2.90R in total, you would have made 2.90%. Combined with the IWM model's -5.25R at 1% account risk per trade, that would translate into a combined loss of 2.35%.
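As a small sketch of that arithmetic (assuming fixed-fractional sizing and no compounding; shares_for_risk below is an illustrative helper, not a robot rule):

def simple_return(total_r, f=0.01):
    # Approximate non-compounded account return: total R times risk fraction f.
    return total_r * f

def shares_for_risk(account, entry, stop, f=0.01):
    # Share count such that hitting the initial stop loses about f of the account.
    return int((account * f) / abs(entry - stop))

gdx = simple_return(2.90)                        # +2.90%
iwm = simple_return(-5.25)                       # -5.25%
print(round(100 * (gdx + iwm), 2))               # -2.35 (combined, in %)
print(shares_for_risk(100000, 50.0, 48.0))       # 500 shares on a $100k account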

I realise I'm stating the obvious here, but this is kind of disappointing, especially since I like the concept and believe it can work. Another thing to note is the small sample of trades we have at the moment, which might not be a true representation of the robot's capabilities across different market regimes. That being said, I can't say the robots are a viable investment for me at this stage. I would like to see an improvement in the models' KPIs first.

To end on a positive note let's have a look at the GDX RT results ...

RT (strong signals only) shows some promise, with better results compared to the EOD model. RT with all signals performs about on par with EOD.

http://alfa.ddns.net:1084/images/GDX_RT_SS.JPG

http://alfa.ddns.net:1084/images/GDX_RT_AS.JPG

mklein9
05-16-2012, 03:07 PM
I couldn't agree with you more, Rembert. Great work and well stated. I also have a strong belief in the underlying concept of EV, but have had issues with the implementation recently. It seems to me that the effort spent adding complexity in the layers on top of EV should instead go toward increasing the signal-to-noise ratio of the EV signal itself, so that those added layers become unnecessary. The primary issue I believe the development team is dealing with is absolutely fundamental: insufficient signal and insufficient samples to provide reliable indications, as judged by statistical measures of robustness. Maybe the RT approach provides some of that, but like some others here I cannot take advantage of RT.

-Mike

asomani
05-16-2012, 04:38 PM
Thank you very much, Rembert. The information you provided is quite helpful. I'm always keen to read your thoughts, as well.

Pascal
05-16-2012, 05:28 PM
Let's have a look at the GDX robot ...

Thank you, Rembert. This is great work.



Pascal

Rembert
05-17-2012, 01:57 AM
Nice to hear you guys liked my post. Regarding RT signals: as some members have mentioned, for many (including me) it's near impossible to execute trades in real time. I don't know if it would hurt performance, but a possible compromise could be to send an e-mail alert about 20 minutes before the close with the RT status at that moment. Members could then place a market-on-close order with their broker if action is needed. This would also eliminate many potential intraday whipsaws by the RT system.
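A toy sketch of that compromise, assuming a 16:00 ET close; get_rt_state() and send_email() are hypothetical placeholders for whatever the site would actually use:

import datetime as dt

ALERT_TIME = dt.time(15, 40)       # about 20 minutes before a 16:00 close

def maybe_alert(now, last_state, get_rt_state, send_email):
    # Check the RT state once near the close; alert only on a change so that
    # members can place a market-on-close order with their broker.
    if now.time() >= ALERT_TIME:
        state = get_rt_state()
        if state != last_state:
            send_email("RT state changed to %s; consider an MOC order" % state)
        return state
    return last_state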

Pascal
05-17-2012, 05:13 AM
I believe that it is important to see how the model operates in today's environment.
Below is an MF figure for the past 500 days. Please focus on the distance between signals. You can see that on the right of the figure many signals have been issued: the model is whipsawing. This tells us that the model does not react well in this environment, in which cheaper PM miners attract more money because of a "safe haven" feeling about gold-related investments, but then fall fast the next day when gold reacts negatively to a US$ bounce.

14263

I believe that the model will work again in a more appropriate environment. However, today's environment is what we are faced with. (Tuning the model to use a longer time frame would fix the most recent whipsaws, but would destroy past performance.)

Turning to the RT model, we can see even more whipsaws. The model keeps us from being on the wrong side of the trade, but the many whipsaws make execution almost impossible, even for me. We also all know that whipsaws can kill a portfolio, especially when one uses leverage.

14264

When we look at the average returns of the normal "Buy" trades from the GDX MF, we can see that for the first days of trading the drawdown is substantial while the return is almost nil. The figure shown below tells us to "give room to the trade." However, the RT model works on the opposite principle: it says "change the position because there is an MF directional change." Since December 22, the EOD strategy produced a loss of 4%, while the RT strategy produced a gain a little higher than 9%, but with 2.5 times more trades, whose execution itself might be an issue.

14266

When we now look at the Buy Oversold signals, it is clear that these are better signals, with quicker payback and lower DD. However, we have not had such a signal since the middle of last year. A pretty safe strategy, but hardly one that meets the needs of an "active trader" like me.

14262

This is the reason why I am sticking to the strategy explained earlier: at each normal buy signal, I buy a 2014 LEAP using 10% of a normal position. This allows me to "give room to the trade" while limiting the risk to a small position. The danger is that if the market continues to whipsaw, I would be forced to buy new LEAPs at every (cheaper) buy signal and end up building a losing position, which could stay against me if the sector continues to be negative for another 18 months, which I doubt it will.

Yesterday, while I was not at my desk, the RT system issued a buy signal. Maybe time for another LEAP, if I can get it cheap today.

14265

As a last word: you might have noted that most XLX models produce better trades when the signal is "Buy Oversold".
One strategy could be to buy one position on a buy-oversold signal from an XLX ETF and two positions when this signal is in the same direction as the 20DMF, but to trade short only when the 20DMF itself issues a short signal.
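A hypothetical sketch of that overlay (the signal strings are illustrative encodings, not the actual model outputs):

def position_units(etf_signal, dmf_signal):
    # One unit on a sector "buy oversold" signal, two when the 20DMF agrees;
    # short (one unit) only when the 20DMF itself is short.
    if etf_signal == "buy_oversold":
        return 2 if dmf_signal == "buy" else 1
    if etf_signal == "short" and dmf_signal == "short":
        return -1
    return 0                                   # otherwise stand aside

print(position_units("buy_oversold", "buy"))   # 2
print(position_units("short", "neutral"))      # 0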

For such a strategy, an RT e-mail alert system seems appropriate, with a confirming e-mail 20 minutes before the close.

Anyway, something to think about.


Pascal