Thread: Model discussion

  1. #1
    Quote Originally Posted by senco View Post
    - Pascal, could you please clarify: Was it five stocks per sector, or total of five from all sectors? Was it buying when a 20DMF signal is issued, and holding until next short, or something else?
    "In sync with the 20DMF" means that you buy when the 20DMF issues a buy signal and keep the position until the 20DMF signal changes. The selection process is first to pick the five weakest sectors in terms of 20D price RS, then take all the stocks in these five sectors and sort them by AB/LER, keeping the five closest to LB that show a strong accumulation pattern in terms of LER. The idea here is to ride the short-covering phase.

    On the short side, the selection was also to short the weakest sectors in terms of price RS, and within these sectors, to select the stocks whose AB is closest to UB with the weakest LER. This is basically because shorts will first go after the weakest stocks that have bounced to their resistance level.

    This is very different from a CANSLIM approach to stock selection, but the "weakest sectors" selection process is rather risky, because it all depends on the MDM. If the MDM is wrong, you will be wrong-footed in a big way. Also, I believe that after the initial short-covering phase of the first 5 to 10 days following a buy signal, the advantage of targeting the weakest sectors might disappear. Therefore, execution timing is rather important, which I do not believe is a strong point of a human trader. A human trader will enter slowly, weigh risk against profit, etc. A large fund will be even more prudent, I believe. However, the market is now mostly traded by machines. These machines trade momentum, volatility and liquidity, and they tend to forget fundamentals (especially over the past few years).
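
    As a rough illustration of the long-side selection described above (not Pascal's actual code), here is a minimal Python sketch; the DataFrame columns 'sector', 'rs_20d', 'dist_to_lb' and 'ler' are hypothetical stand-ins for the 20D price RS, the distance of AB to LB, and the LER values:

    ```python
    import pandas as pd

    def select_longs(stocks: pd.DataFrame, n_sectors: int = 5, n_stocks: int = 5) -> pd.DataFrame:
        # Rank sectors by average 20-day price RS and keep the weakest ones.
        sector_rs = stocks.groupby("sector")["rs_20d"].mean()
        weakest = sector_rs.nsmallest(n_sectors).index

        # Within those sectors, favour stocks closest to the lower band (LB)
        # that still show a strong accumulation pattern (high LER).
        candidates = stocks[stocks["sector"].isin(weakest)]
        candidates = candidates.sort_values(["dist_to_lb", "ler"], ascending=[True, False])
        return candidates.head(n_stocks)
    ```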


    Quote Originally Posted by senco View Post
    - Could you please clarify the specific selection criteria: For longs, is it weak RS only, or you look at money flow as well? The timeframe for RS - is it 20 days? For shorts, what is the definition of 'overbought' in this context? ... I am trying to understand how EV is used here, and whether we are looking at simple mean reversion at the sector level.

    I have encountered in the past added value for mean reversion of individual stocks within a strong sector, and for longer timeframe sector momentum; this seems to be quite different and intriguing.
    The overbought selection criterion was used only in the context of trading sectors not in sync with the market.
    Hence I showed that buying the weakest sectors every 20 days was a strategy that produced good results in a strong uptrend (2009), and that selling the most overbought sectors every 20 days worked well in a continuously down market (2008). Both strategies worked miserably in 2010 and 2011. This means that we need to work in sync with the market.

    This is also the reason why the stock filters must be used in sync with the market direction.



    Pascal

  2. #2
    Pascal,

    Does the RT 20DMF signal information go back to 2007, i.e., inception? If so, how many instances were there of the 20DMF exceeding the -70 mark intra-day but closing above it?

  3. #3
    Quote Originally Posted by adam ali View Post
    Pascal,

    Does the RT 20DMF signal information go back to 2007, i.e., inception? If so, how many instances were there of the 20DMF exceeding the -70 mark intra-day but closing above it?
    I do not know, because the lower panel is an EOD calculation process.
    I do not have intraday calculations for that indicator.


    Pascal

  4. #4

    fuzzy logic

    As far as I know, fuzzy logic is well suited to tackling threshold problems, but I don't know how to go any further than this.
    http://en.wikipedia.org/wiki/Fuzzy_logic

  5. #5
    A few ideas I'll throw out there to encourage more brainstorming (I apologize if they are impractical or don't seem reasonable, but they are at least worth thinking about, I hope):

    -Instead of taking -70 as the 20DMF oversold level, normalize the 20DMF historical values. In other words, take all the 20DMF historical values and split them into 100 buckets or percentiles. This can be done in Excel using the PercentRank function. Then, test what percentile of 20DMF values works best as an oversold level (could be the 15th percentile of values arranged in descending order, for example). This way, you're not looking at 20DMF values on an absolute basis, but, instead on a normalized / relative basis - which self-adapts to the market as more data is collected and the computer puts the data into the buckets.
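
    A rough Python equivalent of the PercentRank idea, assuming `dmf` is a pandas Series of historical 20DMF values; the 15th-percentile cut-off is only an example, as in the text:

    ```python
    import pandas as pd

    def is_oversold(dmf: pd.Series, pct_cutoff: float = 0.15) -> bool:
        # Percentile rank of the latest reading within the full history (0 = lowest value).
        latest_rank = dmf.rank(pct=True).iloc[-1]
        # Oversold when the latest reading sits in the bottom `pct_cutoff` of all values.
        return latest_rank <= pct_cutoff
    ```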

    -Consider employing regime-switching. In other words, categorize the market environment into one of six categories, for example:
    -low volatility uptrend
    -high volatility uptrend
    -low volatility downtrend
    -high volatility downtrend
    -low volatility sideways or not trending
    -high volatility sideways or not trending

    Detecting when the market is in one of the above environments is the hard part, but, this can be thought about. For example, to assess the volatility profile of the market, one could use a PercentRank of 21-day historical volatility values for the past year (based on IWM or SPY, for instance). To assess whether the market is trending, one could use something like the ADX indicator or Trend Strength Index (search online for the latter). And so on...

    Once you're able to detect the regime of the market effectively, determine which settings for the 20DMF and Robot are best in each regime - so as to optimize risk-adjusted returns within each regime. An obvious problem here will be the lack of a historical data set to work with for each regime (and in general), as I think 20DMF values have only been available since sometime in 2007 (?).
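
    One possible Python sketch of such a regime classifier, using daily closes of a proxy like SPY in a pandas Series; the 21-day volatility and one-year PercentRank follow the idea above, while the 200-day trend proxy (used here instead of ADX) and the thresholds are assumptions for illustration only:

    ```python
    import numpy as np
    import pandas as pd

    def classify_regime(close: pd.Series) -> str:
        returns = close.pct_change()
        # 21-day realized volatility, percentile-ranked against the past year of readings.
        vol21 = returns.rolling(21).std() * np.sqrt(252)
        vol_rank = vol21.iloc[-252:].rank(pct=True).iloc[-1]
        vol_label = "high volatility" if vol_rank > 0.5 else "low volatility"

        # Crude trend proxy: price relative to its 200-day moving average.
        ma200 = close.rolling(200).mean().iloc[-1]
        drift = close.iloc[-1] / ma200 - 1
        if drift > 0.03:
            trend = "uptrend"
        elif drift < -0.03:
            trend = "downtrend"
        else:
            trend = "sideways or not trending"
        return f"{vol_label} {trend}"
    ```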

    -A nice complementary or potentially confirming indicator to the 20DMF in terms of detecting oversold levels may be the S&P Oscillator. I keep track of the S&P Oscillator for the S&P 500 via MarketTells and have found, at least in my experience, that a reading of -6.5 or below (yes, this is an absolute rather than normalized level...I know) normally coincides with an oversold condition in the 20DMF. The last -6.5 or below reading was on Dec. 19, 2011, when the Oscillator just barely triggered oversold by hitting -6.6 (I believe this was the oversold period that the 20DMF just barely missed seeing as "oversold"). A spreadsheet of historical values can be downloaded from the MarketTells website should you wish to look into this further. Also, the S&P Oscillator can be calculated for other indices (NYSE common-stock-only, Nasdaq Composite, etc....even Russell 2000, providing one has the requisite advance/decline and up/down volume data for that index). MarketTells has it calculated for the S&P 500 and NYSE-common stock-only, I believe. I'm most comfortable using the S&P 500 version, as I find the -6.5 threshold on it particularly useful for detecting oversold conditions.

    -Consider keeping track of POMO operations or using Bob's liquidity indicator, so that the 20DMF and/or Robot is able to get an idea if there is a Fed-supported put underneath the market, and thereby perhaps modify how it operates (it may operate more conservatively on the short side and more aggressively on the long side when liquidity is thought to be more than ample, for example). I know this idea has already perhaps been suggested, along with incorporating the $TICK indicator into the Robot somehow. But, I'm repeating it here nonetheless.

    -Consider incorporating seasonality into the 20DMF model and/or Robot. I know seasonality is not thought to be a strong indicator, but, it has stood the test of time in some cases - like the end-of-month / beginning-of-month window dressing (last 4 trading days of month ending and first 2-3 days of month beginning) along with the Oct-Apr or Nov-Apr seasonally strong period - for example. An oversold condition that occurs in the early part of the window dressing period or right before the window dressing period often turns out to be a good at least short-term buying opportunity, for instance. Meanwhile, the biggest drops in the market tend to happen between May-Oct/Nov, I believe. Selloffs that start during these months should typically be taken more seriously than selloffs that start in the remainder of the year.
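
    A small sketch of how the window-dressing period could be flagged from a calendar of trading sessions; the "last 4 / first 2-3 trading days" boundaries follow the text above (3 is used here), and the rest is illustrative:

    ```python
    import pandas as pd

    def window_dressing_flag(trading_days: pd.DatetimeIndex,
                             last_n: int = 4, first_n: int = 3) -> pd.Series:
        # Group sessions by calendar month and find each session's position in its month.
        df = pd.DataFrame({"month": trading_days.to_period("M")}, index=trading_days)
        grp = df.groupby("month")
        pos_from_start = grp.cumcount()                        # 0 = first session of the month
        sessions_in_month = grp["month"].transform("size")
        pos_from_end = sessions_in_month - 1 - pos_from_start  # 0 = last session of the month
        # Flag the first `first_n` and the last `last_n` sessions of every month.
        return (pos_from_start < first_n) | (pos_from_end < last_n)
    ```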

    -One would think that the top and bottom of the month can happen at any time in the month, close to equally. But, I think Michael Stokes at MarketSci did some research showing that the top or bottom of the month happens in the first 7 trading days of the month about 80% of the time. Perhaps this fact (although it needs to be confirmed) would be useful to keep in mind in programming the 20DMF and/or Robot. Maybe there is some good way to take advantage of it.

    There is lots more that could be said, but, I must stop here due to time constraints. I hope many more will join in sincerely contributing to this thread.

  6. #6
    Quote Originally Posted by asomani View Post
    -Instead of taking -70 as the 20DMF oversold level, normalize the 20DMF historical values. In other words, take all the 20DMF historical values and split them into 100 buckets or percentiles. This can be done in Excel using the PercentRank function. Then, test what percentile of 20DMF values works best as an oversold level (could be the 15th percentile of values arranged in descending order, for example). This way, you're not looking at 20DMF values on an absolute basis, but, instead on a normalized / relative basis - which self-adapts to the market as more data is collected and the computer puts the data into the buckets.
    The lower panel of the 20DMF is normalized as of now between -100 and +100.
    Your suggestion is, however, something that I already use to evaluate the OB/OS levels of the GDX_MF. One issue I have, though, is that during 2008/2009 the OB/OS levels were much more extended than those of 2010/2011. Something to study in relation to the evolution of long-term volatility.


    Quote Originally Posted by asomani View Post


    -Consider employing regime-switching. In other words, categorize the market environment into one of six categories, for example:
    -low volatility uptrend
    -high volatility uptrend
    -low volatility downtrend
    -high volatility downtrend
    -low volatility sideways or not trending
    -high volatility sideways or not trending

    Detecting when the market is in one of the above environments is the hard part, but, this can be thought about. For example, to assess the volatility profile of the market, one could use a PercentRank of 21-day historical volatility values for the past year (based on IWM or SPY, for instance). To assess whether the market is trending, one could use something like the ADX indicator or Trend Strength Index (search online for the latter). And so on...

    Once you're able to detect the regime of the market effectively, determine which settings for the 20DMF and Robot are best in each regime - so as to optimize risk-adjusted returns within each regime. An obvious problem here will be the lack of a historical data set to work with for each regime (and in general), as I think 20DMF values have only been available since sometime in 2007 (?).
    I do have 20DMF data prior to 2007, but the uptick rule for shorts was abolished in July 2007, I believe. Before that date, MF data was skewed to the long side.


    Quote Originally Posted by asomani View Post


    -A nice complementary or potentially confirming indicator to the 20DMF in terms of detecting oversold levels may be the S&P Oscillator. I keep track of the S&P Oscillator for the S&P 500 via MarketTells and have found, at least in my experience, that a reading of -6.5 or below (yes, this is an absolute rather than normalized level...I know) normally coincides with an oversold condition in the 20DMF. The last -6.5 or below reading was on Dec. 19, 2011, when the Oscillator just barely triggered oversold by hitting -6.6 (I believe this was the oversold period that the 20DMF just barely missed seeing as "oversold"). A spreadsheet of historical values can be downloaded from the MarketTells website should you wish to look into this further. Also, the S&P Oscillator can be calculated for other indices (NYSE common-stock-only, Nasdaq Composite, etc....even Russell 2000, providing one has the requisite advance/decline and up/down volume data for that index). MarketTells has it calculated for the S&P 500 and NYSE-common stock-only, I believe. I'm most comfortable using the S&P 500 version, as I find the -6.5 threshold on it particularly useful for detecting oversold conditions.

    -Consider keeping track of POMO operations or using Bob's liquidity indicator, so that the 20DMF and/or Robot is able to get an idea if there is a Fed-supported put underneath the market, and thereby perhaps modify how it operates (it may operate more conservatively on the short side and more aggressively on the long side when liquidity is thought to be more than ample, for example). I know this idea has already perhaps been suggested, along with incorporating the $TICK indicator into the Robot somehow. But, I'm repeating it here nonetheless.

    -Consider incorporating seasonality into the 20DMF model and/or Robot. I know seasonality is not thought to be a strong indicator, but, it has stood the test of time in some cases - like the end-of-month / beginning-of-month window dressing (last 4 trading days of month ending and first 2-3 days of month beginning) along with the Oct-Apr or Nov-Apr seasonally strong period - for example. An oversold condition that occurs in the early part of the window dressing period or right before the window dressing period often turns out to be a good at least short-term buying opportunity, for instance. Meanwhile, the biggest drops in the market tend to happen between May-Oct/Nov, I believe. Selloffs that start during these months should typically be taken more seriously than selloffs that start in the remainder of the year.

    -One would think that the top and bottom of the month can happen at any time in the month, close to equally. But, I think Michael Stokes at MarketSci did some research showing that the top or bottom of the month happens in the first 7 trading days of the month about 80% of the time. Perhaps this fact (although it needs to be confirmed) would be useful to keep in mind in programming the 20DMF and/or Robot. Maybe there is some good way to take advantage of it.

    There is lots more that could be said, but, I must stop here due to time constraints. I hope many more will join in sincerely contributing to this thread.
    Thank you for all these ideas. I am now working on the ETF-linked MF, with their related RT figures, for deployment still within February.

    Once this is done, I'll come back to reworking the 20DMF.


    Pascal

  7. #7
    Following the discussion in this thread, I have the feeling that I would personally like to pursue the idea of selecting a number of stocks from the best sectors that show the best AB/LER combination.

    I would like to backtest that idea.

    I only have the EV data since I joined this forum (October 2011).
    For my backtest to be somewhat reliable, I would like more data than these four months.

    Is it possible to obtain a file with all the stocks of the PascalA_List with “AB Buy signal”, “Extension Tot EV”, “LER” & “Rating”, covering the longest period possible?

    PdP

  8. #8
    Quote Originally Posted by Adriano View Post
    As far as I know, fuzzy logic is well suited to tackling threshold problems, but I don't know how to go any further than this.
    http://en.wikipedia.org/wiki/Fuzzy_logic
    Regarding fuzzy logic ... instead of passing a long/short/neutral signal to the robot, the 20DMF could perhaps pass a parameter ranging from -100 to 100, with 0 being the most neutral. The robot could then use this parameter in its decision-making process. How that parameter is calculated and how the robot would use it is another matter, of course.
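
    As a toy sketch of how a robot could consume such a graded signal (the dead zone and the linear scaling are purely illustrative assumptions, not a proposal for the actual robot):

    ```python
    def target_exposure(signal: float, dead_zone: float = 20.0) -> float:
        """Map a -100..+100 signal to a fraction of capital between -1.0 and +1.0."""
        # Treat weak readings around zero as neutral.
        if abs(signal) <= dead_zone:
            return 0.0
        # Scale the remaining range linearly up to full long or short exposure.
        sign = 1.0 if signal > 0 else -1.0
        return sign * (abs(signal) - dead_zone) / (100.0 - dead_zone)
    ```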

  9. #9
    Quote Originally Posted by Rembert View Post
    Regarding fuzzy logic ... instead of passing a long/short/neutral signal to the robot, the 20DMF could perhaps pass a parameter ranging from -100 to 100, with 0 being the most neutral. The robot could then use this parameter in its decision-making process. How that parameter is calculated and how the robot would use it is another matter, of course.
    This is what I was talking about:
    http://www.lotsofessays.com/viewpaper/1690480.html

    Googling "fuzzy logic and stock market" gives lots of results. Just an idea anyway.

  10. #10
    Quote Originally Posted by Adriano View Post
    This is what I was talking about:
    http://www.lotsofessays.com/viewpaper/1690480.html

    Googling "fuzzy logic and stock market" gives lots of results. Just an idea anyway.
    I have the MATLAB fuzzy logic toolbox and have played with fuzzy math off and on for many years. I'm no expert, but I understand the fundamentals enough to poke around.

    The greatest challenge for me is backtesting a fuzzy system. I find it difficult to create a test harness (e.g., known stimulus as the input with predictable output). Without this, I have little confidence in what is considered "normal" behavior versus what is considered outside the normal distribution. While I think fuzzy logic can have a place, especially the "porosity" factors that we employ here, I've never been able to build a winning system based on fuzzy math alone.

    In discussing this with a math guru who applies fuzzy systems to control systems, the point that emerged is that if we view our trading system as a self-contained entity, we have to have some confidence that the manipulations we do on the data result in a stable system, i.e., one that won't run our equity into the ground (drawdown) on the expectation that we'll achieve higher gains. These constraints are valid, but they steer the system towards standard Euclidean logic and away from the fuzziness that we're intending.

    The correct answer is probably somewhere between the two models, but again, without a robust way of testing, it's hard to make the jump with real money.

    As an aside, GGT handles this situation in a different manner. While I maximize on equity to derive a set of coefficients that describe optimal moving averages and rates of change, I "lop off" the top of the equity mountain and try to maximize the area of the plateau where the outside conditions (market variables) do not dramatically change the optimal solution.

    Think of it this way ... you have two variables, EMA1 and EMA2. For a given stock price series over the past 2 years, there is a unique combination of EMA1 and EMA2 values that maximizes the equity of that system. We could pick those EMA1 and EMA2 values and use them, but if the market moves just a tad against us, we could see our equity drop off FAST. This situation would exist if there were a gradual slope in the equity curve as EMA1 was held constant and EMA2 was varied to produce the maximum: if EMA2 goes too far, we could "drop off the equity cliff". This sensitivity is very dangerous to our portfolio, and it is why most systems do not work well with crossing MAs.

    Instead, ask yourself how much of the mountain top you can "lop off" flat so that a marble rolling around on this new plateau does not "fall off". Of course, you could "lop off" everything until the marble is on ground level with everything around it -- it will never "fall off" the plateau, but then again, you're not making money. Or you could "lop off" just enough of the mountain to keep you on a higher plateau than any surrounding peak -- and now you're more stable to market conditions if the "optimal" EMA1 and EMA2 are adjusted to the geometric center of this plateau.
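
    A condensed sketch of that "plateau" search; the equity() backtest function, the parameter grids and the 90% tolerance are hypothetical stand-ins, not GGT's actual method:

    ```python
    import numpy as np

    def plateau_center(equity, ema1_grid, ema2_grid, tolerance=0.90):
        # Build the equity surface over the (EMA1, EMA2) parameter grid.
        surface = np.array([[equity(e1, e2) for e2 in ema2_grid] for e1 in ema1_grid])
        # "Lop off" the peak: keep every cell within `tolerance` of the maximum equity.
        plateau = surface >= tolerance * surface.max()
        # Centre of the surviving cells (assuming the plateau is roughly contiguous),
        # rather than the single sharp peak.
        rows, cols = np.where(plateau)
        return ema1_grid[int(round(rows.mean()))], ema2_grid[int(round(cols.mean()))]
    ```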

    This is more or less what GGT attempts to do, and perhaps there is a lesson here for this model. Not all stocks/ETFs in the GGT system have a robust solution -- this is what the metrics on my sheet tell me -- but many of them behave very well.

    The GGT coefficients are updated 24/7: every week about 15%-20% of the stock database receives updated numbers (sometimes they change, sometimes they do not), and about 25% of the ETFs get new values. This keeps the backtest data window sliding forward every week on a new basket of stocks, so that the optimization does not drift too far from reality.

    Food for thought ...

    Regards,

    pgd
