Pascal, thanks for the explanation.
My primary concern is the rule-derivation methodology: in other words, how an anecdotal observation is validated as statistically significant before being included in the robot's rule set.
From a parametrization standpoint, whatever ends up being selected has to: (a) be based on some a priori rationale, (b) fit coherently into the model and (c) have minimal dimensionality with respect to the available data on which it is intended to be validated. Clustering/segmentation of the LT/ST signals (e.g. weak/strong) seems at first glance to be a reasonable choice for satisfying (a)-(c) above; a sketch of what I mean follows.
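To make that concrete, here is a minimal sketch of what such a segmentation could look like. The signal name (lt_signal), the synthetic data, and the median-split rule are all my own illustrative assumptions, not Pascal's actual methodology; the point is just that a single boundary gives two clusters (weak/strong) with one fitted parameter, which keeps the dimensionality low per (c).

```python
# Illustrative only: split a signal history into "weak"/"strong" clusters
# via a median cut. One boundary -> two clusters -> one parameter to fit.
import numpy as np

rng = np.random.default_rng(0)
lt_signal = rng.normal(loc=0.0, scale=1.0, size=500)  # stand-in for the LT signal history

threshold = np.median(lt_signal)  # a priori rationale: split at the center of the distribution
labels = np.where(lt_signal >= threshold, "strong", "weak")

print(f"boundary={threshold:.3f}, "
      f"strong={np.sum(labels == 'strong')}, weak={np.sum(labels == 'weak')}")
```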
Repeat analysis of the signals' distributions may be necessary periodically to re-align the model with the changing dynamics of the market (POMO or otherwise). Another option is to use "range-adaptive" cluster boundaries that are automatically recalculated from a recent history (e.g. the past 12 or 24 months) to compensate for the inevitable market drift. The latter tends to be more versatile than relying on hard-coded thresholds; a sketch is below.
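A rough sketch of the range-adaptive variant, again under my own assumptions: the cutoff is re-estimated from a trailing window (here 252 business days, roughly 12 months; 504 would approximate 24) instead of being hard-coded, so a slow drift in the signal's level moves the boundary with it.

```python
# Illustrative only: re-estimate the weak/strong cutoff from a rolling
# lookback window so the boundary tracks market drift.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2015-01-01", periods=1000, freq="B")
# Synthetic drifting signal: a fixed threshold would slowly mislabel it.
signal = pd.Series(rng.normal(size=1000).cumsum() * 0.01 + rng.normal(size=1000),
                   index=dates)

window = 252  # ~12 months of business days; use 504 for a 24-month lookback
adaptive_cut = signal.rolling(window, min_periods=window).median()

labels = pd.Series(np.where(signal >= adaptive_cut, "strong", "weak"), index=dates)
labels[adaptive_cut.isna()] = None  # no label until the lookback window has filled

print(labels.tail())
```

The design choice is the same trade-off mentioned above: the rolling boundary buys robustness to regime drift at the cost of one extra parameter (the window length), which itself has to be justified against criteria (a)-(c).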
Trader D