I'm excited about the new database. I can't help it. Try as I might to confine my thoughts to the initial study I described on Friday, other questions keep roaming through my mind.
For example, there is the question of the number one stock. The very existence of the Foolish Four is tied to the fact that The Motley Fool's original Beating the Dow discussion board community on AOL discovered that Michael O'Higgins' Beating the Dow strategy could be "improved" by simply eliminating the first stock on his BTD list. It didn't matter whether you substituted the sixth stock for the first or just dropped it; either way, the returns were better than those of the comparable four- or five-stock strategy.
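For the curious, the mechanics described above can be sketched in a few lines. Everything here is illustrative -- the yields and prices are invented, and a real run would start from the 30 Dow components:

```python
# Sketch of the "rank by yield / rank by price" (BTD) selection, plus the
# Foolish Four twist of dropping the number one stock. Data is invented.

def btd_list(stocks, n=10):
    """Keep the n highest-yielding stocks, then order them by price,
    lowest first, to produce the BTD list."""
    high_yield = sorted(stocks, key=lambda s: s["yield"], reverse=True)[:n]
    return sorted(high_yield, key=lambda s: s["price"])

def foolish_four(stocks):
    """Skip the number one (lowest-priced) stock and hold the next four."""
    return btd_list(stocks)[1:5]
```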
There was also the matter of the second stock, the one O'Higgins called the PPP. (PPP stands for Penultimate Profit Prospect. Yeah. Well, I suppose people who name stock strategies things like "The Foolish Four" shouldn't laugh at "PPP.") In Beating the Dow, O'Higgins proposed a single-stock strategy for risk-loving investors based on that particular stock's unusually high average return. In fact, our original Foolish Four was just a modified Beating the Dow strategy with the first stock removed and the second stock doubled.
Both of those things have given the critics fits. There's no doubt about it, they "smell" like data mining. O'Higgins explained the phenomenon of the first stock as "too much of a good thing." High yield and low price may be attractive to investors, but when a stock's price gets beaten down too far there may be a good reason for it. Rather than just being out of favor due to a temporary situation, the company may very well be in serious trouble. The outperformance of the second stock was explained as confirmation that low price and high yield really are important factors. Neither of those arguments is unreasonable, but they aren't very conclusive either.
When we developed our own database to check the O'Higgins data, we found that the second stock did perform impressively and the first did perform dismally. But when you are dealing with random data, there will always be one result that is highest and one that is lowest, unless they are all the same, of course. So it is quite possible that even the extreme differences between these stocks' average returns and the average return of the other high-yielding stocks were simply random variation. If so, there is no reason to expect that the second-lowest-priced stock would continue to outperform the rest of the BTD stocks.
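That "one will always be highest, one lowest" point is easy to demonstrate: give several identical "stocks" purely random returns, and one will still look like a star and another like a dog. A toy illustration:

```python
import random

random.seed(7)
# Ten "stocks", each with ten years of random annual returns drawn from
# the same distribution -- by construction, none is genuinely better.
returns = {f"stock{i}": [random.gauss(0.10, 0.20) for _ in range(10)]
           for i in range(10)}
averages = {name: sum(r) / len(r) for name, r in returns.items()}

best = max(averages, key=averages.get)
worst = min(averages, key=averages.get)
# The spread between "best" and "worst" can look meaningful even though
# it is pure chance -- exactly the data-mining trap described above.
```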
In 1997, we dropped the idea of doubling the second stock because of that very concern, but it has been harder to dismiss the underperformance of the first stock. When we completed the monthly database that gave us 11 more number one stocks each year (minus the stocks that stayed in that position for more than one month), the association was considerably weaker. When using the RP formula instead of the "rank by yield/rank by price" (BTD) method, the return for the first stock was right in the ballpark with the rest. Yet we still drop it.
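The RP formula isn't written out in this column; it is commonly given as the dividend yield squared divided by the share price. Under that assumption, the ranking step looks like this (figures invented):

```python
def rp(yield_pct, price):
    """RP ratio as commonly quoted: dividend yield squared over price.
    The formula is an assumption here -- it isn't defined in the text."""
    return yield_pct ** 2 / price

# Invented quotes: (ticker, yield %, price)
quotes = [("AAA", 5.0, 20.0), ("BBB", 4.0, 50.0), ("CCC", 6.0, 15.0)]
ranked = sorted(quotes, key=lambda q: rp(q[1], q[2]), reverse=True)
# Highest RP first: CCC (6.0**2 / 15.0 = 2.4) tops the list.
```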
You are wondering why.
We still drop it because the number one stock still shows extreme volatility. In other words, its average return is about the same as the other high RP stocks', but that average is made up of more really great returns and more really terrible ones. Avoiding volatility when you can has always struck me as a good idea, so we continue to drop it, especially since there is no apparent downside to substituting the fifth stock for the first. Still, looking at how the highest RP stocks compare to the moderately high ones should be very interesting.
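The volatility point -- same average, very different ride -- can be made concrete with two invented return series:

```python
from statistics import mean, stdev

# Invented annual returns: the "number one" stock averages about the same
# as the rest but gets there through wild swings.
number_one = [0.80, -0.50, 0.60, -0.40, 0.10]
others     = [0.12,  0.10, 0.14,  0.11, 0.13]

same_average = abs(mean(number_one) - mean(others)) < 0.01  # True
much_wilder  = stdev(number_one) > 5 * stdev(others)        # True
```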
One of the things that makes me nervous about this new database is that we are losing the "protection" of the Dow. While being listed as a Dow stock has never guaranteed that a company is financially sound, the keepers of the Dow don't add companies to the list if they are in trouble, and if a company stays in trouble for a number of years, it is usually removed. That slight bit of protection won't be present in the new database, where all we have to go on is the size of the company.
That's another reason I want to look carefully at the highest RP rankings. If the "too much of a good thing" phenomenon is real, then we may need to remove that top stock (or even several of the top stocks). That's going to give the critics problems -- hey, it gives me problems -- but there has to be some way to screen out companies that are in grave financial trouble, and it has to be a purely mechanical way that can be backtested and applied consistently given the data available. I'm open for suggestions.
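In that spirit, here is one purely mechanical possibility -- nothing more than a hypothetical sketch, with untested parameters -- that simply skips the top-ranked stock or two before selecting:

```python
def screened_picks(stocks, drop_top=1, keep=4):
    """Hypothetical screen: sort by a precomputed RP value, drop the
    top-ranked stock(s) as possible "too much of a good thing" cases,
    and keep the next few. drop_top and keep are illustrative, not
    backtested, choices."""
    ranked = sorted(stocks, key=lambda s: s["rp"], reverse=True)
    return ranked[drop_top:drop_top + keep]
```

The appeal of a rule like this is that it can be backtested and applied consistently, which is exactly the constraint named above.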
There are a lot of other questions that can be explored. Some of the things I want to tackle are:
That last one isn't as crazy as it sounds. One of the problems we have seen demonstrated recently is that of a stock continuing to drop in price for quite a while after it makes the list. Of course, there is no guarantee that a stock will turn around the moment it makes the list. We know that it often takes a year or two for these companies to get their act back together again. We also know that the historical returns for these stocks tend to be almost as high during the second year as during the first -- not quite high enough to justify a two-year strategy, but close.
Those facts have led a number of people to speculate about whether it would be better to buy stocks only after they have "been on the list" for a while. Maybe it would. It's an interesting question.
So many questions, so little time!
Fool on and prosper!