Racket wrote:But I still think you're using the wrong tools for the job here.
I know you do. In time, as you learn more, you may come around. Or not; your choice, no skin off my back.
I'm going to go on about probability again. Probability is a measure of uncertainty. When probability is 1, we are 100% certain the event will occur (under whatever conditions we are assuming). When the probability is 0, we are 100% certain the event will not occur.
When the probability is 50%, we are at our most uncertain: the event is as likely to occur as not. We have no basis to favour either outcome (will occur / will not occur). The best we can do is guess, with an equal chance of being right or wrong.
With "predictions," "projections" or "forecasts" of asset / market values in the future, there is no objective way to establish probabilities that all experts (or even a good small group of experts) can agree on. Probability is in the eye of the beholder (investor). Igy assesses probabilities differently than I do, and I assess them differently than agip, SP, mas or Racket do, and so on....
None of us is either objectively "wrong" or "right" in assessing probabilities of future market behaviour. The end result may align with one of our expectations, but that has no bearing on the truth of our probability estimate.
Suppose five of us regulars each assess the probability of ten coin flips all landing heads, and we each assign a different value; say I pick 100%, and then ten heads are in fact flipped. The fact that this rare outcome occurred in no way validates my initial estimate, even though it aligned with the outcome. With flipping coins and casino games, we can make objective estimates of probability (here, (1/2)^10, or about 0.1%) that are expected to play out, on average, over the long run.
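To make the coin-flip case concrete: the objective probability of ten fair flips all landing heads is (1/2)^10 = 1/1024, roughly 0.1%, and a quick simulation shows the long-run frequency settling toward that value (this is just an illustrative sketch, not anyone's forecasting method):

```python
import random

# Exact probability of ten fair coin flips all landing heads
p_exact = 0.5 ** 10  # = 1/1024, about 0.0977%

# Monte Carlo check: repeat the ten-flip trial many times and
# count how often all ten come up heads
random.seed(42)
trials = 200_000
hits = sum(
    all(random.random() < 0.5 for _ in range(10))
    for _ in range(trials)
)
p_estimate = hits / trials

print(f"exact:     {p_exact:.5%}")
print(f"simulated: {p_estimate:.5%}")
```

This is exactly the sense in which casino-style probabilities are objective: everyone running this experiment converges on the same number, which has no analogue for one-off market forecasts.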
When trying to "predict" or assign probabilities to future market behaviour, we don't have access to the same basis that we do for flipping coins. Various estimates among our group may seem relatively dumber or smarter to the others, but there is no objective basis to measure their validity. And again, the absence of an outcome that aligns with expectations doesn't invalidate the probabilistic estimate.
The only way to demonstrate predictive power is repetition: getting the "right answer," or coming acceptably close, consistently over a long series of trials.
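One standard way to grade probabilistic forecasts over a series of trials (my addition here, not something from the thread) is the Brier score: the mean squared gap between each stated probability and what actually happened. Lower is better, and always guessing 50% scores 0.25, so beating that consistently is evidence of real predictive power. The forecasts and outcomes below are made-up illustrative values:

```python
# Brier score: mean squared error between stated probabilities and
# binary outcomes (1 = event occurred, 0 = it did not).
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.8, 0.7, 0.6, 0.2]  # hypothetical stated probabilities
outcomes  = [1,   1,   0,   1,   0]    # hypothetical realized outcomes

score = brier_score(forecasts, outcomes)
print(score)  # 0.148, better than the 0.25 earned by constant 50% guessing
```

Note that a single trial tells you almost nothing; the score only becomes meaningful over many forecasts, which is the point about repetition above.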
My "projections" from two years ago seem to have been broadly borne out by experience, but it was only by setting wide confidence bounds that I managed that.
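The trade-off behind those wide bounds is mechanical: the wider the interval around a point forecast, the more often the realized value lands inside it, at the cost of the forecast saying less. A toy simulation (standard-normal outcomes around the forecast, purely illustrative) makes the pattern visible:

```python
import random

# Simulate realized outcomes scattered around a point forecast
# (here centred at 0 with standard deviation 1, purely illustrative).
random.seed(1)
realized = [random.gauss(0, 1) for _ in range(10_000)]

# Coverage: fraction of outcomes falling within bounds of each half-width
coverage = {}
for half_width in (0.5, 1.0, 2.0):
    inside = sum(-half_width <= x <= half_width for x in realized)
    coverage[half_width] = inside / len(realized)
    print(f"bounds of +/-{half_width}: covered {coverage[half_width]:.1%} of outcomes")
```

So "validated by experience" with wide bounds is a weaker claim than the same words with narrow bounds, which is why the bound width has to be stated alongside the track record.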