Standard deviation is widely used to measure risk in assets. A higher standard deviation is taken to mark a riskier bet, with greater upside and downside potential.

However, if Asset A's returns are 50, 50, 50, 400 and Asset B's returns are 50, 50, 50, 50, the latter will have a much lower standard deviation and risk (in fact zero), while Asset A will have a much higher standard deviation and so appear 'full of risk'. But we can clearly see that Asset A's downside is no worse than Asset B's (its only deviation is a large upside), so it is at least as safe an investment as Asset B.
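As a quick check, here is a sketch using Python's standard library on the example returns above (I use the population form of the standard deviation, `pstdev`; the sample form `stdev` shows the same pattern):

```python
import statistics

# The returns from the example above
asset_a = [50, 50, 50, 400]
asset_b = [50, 50, 50, 50]

sd_a = statistics.pstdev(asset_a)
sd_b = statistics.pstdev(asset_b)

print(f"Asset A: mean={statistics.mean(asset_a)}, SD={sd_a:.2f}")  # SD ~ 151.55
print(f"Asset B: mean={statistics.mean(asset_b)}, SD={sd_b:.2f}")  # SD = 0.00
```

Asset A's large SD comes entirely from its one upside point, yet the measure reads it as risk.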

With only a few data points we can intuitively see which bucket has more risk, but what happens when such a scenario plays out over hundreds or thousands of data points?

In other words, the standard deviation takes each point's difference from the mean, whether above or below it, squares those differences, and adds them up to calculate risk. But a point below the mean may imply less risk, just as a point above the mean may imply more risk; treating both the same way after subtracting from the mean and squaring produces a great distortion, and the measured volatility jumps through the roof. Points that should reduce risk (by being very low, in this framing) instead exacerbate it, because their distance from the mean is accentuated and then squared.

Since the differences between the data points and the mean are squared, negative differences become positive, so they add to the squared differences of the points above the mean rather than offsetting them. And the further a point lies below the mean, the more the squaring amplifies and distorts the result, because larger differences grow disproportionately when squared.
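A small sketch (with made-up numbers) shows the symmetry of the squaring: a point far below the mean contributes exactly as much to the variance as a point equally far above it:

```python
import statistics

data = [10, 40, 50, 60, 90]   # hypothetical scores; suppose lower means 'safer'
mean = statistics.mean(data)  # 50

for x in data:
    diff = x - mean
    print(f"x={x:3d}  diff={diff:+.0f}  squared={diff ** 2:.0f}")
# The point furthest below the mean (10) contributes a squared term of 1600,
# exactly as much as the point equally far above it (90).
```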

Think of a set of betas ranging from low to high, where a lower beta represents lower risk and a higher beta represents more risk. When we calculate the SD of these betas, the points below the mean (the safer ones) actually cause the standard deviation to increase, and the lower the beta, the more pronounced the effect: the further a beta sits below the mean, the more 'risk' it appears to add once it is subtracted from the mean and squared. Thus the lower betas increase the measured risk, and the lower they are relative to the mean, the greater that increase.
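A sketch with hypothetical beta values makes this concrete; the lowest (safest) beta contributes the largest squared term:

```python
import statistics

betas = [0.2, 0.5, 1.0, 1.3, 1.5]      # hypothetical; lower beta = lower market risk
mean_beta = statistics.mean(betas)     # 0.9

# Contribution of each beta to the variance
for b in betas:
    print(f"beta={b}  squared deviation={(b - mean_beta) ** 2:.3f}")
# The safest beta (0.2) contributes the most: (0.2 - 0.9)^2 = 0.49,
# larger than any other term in the sum.
```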

If the beta paradigm is incorrect, then by assumption the SD is incorrect; and if the paradigm is correct, the SD is still incorrect.

In other words, if lower numbers are desirable in a data set, the SD won't provide correct results: it inflates the measured variation precisely because the numbers are lower, and so fails to assess the risk properly.

I realize it is accepted that consistency cannot be measured by the standard deviation alone. However, I have devised a strategy: replace the scores above the average (among equals) with the average itself. In this way consistency may be measured (depending upon the situation).
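A minimal sketch of that strategy in Python, assuming plain numeric scores (the function name and the sample scores are my own, for illustration):

```python
import statistics

def consistency_sd(scores):
    """The strategy from the post: replace scores above the average
    with the average, then take the standard deviation, so only
    below-average dispersion is counted."""
    avg = statistics.mean(scores)
    clipped = [min(s, avg) for s in scores]
    return statistics.pstdev(clipped)

scores = [80, 85, 90, 95, 100]  # hypothetical scores 'among equals'
print("plain SD:  ", round(statistics.pstdev(scores), 2))  # ~7.07
print("clipped SD:", round(consistency_sd(scores), 2))     # 4.0
```

This is close in spirit to a downside (semi-)deviation: after the clipping, only dispersion below the average survives to be squared.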

The point I am making, and where the SD may be incorrect in assessing risk or otherwise, is this: when lower numbers are desirable in a data set and we use the standard deviation, it accentuates the results, because the lower numbers are subtracted from the mean and then squared. The lower the number, the more it causes the standard deviation to spike; the resulting standard deviation is larger because of the lower number, and those lower numbers then make the investment opportunity (or whatever is being measured) look more undesirable. This is the crux of my criticism. Some of the methodology used to calculate the standard deviation of betas may be imperfect (as written earlier), but it still validates the point: when lower numbers are desirable, the standard deviation makes the whole situation look more undesirable.

The measure is also more sensitive to outliers when lower values are desirable. A high outlier raises the mean, so the lower points end up with a larger difference (X - M), which in turn increases the SD.
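This sensitivity is easy to see with the earlier returns (a sketch): adding one high outlier shifts the mean, which enlarges (X - M) for every unchanged point:

```python
import statistics

base = [50, 50, 50, 50]
with_outlier = [50, 50, 50, 400]  # one high outlier added

mean_base = statistics.mean(base)         # 50
mean_out = statistics.mean(with_outlier)  # 137.5

# The outlier raises the mean, so every unchanged 50 now sits
# (X - M) = 50 - 137.5 = -87.5 away from it, and the SD inflates.
print(mean_base, 50 - mean_base, statistics.pstdev(base))
print(mean_out, 50 - mean_out, statistics.pstdev(with_outlier))
```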
