# Criticism of the Standard Deviation

Standard deviation is widely used to measure risk in assets. As I understand it, a higher standard deviation is considered a riskier bet, with greater upside and downside potential.

However, suppose Asset A's returns are 50, 50, 50, 400 and Asset B's returns are 50, 50, 50, 50. The latter will have a much lower standard deviation and risk (in fact zero), while the former (Asset A) will have a much higher standard deviation and 'be full of risk'. But we can clearly see that Asset A's downside potential is no worse than Asset B's, and it is a safer, or less risky, investment than Asset B.
With so few data points we can intuitively see which bucket carries more risk, but what happens when such a scenario plays out over hundreds or thousands of data points?
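To make the example concrete, here is a quick check using Python's built-in statistics module, with the asset returns from above:

```python
from statistics import mean, stdev

asset_a = [50, 50, 50, 400]  # one large upside surprise
asset_b = [50, 50, 50, 50]   # perfectly steady

# The sample standard deviation treats the upside outlier as pure "risk"
print(mean(asset_a), stdev(asset_a))  # 137.5 175.0
print(mean(asset_b), stdev(asset_b))  # 50 0.0
```

Asset A never pays less than Asset B, yet its standard deviation is 175 while B's is zero.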

In other words, the standard deviation takes each point's difference from the mean, whether the point lies above or below it, then squares and adds those differences to arrive at a measure of risk. However, a point below the mean may imply less risk, just as a point above the mean may imply more risk. Adding these points together after subtracting them from the mean and squaring introduces a great distortion; the measured volatility jumps through the roof. Points that should reduce risk (by being very low, in the supposed instance) instead exacerbate it, because their differences from the mean are accentuated and then squared.

Since the differences between the data and the mean are squared, negative differences produce positive results (they become additive with the squared differences of the values above the mean, which are also squared and then added). And if points deviate further from the mean on the downside, squaring amplifies and distorts the result even more, since those larger differences grow disproportionately when squared.

Think of a set of betas ranging from low to high. In this example lower betas represent lower risk and higher betas represent more risk. When we calculate the SD of the betas, the points below the mean (the safer ones) actually cause the standard deviation to increase, and the lower the beta, the more pronounced the effect: the 'safest' points look the 'riskiest' once they are subtracted from the mean and squared. Thus lower betas increase the measured risk, and the further they fall below the mean, the more risk they appear to add.
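Under the post's premise that a lower beta is safer, each point's contribution to the variance can be split into below-mean and above-mean parts. A minimal sketch with made-up betas (the values are illustrative assumptions, not market data):

```python
from statistics import mean

betas = [0.2, 0.6, 1.0, 1.4, 2.6]  # hypothetical, low (safe) to high (risky)
m = mean(betas)  # 1.16

# Squared deviation contributed by each beta
contrib = {b: (b - m) ** 2 for b in betas}

# Among the below-mean (supposedly safest) points, the lowest beta
# makes the largest contribution to the variance
below = {b: c for b, c in contrib.items() if b < m}
print(max(below, key=below.get))  # 0.2
```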

If the beta paradigm is incorrect, then, by that assumption, the SD is incorrect. If the paradigm is correct, the SD is still incorrect.

In other words, if lower numbers are desirable in a data set, the SD does not give correct results. It inflates the measured variation precisely because those numbers are lower, and so it does not assess the risk properly.

I realize it is accepted that consistency cannot be measured by the standard deviation alone. However, I have devised a strategy: set scores above the average (among equals) equal to the average. In this way consistency may be measured (depending on the situation).

The point I am making -- and where SD may be incorrect in assessing risk or otherwise -- is this: when lower numbers are desirable in a data set and we use the standard deviation, it accentuates the results, because lower numbers are subtracted from the mean and then squared. Thus, the lower the number, the more it causes the standard deviation to spike. The resulting standard deviation is larger because of the lower number, and that larger figure then makes the investment opportunity (or whatever is being measured) look more undesirable. This is the crux of my criticism. Some of the methodology I used to calculate the standard deviation of betas may be incorrect (as written earlier), but it still supports the point that when lower numbers are desirable, the standard deviation makes the whole situation appear more undesirable.

The SD is also more sensitive to outliers when lower numbers in the data set are desirable: a high outlier increases the mean, so the lower points have a larger difference (X - M), which increases the SD.


#### JeffM1

This simply does not make any sense. Risk of what? Anyone who says that an increase in standard deviation results in an increase in risk is speaking very loosely.

Differences in certain kinds of risk are positively correlated with differences in the standard deviation of some variable, under the assumption of certain economic behavior and assuming certain other variables are equal. Talking very generally about risk and standard deviation, without specifying what risk, what variable is considered to vary, and what other variables are held constant, is so vague as to be close to meaningless.

Let's take your example. First note that the two investments have very different cumulative returns, so 'other things being equal' does not directly apply. To adjust for that, we consider the standard deviation of the rate of return, a vital qualification that you ignore.

Assuming that your returns are annual returns that are certain (no default risk), the primary market risk is price risk (also called rate risk).

Suppose the returns are due in 1, 2, 3, and 4 years respectively. And suppose the market's current time preference for certain money is 10% a year.

Asset A has a mean return of 137.5 per year with a relative standard deviation of about 110%. Asset B has a mean return of 50 per year with a relative standard deviation of 0%.

Asset A has more price risk than Asset B. Why?

The current market value of asset A is approximately 397.5480.

The current market value of asset B is approximately 158.4933.

Now suppose that interest rates jump to 11% the day after purchase.

Then the market value of asset A will drop to approximately 385.6778, a loss of 11.8702. The market value of asset B will drop to approximately 155.1223, a loss of 3.3710. We lost a lot more on asset A.

But wait a minute. We originally invested more in asset A than in asset B. Yes. (That was a defect in your example.) But let's look at the relative losses. Asset A has a loss relative to initial investment of about 2.99%, whereas asset B has a loss relative to initial investment of about 2.13%.

So we have shown the correlation between differences in the relative standard deviation of returns over time and rate risk.
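The price calculations above can be checked with a few lines of Python; `pv` here is a hypothetical helper that discounts the cash flows of the opening example (50, 50, 50, 400 and 50, 50, 50, 50, due at the end of years 1 through 4) at a given annual rate:

```python
def pv(cashflows, rate):
    """Present value of cash flows due at the end of years 1, 2, 3, ..."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows, start=1))

asset_a = [50, 50, 50, 400]
asset_b = [50, 50, 50, 50]

pa10, pb10 = pv(asset_a, 0.10), pv(asset_b, 0.10)  # prices at 10%
pa11, pb11 = pv(asset_a, 0.11), pv(asset_b, 0.11)  # prices after rates jump to 11%

print(round(pa10, 4), round(pb10, 4))        # current market values
print(round((pa10 - pa11) / pa10 * 100, 2))  # relative loss on A, in %
print(round((pb10 - pb11) / pb10 * 100, 2))  # relative loss on B, in %
```

The asset with the larger relative standard deviation of returns shows the larger relative loss when the discount rate rises.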

You will need to get much more exact in your understanding of what terms mean.

EDIT: I do not think you understand what beta means. The minimal risk position is a beta of 1. The further away from 1 is beta, whether higher or lower, the greater the risk that beta measures.

EDIT 2: Notice that the relationship between the standard deviation of returns over time and price is not linear. Zero standard deviation does not result in zero risk.


The methodology is incorrect. The crux of the argument is in the last part.

#### JeffM1

The methodology is incorrect. The crux of the argument is in the last part.
Whether or not you think the methodology is correct (and I do not believe that it is incorrect) is completely irrelevant to how the market behaves.

To be brutally honest, you do not seem to have the slightest idea why people say that risk is positively correlated with variance. It is very hard to persuade people that you are correct in disputing some 60 or 70 years of academic studies and practical experience when you do not seem to understand what those studies and experience hold.

Market value is negatively correlated with variance holding other things equal.

The crux of my criticism is in the last part.

#### JeffM1

The crux of my criticism is in the last part.
Repeating yourself does not make your argument any more intelligible. Lower returns do not, ceteris paribus, make an investment more valuable.

Your example was very poorly selected. It involved apparently certain future returns, not probabilities of future returns, which is the more usual situation when considering investments. The two investments did not have equal cumulative cash flows. No mention was made of value, which means that the example as presented is irrelevant to economics. Notice that when I discussed your example, I did so in terms of prices.

Risk in economics means that we are dealing with a probability distribution. It is distinguished from both certainty and uncertainty because the latter term indicates that our lack of knowledge is so great that no reasonable estimate of probabilities is possible. The risk of an action is defined as the variance among the quantifiable results arising from the action, which means that relative risk must be positively correlated with relative standard deviation. You can of course create your own definitions, but then you are simply not talking about the same thing as anyone else.

Consider investment A. In one year, it will return 27, 30, or 33, each with probability 1/3. It has a future expected value of 30 with variance of 6.

Consider investment B. In one year, it will return 0, 30, or 60, each with a probability of 1/3. It has a future expected value of 30 with variance of 600.
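Both figures follow directly from the definitions of expected value and (population) variance over equally likely outcomes:

```python
from statistics import mean, pvariance

inv_a = [27, 30, 33]  # equally likely outcomes for Investment A
inv_b = [0, 30, 60]   # equally likely outcomes for Investment B

# Same expected value, very different variance
print(mean(inv_a), pvariance(inv_a))  # 30 6
print(mean(inv_b), pvariance(inv_b))  # 30 600
```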

Will the current price of Investment A be equal to, higher than, or lower than the current price of Investment B?

An economic assumption about financial markets is that they are almost invariably risk averse. If that assumption is true and financial markets are behaving normally, then Investment A will have a higher current price than Investment B.

In other words, risk is a defined term. There is a rebuttable assumption that, in actual fact, financial markets are risk averse. Arguing that an assumption about empirical reality is false is perfectly valid if you can present empirical evidence. But you cannot dispute assumptions about empirical facts with vague assertions or using words in different senses from their generally accepted meanings.

By the way, there is plenty of empirical evidence that many individuals are not invariably risk averse. To show that such individuals normally dominate financial markets is a completely different evidentiary task.

To sum up, risk in economics is a technical term, not a theory or a methodology. What you seem to be saying is that the assumption that financial markets place a higher relative value on investments with less relative risk is empirically wrong. Before you can look for empirical evidence for that assertion, I suggest that you try to specify a hypothetical investment the value of which is increased by lower returns. That is, after all, what you are explicitly arguing: the computation of standard deviation does not recognize that lower returns are more valuable than higher returns.

That proposition is close to being idiotic: an expectation of lower returns does not enhance the market value of an investment. The economic proposition that you may have some very remote possibility of disproving is this:

Of two investments of equal expected value, the one with greater risk (as traditionally defined) will seldom if ever have a higher market value than the one with lesser risk and will usually have a lower market value.

Good luck.


I told you: the methodology (mine) is incorrect. But the crux of the argument is in the last part.

#### JeffM1

I give up. You simply repeat yourself over and over.

It could be a misconception on my part. You may be correct as far as finance goes. But if you can explain this, it should clarify things for me.

Here is the crux of the criticism of the SD:

The point I am making -- and where SD may be incorrect in assessing risk or otherwise -- is this: when lower numbers are desirable in a data set and we use the standard deviation, it accentuates the results, because lower numbers are subtracted from the mean and then squared. Thus, the lower the number, the more it causes the standard deviation to spike. The resulting standard deviation is larger because of the lower number, and that larger figure then makes the investment opportunity (or whatever is being measured) look more undesirable. This is the crux of my criticism. Some of the methodology I used to calculate the standard deviation of betas may be incorrect (as written earlier), but it still supports the point that when lower numbers are desirable, the standard deviation makes the whole situation appear more undesirable.

The SD is also more sensitive to outliers when lower numbers in the data set are desirable: a high outlier increases the mean, so the lower points have a larger difference (X - M), which increases the SD.

The same is the case, in mirror image, when higher numbers in the data set are desirable.
