The post Risk-based efficient frontier workbook updated appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

Please note: If you are making investment decisions, consider what you paid for this workbook and treat it as a toy. Its results do not constitute investment advice.


The post Visualize Portfolio Risk appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

**Example: Unconstrained Maximum Sharpe Ratio portfolio**

Notice the Contributions to Risk are proportionate to the Contributions to Return

**Example: Unconstrained Minimum Variance portfolio**

Notice the Weights and Contributions to Risk are Equal

**Example: Risk Parity portfolio**

Contributions to Risk are equal

**Assumptions**

Asset Class | Volatility | Expected Return
---|---|---
US Stocks | 20% | 8%
Int'l Stocks | 25% | 8%
Bonds | 5% | 2%
REITs | 25% | 7%
Commodities | 25% | 5%

Correlation matrix

 | US Stocks | Int'l Stocks | Bonds | REITs | Commodities
---|---|---|---|---|---
US Stocks | 1.00 | 0.95 | 0.15 | 0.85 | 0.60
Int'l Stocks | 0.95 | 1.00 | 0.30 | 0.80 | 0.70
Bonds | 0.15 | 0.30 | 1.00 | 0.25 | 0.05
REITs | 0.85 | 0.80 | 0.25 | 1.00 | 0.45
Commodities | 0.60 | 0.70 | 0.05 | 0.45 | 1.00

**Definitions**

Weight: Asset Value divided by the greater of 1 and the sum of all Asset Values

Value_{i} / max(1, Σ_{j} Value_{j})

Risk Contribution: Product of Asset’s Weight and its covariance with the Portfolio, scaled by the Portfolio’s variance

weight_{i} ∗ σ_{i,Portfolio} / σ²_{Portfolio}

Return Contribution: Product of Asset’s Weight and its Expected Return

weight_{i} ∗ μ_{i}

Sharpe Ratio: Ratio of Expected Return to Volatility

μ_{i} / σ_{i}

Note this is simply a “Return/Risk ratio,” which is not exactly the true definition of a Sharpe Ratio. However, in investment-world vernacular the terms are interchangeable, and for the purpose of this application their meanings are similar enough.

If you would like to know the precise definition of “Sharpe Ratio,” consult the source.
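To make the definitions above concrete, here is a minimal Python/NumPy sketch (my own, not part of the app itself) that computes the weight, risk-contribution, and return-contribution quantities from the assumption tables above:

```python
import numpy as np

# Assumptions from the tables above (US, Int'l, Bonds, REITs, Commodities)
vols = np.array([0.20, 0.25, 0.05, 0.25, 0.25])
rets = np.array([0.08, 0.08, 0.02, 0.07, 0.05])
corr = np.array([
    [1.00, 0.95, 0.15, 0.85, 0.60],
    [0.95, 1.00, 0.30, 0.80, 0.70],
    [0.15, 0.30, 1.00, 0.25, 0.05],
    [0.85, 0.80, 0.25, 1.00, 0.45],
    [0.60, 0.70, 0.05, 0.45, 1.00],
])
cov = np.outer(vols, vols) * corr  # covariance matrix from vols and correlations

def risk_contributions(w, cov):
    """weight_i * cov(asset i, portfolio) / portfolio variance; sums to 1."""
    port_var = w @ cov @ w
    return w * (cov @ w) / port_var

def return_contributions(w, rets):
    """weight_i * expected return_i; sums to the portfolio expected return."""
    return w * rets

# Example: an equally weighted portfolio
w = np.full(5, 0.2)
rc = risk_contributions(w, cov)
```

With equal weights, the risk contributions are far from equal: the low-volatility Bonds contribute far less risk than US Stocks, which is exactly the gap the Risk Parity example closes.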

I created a slightly more elaborate version of this app in the Coursera course, Developing Data Products, the ninth of ten courses I took as part of the Johns Hopkins University Data Science certification.


The post New Shiny app: portfolio risk visualizer appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

Here is a toy model I developed to illustrate the interaction between portfolio weights and contributions to portfolio risk. Access the app here: Visualize Portfolio Risk.

This is a slightly less elaborate version of my project in the Coursera course, Developing Data Products, the ninth of ten courses I took as part of the Johns Hopkins University Data Science certification.


The post The Low Hanging Fruit of Low Volatility Backtests appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

**“Look ma, I have skill!”**

An idea that would have been regarded as heresy in the 1990s has gained acceptance and respectability: the idea that investors are not rewarded for risk, systematic or otherwise.

Thanks to the long-term performance dominance of low volatility assets over the past few decades, producing an apparently informative backtest has become easy. Just make sure your ranking process favors lower volatility assets and your strategy has a nice tailwind.

Prior to the Fama/French litany, it had been just as easy to find attractive backtest results by favoring Value and Small Caps (and, eventually, Momentum). That’s why regressing against the Fama/French factors has become standard in any analysis of an investment anomaly or strategy hoping to be taken seriously.

While Small Caps performed better than Large Caps in the long run, they have endured cycles of being out of favor for entire decades. Value and Growth have seen pronounced cycles too, and by construction the stocks favored by Momentum today might be quite different from the stocks favored by Momentum a year ago or a year from now.

Further, stocks can conceivably cross over categories. Small Caps can become Large Caps; Value can become Growth, and today’s Momentum stock might not be one in a year.

**Stickiness of Low Volatility Means Low Volatility Strategies Enjoy Low Turnover**

But when you rank stocks by volatility, you’re ranking stocks by something more permanent than Momentum, Capitalization, or Valuation. Stocks ranked today as Volatile are likely to have been ranked as Volatile a year ago, and probably will be in another year (if they’re still around).

The table below displays rank correlations for the constituents of the SPY ETF as of July 29, 2011. At each year-end, stocks are ranked by trailing 1-year standard deviation of daily log returns; those ranks are then correlated with the following year’s ranks, computed both by standard deviation and by downside risk.

Year-end | Correlation of this year's Standard Deviation with next year's Standard Deviation | Correlation of this year's Standard Deviation with next year's Downside Risk
---|---|---
2009 | 0.84 | 0.84
2008 | 0.88 | 0.89
2007 | 0.72 | 0.76
2006 | 0.67 | 0.61
2005 | 0.87 | 0.85
2004 | 0.83 | 0.80
2003 | 0.77 | 0.77
2002 | 0.86 | 0.87
2001 | 0.77 | 0.82
2000 | 0.77 | 0.77
1999 | 0.83 | 0.78
1998 | 0.84 | 0.81
1997 | 0.83 | 0.79
1996 | 0.84 | 0.82
1995 | 0.87 | 0.85
1994 | 0.89 | 0.85
1993 | 0.86 | 0.79
1992 | 0.85 | 0.84
1991 | 0.86 | 0.82
1990 | 0.86 | 0.82
1989 | 0.77 | 0.76
1988 | 0.73 | 0.75
1987 | 0.71 | 0.82
1986 | 0.59 | 0.71
1985 | 0.69 | 0.74
1984 | 0.84 | 0.79

Source: PortfolioWizards

Notice two things: first, today’s volatile stocks tend to be next year’s; second, whether you rank by standard deviation or by downside risk doesn’t seem to make much difference.

While this data set is survivorship-biased, the pattern persists even in data sets free of survivorship bias. This means that if you want to manage a Low Volatility portfolio, chances are it won’t require high turnover.
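The rank correlations in the table can be illustrated with a quick simulation. This Python sketch (my own, using simulated volatilities rather than the SPY constituent data behind the table) computes a Spearman rank correlation between one year's volatilities and a noisy, "sticky" next-year version:

```python
import numpy as np

def rank(x):
    """Simple 0..n-1 ranking (assumes no ties, true w.p. 1 for continuous data)."""
    r = np.empty(len(x), dtype=float)
    r[np.argsort(x)] = np.arange(len(x))
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return np.corrcoef(rank(x), rank(y))[0, 1]

rng = np.random.default_rng(0)
# Simulated cross-section of 500 stock volatilities
vol_this_year = rng.lognormal(mean=-1.5, sigma=0.5, size=500)
# "Sticky" volatility: next year's vol is a noisy version of this year's
vol_next_year = vol_this_year * rng.lognormal(mean=0.0, sigma=0.2, size=500)
rho = spearman(vol_this_year, vol_next_year)
```

Because the simulated noise is small relative to the cross-sectional spread of volatilities, `rho` comes out high, mimicking the 0.6–0.9 range in the table.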

**Lake Wobegon Backtests**

We’ve now endured a few decades when systematic risk has mostly been punished, which is one reason you’re seeing so many “low volatility” and “minimum volatility” strategies appear. All these back-tested results are above average! Whether the return to systematic risk turns out to have been a myth remains an open question to many. But when you’re exploring new investment products and strategies, keep in mind that we’ve just lived through an era when almost any strategy with lower risk than cap-weighted indices would have outperformed those indices. Look for low volatility bias: is low volatility an explicit part of the author’s strategy, or an unintended consequence that happened to be in favor?

Given the superior performance of Low Volatility over the past several decades, the superior backtest results you see today are likely to have another feature in addition to exposure to the Fama/French factors: a Low Volatility tilt.

Not that there’s anything wrong with Low Volatility – just make sure that whenever you’re evaluating the performance of a manager, an ETF, or a vendor, you’re aware of how much of a Low Vol effect there is in the data.


The post Power-Assisted Diversification: A Goldilocks Approach to Benchmark Diversification appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

**Power-Assisted Diversification: Not Too Concentrated and Not Too Diversified**

Dorian (Randy) Young, CFA, CAIA

A number of years ago, as someone proficient with benchmark methodologies, I was asked by an investment measurement firm to consider a challenge they were having with their custom benchmarks. For their purposes, they found one set of their benchmarks too concentrated and another set too diversified. They had not found the “just right” benchmarks.

This is the story of how I developed a methodology I titled Power-Assisted Diversification, and how it can be applied to other situations today. And like the story of Goldilocks, this one also has a happy ending.

**The Challenge**

The firm was using two common types of equity benchmarks. One was weighted by each stock’s market capitalization—thus, capitalization weighted (CW). In the other, all stocks were weighted the same—thus, equally-weighted (EW). They were using these custom benchmarks as part of a larger analytical package measuring hypothetical portfolios of stock recommendations, and from their perspective, many of their CW benchmarks were too concentrated (not diversified enough), while their EW benchmarks were too diversified.

For example, a large stock like General Electric (GE) and a stock 1/100th the size of GE would have the same weight in an EW benchmark, while in a CW benchmark, GE would be 100 times the weight of the smaller stock. Their perspective was that because GE was so much larger, it was more important than the small company, and so it should have more weight—but 100 times the weight was too much for their purposes.

In fact, in their CW benchmarks, the ratio of the largest stock weight to the smallest stock weight (L/S Weight Ratio) was commonly above 100, in some cases exceeding 1000. The firm was more comfortable with L/S Weight Ratios in the single digits, which served as a target metric.

**The Solution**

A weighting methodology that produced benchmarks between the too-diversified EW benchmarks and the not-diversified-enough CW benchmarks would seem to satisfy their objectives, especially since other common benchmark design characteristics (e.g. return, risk, turnover) did not need consideration in their design phase given their specific purposes. As there are an unlimited number of mathematical solutions to this situation, the key challenge was to find an elegant one that would be easy to communicate to their clients.

One mathematically elegant property that existed in both their EW and CW benchmarks was a size-weight ratio consistency property. Given the size ratio of two stocks, the benchmark weight ratio was always explicit and independent of the benchmark. For example, if the size ratio of two stocks was 10, then the CW benchmark weight ratio was always 10, the EW benchmark weight ratio was always 1, and the new benchmark weight ratio would always be some number N, where N was between 1 and 10. The desired N would produce L/S Weight Ratios generally in the single digits and certainly less than 20.

At that time, their CW benchmarks contained stocks that ranged in size from about $100 million to less than $400 billion, a ratio of 4000. This ratio served as an upper limit for existing L/S Weight Ratios and was used to calculate an upper limit of N. To keep the new L/S Weight Ratios below 20, the upper limit of N was calculated from the equation

N^(log₁₀ 4000) = 20, which gives N = 20^(1 / log₁₀ 4000) ≈ 2.297

An N was now needed that was greater than 1, less than 2.297, and would serve to make the communication as elegant and as understandable as possible. There was also motivation to choose an N that was not too close to 1, because as N approached 1, the result would be approaching the too-diversified EW benchmarks.

Given these mathematical constraints and design objectives, the obvious choice was N=2. This choice led to the following simple explanation: “If a stock is 10 times the size of another stock, then its benchmark weight will be just 2 times the weight of the other stock—for all stocks in the benchmark, adjusted proportionately.”

Revisiting the General Electric example above, where the two stocks had a size ratio of 100, their benchmark weight ratio would now be 4—not too large and not too small.

The firm found this solution attractive and went forward with this specific methodology.

**The Specific Methodology**

To implement the size-weight ratio consistency, the methodology takes each stock’s market capitalization (MC) and raises it to the power of x, which is then used to weight the stocks.

As noted above, both the CW and EW benchmarks have this consistency: CW benchmarks are generated when x=1, and EW benchmarks are generated when x=0; these are two special cases.

For our N=2 case,

x = log₁₀ 2 ≈ 0.30103

and so weighting the stocks in the benchmarks by MC raised to the power of 0.30103 generated the desired result for the firm.

**The General Methodology**

Beyond the specific applications of x=0, x=1, or x=0.30103, there is a more general methodology where x is any number between 0 and 1. In this more general case, the weighting by MC raised to the power of x produces a benchmark more diversified than the CW benchmark but not as diversified as the EW benchmark. Hence, Power-Assisted Diversification.
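The general methodology is a one-liner. Here is a Python sketch (my own illustration; the market caps are hypothetical, chosen so consecutive stocks differ in size by a factor of 10):

```python
import numpy as np

def pad_weights(caps, x):
    """Power-Assisted Diversification: weight each stock by cap**x, renormalized.
    x=1 reproduces cap weighting (CW); x=0 reproduces equal weighting (EW)."""
    raw = np.asarray(caps, dtype=float) ** x
    return raw / raw.sum()

# Hypothetical caps: each stock 10x the size of the next
caps = np.array([400e9, 40e9, 4e9, 400e6])

# The N = 2 case uses x = log10(2) = 0.30103, so a 10x size ratio
# produces exactly a 2x weight ratio for every pair of adjacent stocks.
w = pad_weights(caps, np.log10(2))
```

The size-weight ratio consistency falls out of the power law: for a size ratio of 10^k, the weight ratio is (10^k)^x = N^k regardless of which two stocks are compared.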

Power-Assisted Diversification can be further generalized: it need not be based on a stock’s market cap, but can be applied to any benchmark weighting methodology, including fundamental weighting, GDP weighting, etc.

In fact, not only can Power-Assisted Diversification be applied to a benchmark’s weight factor, it can also be applied directly to any benchmark’s actual weights. Thus, it can be applied without knowledge of the benchmark’s underlying weighting methodology.

**Case Study: Power-Assisted Diversification Applied to the S&P 500 Index**

To explore how Power-Assisted Diversification can impact a benchmark today, a case study of the S&P 500 Index as of November 30, 2012 is considered.

By sorting the stocks in the index by their weights from largest to smallest, these 500 stocks can be segmented into 10 groups with the same number of stocks in each group (50), thus forming 10 deciles. Decile 1 contains the 50 largest stocks, while Decile 10 contains the 50 smallest stocks. One simple measure of benchmark concentration is the total stock weight contained in Decile 1.

TABLE 1 contains the decile weights for the S&P 500 Index after Power-Assisted Diversification has been applied using several values for x. In the table, the first column lists the deciles, and the next five columns display the decile weights for these five values of x:

- x = 1, which is identical to the CW and the S&P 500 Index itself
- x = 0.5, which is identical to taking the square root of the MC
- x = 0.33333, which is identical to taking the cube root of the MC
- x = 0.30103, which was used for the firm above
- x = 0, which is identical to the EW and the S&P 500 Equal-Weighted Index

The top decile of the S&P 500 Index contains over 50% of the entire benchmark weight, and when this concentration is considered too high, Power-Assisted Diversification can lessen it. In the case of x = 1/2, the top decile concentration decreases to just over 25%, and when x = 1/3, this concentration is just under 20%. Using x = 0.30103 from above, this concentration is about 18%, and when x = 0, the result is the EW benchmark where the 10% concentration is at its lowest.

As part of the Power-Assisted Diversification application, a decision is required regarding which value of x to use. There are two approaches to choosing x: first, a specific value of x (e.g. 0.5) can be used at every rebalancing; second, a varying value of x can be used to achieve a specific result, such as a specific concentration of the top decile—requiring the value of x be backed into. An example of this second approach would be a requirement of the top decile to contain exactly 25% of the total index weight; the final column in TABLE 1 shows the backed-in value of x = 0.48358 in this example does produce the desired 25% concentration.

TABLE 2 displays how these values of x affect the stock weights for the five largest and five smallest stocks (top and bottom percentile) and impact the L/S Weight Ratio. The S&P 500 Index has an L/S Weight Ratio of 352, which declines to 19 when x = 1/2 and further to 7 when x = 1/3. This decline would be even more pronounced for all-cap benchmarks.

Many other measures of benchmark concentration can be calculated in this manner, and one of particular interest is the Concentration Coefficient as this calculation includes the weight of every stock in the benchmark. Mathematically, a benchmark’s Concentration Coefficient is the inverse of the sum of all weights squared. Conceptually, a benchmark’s Concentration Coefficient is the number of stocks in a concentration-equivalent EW benchmark. Thus, an EW benchmark’s Concentration Coefficient will equal the number of stocks in the benchmark; and as benchmarks move away from EW and become more concentrated, the Concentration Coefficient will decline.

For example, in TABLE 3 the Concentration Coefficients for the CW (x=1) and the EW (x=0) S&P 500 Index are 121 and 500, respectively. Thus, the CW S&P 500 Index has the same amount of concentration as an EW benchmark with 121 stocks, while the EW S&P 500 Index has the same amount of concentration as an EW benchmark with 500 stocks—itself.

Power-Assisted Diversification produces Concentration Coefficients between the CW and EW values. For example, when x = 1/2, it is 351; when x = 1/3, it is 433.
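The Concentration Coefficient is simple to compute from any weight vector. A minimal Python sketch (mine, with a made-up weight vector for the concentrated example):

```python
import numpy as np

def concentration_coefficient(weights):
    """Inverse of the sum of squared weights: the number of stocks in a
    concentration-equivalent equally weighted benchmark."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize, in case raw weights are passed
    return 1.0 / np.sum(w ** 2)

# An EW benchmark's Concentration Coefficient equals its stock count
cc_ew = concentration_coefficient(np.ones(500))

# A concentrated (hypothetical) 3-stock benchmark has a CC well below 3
cc_concentrated = concentration_coefficient([0.90, 0.05, 0.05])
```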

While CW benchmarks are the most widely used benchmarks by investors today, the 21st century has seen a huge growth in non-CW benchmarks—“alternative equity index strategies”—being created and used, particularly in the proliferation of exchange traded funds (ETFs). These alternative index weightings (also known as “alternative beta”, “smart beta” and “better beta”) include EW, fundamental weighting, and many specific to individual ETF managers as more benchmark designers have become aware that the benchmark weighting methodology will have a meaningful impact on important benchmark characteristics, including return, risk, turnover, concentration, and exposure to size, value, and rebalancing premia.

How these characteristics could be impacted by Power-Assisted Diversification would be best understood by historical simulation research, although a rough expectation would be that they would generate results between any benchmark and an EW version of that benchmark. A more refined expectation could come from a 2011 research paper by Research Affiliates. In that paper, several benchmark weightings are compared, including CW, EW, and a method named Diversity-Weighted Equity Indexing that—like Power-Assisted Diversification—seeks a weighting between CW and EW. The Diversity-Weighted Equity Indexing methodology was described in 1995 and 1998 papers and developed by Dr. Robert Fernholz at INTECH using stochastic calculus—which would have rendered this approach too complex for the firm in this story. However, analysis of Diversity-Weighted Equity Indexing versus CW and EW in the paper can offer a hint as to how Power-Assisted Diversification may impact benchmark characteristics, including return and risk.

Benchmark designers, especially for ETFs, can use Power-Assisted Diversification to help shape the concentration, diversification, and other characteristics of their benchmarks. Additionally, any benchmark that has a high concentration of weight in a country, region, sector, industry, etc., can have Power-Assisted Diversification applied to lessen these concentrations.

**Summary**

Power-Assisted Diversification is a simple methodology that can be applied to any non-EW benchmark to shape its diversification and concentration profile and perhaps improve other characteristics, including return and risk. As more benchmarks with alternative weightings are being designed for the ETF industry, benchmark designers have the opportunity to use Power-Assisted Diversification in the development of these new indexes.

In the case of the firm in this story, Power-Assisted Diversification with x = 0.30103 was so attractive to them that they named me their honorary employee of the month—a happy ending.

**Appendix A: Interactive Worksheet Tool**

Table 4 reproduces Table 1 from above in interactive format where you can change the value of X (in the yellow-shaded cell) and view the resulting decile weights.

The post Power-Assisted Diversification: A Goldilocks Approach to Benchmark Diversification appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

The post The Risk Parity Tower of Babel appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

The first several times I heard of or read about risk parity I was puzzled. The media, it seemed, had distilled descriptions of risk parity into some variation on “a leveraged bond portfolio” or “portfolio in which the bonds are leveraged until they have the same volatility as equities.” The first time I grasped what risk parity really is was when I read Chris Levell’s piece for NEPC.

It was clear that risk parity was something broader than as described by the media: *it was simply a portfolio in which the assets’ contributions to variance were equal.* Notice the lack of any mention of the words “bonds” or “leverage” in that definition.

Somehow the attention the investment media pays to risk parity focuses on examples of multi-asset class portfolios that use leverage. In Qian’s earliest writing on risk parity, he described one possible implementation as simply an alternative to a bond portfolio, one with better diversification than a bond-only portfolio.

Bottom line: you can find examples of risk parity portfolios with no bonds and with no leverage, and yet when people talk about risk parity they almost always refer to them as leveraged bond portfolios. The Risk Parity Tower of Babel (RPTB) endures.

**RPTB Comes to Chicago**

I was glad to see that Eugene Fama had agreed to appear at the CFA Institute annual meeting in Chicago this past spring. Fama is always interesting, opinionated, and he does not speak in public often enough. Former students know he is often quite funny, sometimes hilarious. At times taking a class from him seemed like taking a class from Don Rickles.

In class Fama spent considerable time discussing empirical papers. Whenever he wanted to fast-forward the discussion he would just ask, “OK, what’s the punch line?”

**Fama delivers the punch line**

Listen to this interview of Fama at this year’s CFA conference in Chicago.

Even if you’re not a CFA Institute member you may listen to the session in iTunes, and I encourage you to do so.

While it is interesting throughout, the session becomes comical at the 41st minute. Here is my attempt at a transcription:

41:07-41:37

Interviewer: “What do you think of the risk parity asset allocation strategy?”

Fama: “Never heard of it.”

Fama: “What is it…”

Interviewer: “OK…” (laughter) “uh…”

Fama: “…in my terms?”

Interviewer: “I think it’s the idea that you, uh, find a bunch of different asset classes…”

Fama: “mm-hmm”

Interviewer: “…and then use leverage to get them all to the same volatility. Equal risk across different asset classes…”

Fama: “OK.”

Interviewer: “and, uh, Bridgewater has done this very successfully for a long time.”

Fama: “OK. **Stupid!”**

(Loud laughter)

Interviewer: “OK.”

Fama: “If you think about your portfolio problem, you never start with a proposition like that. What you’re thinking about if you’re a mean-variance investor is, how do I form these things to minimize variance? That would not tell you to lever them up all in the same way.”

**Chicago Booth comedy**

One of Fama’s best known students is Cliff Asness, co-founder of AQR and one of the world’s best known hedge fund managers. Asness is also one of the most vocal proponents of risk parity. I assume Fama and Asness are on pretty good terms, since in 2004 Asness endowed $1 million to Chicago Booth for a classroom in Fama’s name. Fama asking “what is it?” shows that when he and Asness communicate, they’re not talking shop!

**Reconciling “stupid” with non-stupid practitioners**

Fama is correct: A mean-variance investor would never leverage bonds as the interviewer described.

At the same time, Cliff Asness is not stupid, and neither is risk parity. Nor are Ray Dalio, Wai Lee, Bob Prince, or Ed Qian, to name a few of its better known practitioners.

I will attempt to explain why risk parity is not stupid, and reconcile the explanation with Fama’s take. It’s really quite simple:

**Risk Parity investors are not mean-variance investors!**

As Fama elaborated on his answer he qualified his interpretation as that of a mean-variance investor. But a risk parity investor is not a classic mean-variance investor!

**The difference between a mean-variance investor and a risk parity investor**

A mean-variance investor’s objective is to maximize return relative to risk, consistent with his utility for return versus risk. A risk parity investor’s objective is to maximize a specific type of diversification: equal contributions to portfolio variance. By ignoring expected return in his objective function, a risk parity investor is implicitly skeptical or agnostic about modeling expected returns. No “mean” means no “mean-variance.”

The interviewer’s description was due to the Risk Parity Tower of Babel. I would have preferred the interviewer to have said “risk parity is an allocation method in which the assets’ contributions to portfolio variance are equal. By the way, Cliff Asness is one of its loudest evangelists.”


The post A reader asks: Can a risk parity portfolio have short positions? appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

Recently a hedge fund manager who had been playing with the risk parity workbook contacted me. He asked whether it is possible to have a risk parity portfolio with short positions. The short answer is: sometimes.

To test this idea I used the 5-asset risk parity workbook and altered its configuration. To allow one asset’s weight to be negative, I removed the constraint Weight >= 0 and added a constraint that the weight be less than zero for the asset I wished to sell short. I was able to obtain risk parity portfolios with Commodities sold short, with REITs sold short, and with Bonds sold short. However, I was unable to find risk parity allocations with either US Stocks or International Stocks sold short.

Solver did not find the weights for the short-position solutions in Commodities, REITs, or Bonds without help. I had to play with the starting weights (especially for Bonds) to give Solver a hint.
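For readers without Excel's Solver, the long-only base case can be sketched in Python with a damped fixed-point iteration (my own method, not the workbook's; it uses the workbook's made-up assumptions and does not handle the short-sale sign constraints discussed above, which need a constrained optimizer):

```python
import numpy as np

# The same made-up assumptions as the workbook (US, Int'l, Bonds, REITs, Commodities)
vols = np.array([0.20, 0.25, 0.05, 0.25, 0.25])
corr = np.array([
    [1.00, 0.95, 0.15, 0.85, 0.60],
    [0.95, 1.00, 0.30, 0.80, 0.70],
    [0.15, 0.30, 1.00, 0.25, 0.05],
    [0.85, 0.80, 0.25, 1.00, 0.45],
    [0.60, 0.70, 0.05, 0.45, 1.00],
])
cov = np.outer(vols, vols) * corr

def risk_parity_long_only(cov, iters=5000, damp=0.5):
    """Damped fixed-point iteration toward equal risk contributions.
    At the fixed point, w_i * (cov @ w)_i is the same for every asset."""
    n = len(cov)
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        target = 1.0 / (cov @ w)      # shift weight toward low marginal risk
        target /= target.sum()
        w = damp * w + (1 - damp) * target
    return w

w = risk_parity_long_only(cov)
rc = w * (cov @ w) / (w @ cov @ w)    # contributions to portfolio variance
```

As expected, the low-volatility Bonds receive by far the largest weight, and each asset's contribution to variance converges toward 1/5.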

*NB: The output from this workbook reflects its inputs, which I made up. Do not mistake them for forecasts or predictions. The numbers are intended as placeholders for your own assumptions. Do not rely on the numbers provided.*

**5-asset Risk Parity portfolio, Short Commodities**

**5-asset Risk Parity portfolio, Short REITs**

**5-asset Risk Parity portfolio, Short Bonds**

The “short bond” portfolio is long 100% risk assets and short nearly 300% bonds. Interestingly, its allocation resembles a poor man’s 2x inverse of E-Trac’s risk-off ETN, OFF: short nearly 3x capital in Bonds and long everything else.

**This does not prove risk parity solutions are always possible, never mind advisable**

It really depends on your assumptions about your assets’ covariances. I populated the risk parity workbooks with fictitious data, including a correlation coefficient between US and International stocks of 0.95, which is really more of a “stress environment” correlation coefficient than a typical one. With a lower correlation coefficient between them, the possibility for a risk parity portfolio grows.

**Caveat Venditor!**

While this is a fun diversion, keep in mind that the distribution of returns for an asset sold short is quite different from that of an asset held long. A covariance alone is probably insufficient to capture the risk of a short position. A long position’s worst possible return is -100%; a short position’s is unlimited. It’s easy for inexperienced investors to dismiss that difference, but from a risk management perspective that difference is critical. Further, the dynamics of managing a portfolio with short positions feel backward. “Good” trades result in smaller and smaller positions, which means if you wish to adhere to your risk budget you have to sell short more and more. “Bad” trades grow position sizes, requiring you to buy (gulp) something that has risen in price in order to trim your position.


The post Why Rational Investors Hold (Some) Volatile Assets appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

For several years I managed quantitative equity portfolios at Freeman Investment Management. The firm had been a pioneer in creating low volatility strategies, both long-only and long-short.

In addition to managing low volatility portfolios, at Freeman we had been advocating using volatility indices instead of style indices, both as performance benchmarks and as explanatory variables in style analysis. We had created and maintained our own analogs to the Russell style indices in which we divided the Russell 1000 and 2000 universes into Low Volatility and High Volatility halves, reconstituting on the same schedule as Russell. The table below offers a performance summary.

Return measure | Freeman 1000 Low Volatility | Freeman 1000 High Volatility | Freeman 2000 Low Volatility | Freeman 2000 High Volatility
---|---|---|---|---
Arithmetic mean | 11.93% | 11.96% | 14.62% | 12.31%
Geometric mean | 11.64% | 10.45% | 14.32% | 9.55%
Std. Deviation | 13.13% | 19.68% | 15.04% | 24.77%

Annualized monthly return statistics, 1979 – January 2010. Excludes IPOs. Source: Freeman Investment Management

As unimpressive as these results are for High Volatility stocks, keep in mind these are well diversified cohorts containing several hundred stocks. The results are even worse for the High Volatility stocks if you slice the universes into volatility deciles.

**Given these results, why would a rational investor hold High Volatility stocks?**

We used to ask clients, consultants, and prospects that question all the time. It was rhetorical, but I believe I now know the answer.

In September 2011 I wrote a post about modeling expected returns. In that post I wrote about the method described by Jacquier, Kane, and Marcus for obtaining an unbiased estimate of return over a forecast horizon of length H, given observations from a sample period of length T.

It surprises many to learn that the unbiased estimation formula is a weighted average of the Arithmetic mean and the Geometric mean of the sample return data, and that the weights depend on the ratio of the length of the future horizon, *H*, to the length of the sample period, *T*:

Expected log return over H periods = (1 − H/T) × A + (H/T) × G

where A and G represent the respective logs of the arithmetic and geometric means of the sample period.

**So what does this have to do with rational investors holding volatile assets?**

This formula implies that, given a long enough forecast horizon H, *all assets with positive volatility have an unbiased expected return that is negative*. Not just High Volatility assets, *ALL* assets. The Arithmetic Mean is always greater than the Geometric Mean, so if you keep extending the forecast horizon H, eventually the unbiased forecast return becomes negative.
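A quick Python sketch (with hypothetical sample statistics A, G, and T of my own choosing) shows the estimate sliding negative as H grows:

```python
def jkm_expected_log_return(A, G, H, T):
    """Jacquier-Kane-Marcus unbiased estimate of expected log return for a
    forecast horizon of H periods, from a sample of length T.
    A and G are the logs of the sample arithmetic and geometric means."""
    return (1 - H / T) * A + (H / T) * G

# Hypothetical sample: A > G always holds for any asset with positive volatility
A, G, T = 0.10, 0.08, 30

horizons = [10, 30, 100, 300]
estimates = [jkm_expected_log_return(A, G, H, T) for H in horizons]
```

Note the intermediate cases: at H = T the estimate is exactly the geometric mean G, and because A > G the estimate keeps falling as H grows, eventually turning negative once H/T exceeds A / (A − G).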

When you rebalance a diversified portfolio, you are engaging in volatility capture. So long as you rebalance regularly, as a rational investor you should hold volatile assets in your portfolio; how much depends on your rebalancing and holding period horizons.
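A toy illustration of volatility capture (assumed alternating returns, no transaction costs): buying and holding a volatile asset whose geometric return is exactly zero goes nowhere, while a portfolio rebalanced each period back to 50% asset / 50% cash compounds positively:

```python
# Alternating +25% / -20% returns: (1.25)(0.80) = 1, so buy-and-hold
# has exactly zero geometric growth over any even number of periods.
asset_returns = [0.25, -0.20] * 50  # 100 periods

bh = 1.0  # buy and hold the volatile asset alone
for r in asset_returns:
    bh *= 1 + r

reb = 1.0  # rebalance to 50% asset / 50% cash (0% return) each period
for r in asset_returns:
    reb *= 1 + 0.5 * r  # the portfolio earns half the asset's return

print(bh, reb)  # bh ends ~1.0; reb compounds to ~1.86
```

The rebalanced mix systematically sells after the up moves and buys after the down moves, which is the "volatility capture" the paragraph above describes.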

The post Why Rational Investors Hold (Some) Volatile Assets appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

The post Tom Anichini Joins GuidedChoice appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

As much fun as I’ve been having consulting and blogging for PortfolioWizards, I have decided to accept an offer to join GuidedChoice, a robo-advisor located in San Diego.

GuidedChoice has an accomplished team of professionals including Sherrie Grabot, Harry Markowitz, Ming Yee Wang, and Ganlin Xu. They have worked together for over a decade. I look forward to joining the team and contributing to the firm’s future.

Thank you to all my clients, friends, and supporters.

PS: Just don’t turn off that RSS feed.

The post Alternative Time Windows for Evaluating Performance appeared first on Investment Solutions - Portfolio Construction Experts | PortfolioWizards.

Continuing my examination of a 200-day Moving Average (200MA) strategy for mitigating downside risk, I recently looked at how the picture changes when you alter the time window used to assess these strategies.

Below is a scatterplot of a dynamically managed strategy using 200MA (vertical axis) versus Buy & Hold (horizontal axis). The comparison ignores transaction costs, slippage, and interest on cash.

One day price returns, S&P 500 Stock Index, October 19, 1950 – December 16, 2011

*Source: PortfolioWizards, Yahoo! Finance. Past performance does not predict future performance.*

The plot above doesn’t tell you much: you basically see a diagonal line for days when the 200MA strategy is invested and a horizontal line for days when it is not.

Group these trading days into proper calendar months, and the picture shifts a bit.

One month price returns, S&P 500 Stock Index, November 1950 – November 2011

*Source: PortfolioWizards, Yahoo! Finance. Past performance does not predict future performance.*

Now we’re starting to see something vaguely resembling the payoff diagram of an at-the-money call option, but not quite. One thing is clear: this scatterplot has an unequivocal lack of observations in the upper-left quadrant. If there is a return advantage at all, it DOESN’T come from positive returns while the market is down; it would have to come from compounding zero or small negative returns when the market is down a lot more.

What if we freed ourselves from examining calendar-based time windows? What if we just looked at “round-turns,” where a “round-turn” is measured over a full cycle of being fully invested and fully uninvested, regardless of how long or short those cycles might be?
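A sketch of how such round-turns might be extracted (my own illustration, not the post's actual methodology): flag each day as invested when the close is above its trailing 200-day simple moving average, then split the flag history at each re-entry point.

```python
def moving_average_signal(prices, window=200):
    """1 = invested (close above trailing moving average), 0 = in cash.
    `prices` is a plain list of daily closes; no signal is emitted
    until the window fills."""
    signals = []
    running_sum = 0.0
    for i, p in enumerate(prices):
        running_sum += p
        if i >= window:
            running_sum -= prices[i - window]  # drop the oldest price
        if i >= window - 1:
            ma = running_sum / window
            signals.append(1 if p > ma else 0)
        else:
            signals.append(0)
    return signals

def round_turns(signals):
    """Split the signal history into full invested/uninvested cycles:
    each round-turn runs from one market entry to just before the next,
    so it spans one fully invested stretch and one fully uninvested one."""
    turns, start = [], None
    for i in range(1, len(signals)):
        if signals[i - 1] == 0 and signals[i] == 1:  # entry point
            if start is not None:
                turns.append((start, i))
            start = i
    return turns
```

As the post notes, these cycles have no fixed length: a round-turn can span two trading days or several years, which is exactly why calendar windows obscure the pattern.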

When we do this, the picture clarifies.

Round-turn price returns, S&P 500 Stock Index, October 19, 1950 – December 16, 2011

*Source: PortfolioWizards, Yahoo! Finance. Past performance does not predict future performance.*

This picture is remarkably similar to that of an at-the-money call option. It would appear that an effective 200MA-based volatility management strategy works by letting you participate in protracted positive runs while limiting your participation in protracted negative runs.

**What are the risks?**

Several:

- transaction costs might be considerable
- round-turns could be as short as two trading days or as long as several calendar years
- total returns would be different, as these plots ignore the positive returns due to dividend income
- this strategy is vulnerable to price jumps in both directions – it offers no protection for abrupt price drops in tame market environments, and won’t participate in rebounds after protracted market declines
- this discussion is silent on how to implement this information in your asset allocation process, or whether you even should do it at all
