The post We’re Launching Our 2nd Blog: SKEW appeared first on GestaltU.

Every day, we consume a formidable amount of information. Some of the content is intriguing, some amusing, and some infuriating. And we have thoughts on many of those pieces. Yet, rather than writing on topics outside GestaltU’s characteristic long-form investment research, we have thus far chosen to focus on the single content style that long-time readers have come to expect. In order to maintain such high standards, we’ve rarely ventured outside of those confines. But as we’ve grown as writers, it’s become more difficult to contain the breadth of topics we’d like to post on, and the voice we’d like to write with.

Hence, Skew.

Among the many reasons for launching Skew, we hope to:

- **Be respectful of your time.** We know that not everyone has the time or inclination to consume 1,500-word essays. Skew, wherever possible, will present short-form content.
- **Write in English.** Many of the posts on Gestalt are highly technical and difficult to understand. With Skew, we want to distill long articles down to the salient points and translate technical information into understandable prose with actionable intelligence.
- **Post more frequently.** It takes a remarkable amount of time to research, write, polish and publish a long-form essay. This is why we’re rarely able to post more than 1-2 articles per month. Our goal for Skew is to publish more frequently, on issues that are informative, relevant and timely.
- **Write a bit off-topic.** Did you see last year’s foray into inane NCAA March Madness pool rules? That kind of post has Skew written all over it!
- **Clarify our thinking.** We find that writing helps us clarify our own thinking on complex issues. Skew provides us with an additional outlet to help us process our own thoughts on markets, economies and the world at large.

For our long-time readers, let us be clear: **GestaltU** **will continue to bring you the same style of content you’ve come to expect from us.** Skew is an addition, not a replacement; we don’t expect it to draw resources away from the development of our long-form content.

One last thing, since the timing seems so good. We just published our 2015 update to last year’s March Madness blog post. You can find our 2015 update here.

Enjoy!


The post Winning By Not Losing: Bootstrap Quantile Clouds appeared first on GestaltU.

If you are an average investor with a typically basic understanding of investing, **Rule #1** above will probably make perfect sense. However, if you are a financial professional, you will probably have to read the statement a few times before it becomes clear.

Here’s why: from the moment you enter into the field of finance, you are taught to think of risk in terms of volatility. But in the context of Rule 1, it is impossible to quantify risk in this way. That’s because financial goals are usually framed in terms of target wealth, and wealth outcomes are a function of both volatility and expected returns. As such, risk is the range of wealth outcomes that might be expected at the investment horizon. (Note that, from a financial standpoint, wealth and portfolio income are inextricably linked – maximizing one will necessarily maximize the other – so we will focus on wealth. Note also that the sequence of returns matters too, but we won’t address that here.)

Investors will therefore prefer portfolios that provide for the highest expected return, in excess of their minimum required return, under adverse assumptions, and that match their tolerance for failure. Conservative investors may wish to invest in the portfolio that provides the highest expected return above their minimum required return of 3% for example, under the 5% least favourable conditions. More adventurous investors might tolerate a 25% chance that terminal wealth falls below their target. Only investors with lottery-like preferences will choose portfolios based on the best outcomes under the most favourable conditions.

Note that this is further complicated by the fact that investors’ time horizons are, in practice, just a small fraction of their true financial time horizons. For example, a 50 year old investor may have a life expectancy of another 35 years, and wish to leave a legacy amount with a much longer time horizon still. However, data on investor behaviour suggests that this investor is unlikely to stick with a strategy for much longer than 4 or 5 years. Figure 1, from Dalbar’s 2014 Quantitative Analysis of Investor Behavior report, shows that investors in stock and bond mutual funds tend to stick with their strategy for about 3 years, while diversified investors (‘asset allocation’) have historically held on for almost 5 years.

Figure 1. Average Mutual Fund Retention Rates (1995 – 2014)

From a modelling and planning perspective, there is little value in setting investor portfolio preferences for a 35 year time horizon if they are going to change their portfolio every 5 years or less. Rather, the objective should be to recommend a portfolio that the investor is likely to stick with through thick or thin, but also where the investor is likely to come to the least amount of harm if he or she abandons the strategy after experiencing an adverse period.

The fundamental question to answer is this:

- If an investor can tolerate *p* probability of not reaching target wealth, and;
- If an investor’s *emotional* time horizon before abandoning a strategy under adverse conditions is *y* years, and;
- The investor requires a minimum return of *r* to reach his wealth target, then;
- Which portfolio minimizes the probability that the investor will NOT reach his financial goals?

You will note that this question is just a more detailed way to frame **Rule #1**.

There are complex analytical models that can answer this question with great precision, but most share two subtle but important flaws: they assume portfolio returns are independent and identically distributed, and they assume returns are normally distributed. It turns out that these assumptions are not actually very impactful, and it would have been relatively easy to solve the problem by applying a multi-period Roy’s Safety First model, or through traditional Monte Carlo analysis. But we thought we’d go the extra mile.

Bootstrapping is a simulation method, much like Monte Carlo, which provides a way to generate sample paths for wealth. However, where Monte Carlo analysis generates random returns from the normal distribution, bootstrapping generates sample returns from the empirical distribution described by actual historical observations. For example, when creating a random return path for the S&P 500 through bootstrapping, actual observed historical monthly returns from the S&P 500 are drawn at random, usually with replacement. To create a 5 year sample path, you would draw 60 monthly returns; to create 1000 5-year sample paths, you would repeat that 60-draw process 1000 times.
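The mechanics above can be sketched in a few lines of Python. This is a minimal illustration, not our production code, and the synthetic return series here merely stands in for a real S&P 500 monthly history:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_paths(monthly_returns, n_months=60, n_paths=1000):
    """Build alternative wealth trajectories by drawing monthly returns
    at random, with replacement, from the empirical distribution."""
    monthly_returns = np.asarray(monthly_returns)
    # One row per path, one column per month
    draws = rng.choice(monthly_returns, size=(n_paths, n_months), replace=True)
    # Cumulative wealth along each path, starting from 1.0
    return np.cumprod(1.0 + draws, axis=1)

# Synthetic stand-in for an actual monthly return history
history = rng.normal(0.007, 0.04, size=1200)
paths = bootstrap_paths(history, n_months=60, n_paths=1000)
print(paths.shape)  # (1000, 60)
```

With a real dataset, you would simply pass the observed monthly total returns in place of the synthetic `history` array.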

Each 5-year sample path represents an alternative history for the S&P 500. As a result, this process illuminates a fascinating and rarely contemplated reality for investors: they have observed just one of an (almost) infinite number of possible trajectories that the S&P 500 might have taken. While the path we observed evokes the only narrative that we can possibly understand, and seems positively *inevitable* in hindsight, any one of the alternative paths we created was equally likely *ex ante*. Only after time has passed, and the actual path has unfolded, can we look back and identify the one line that fits with our experience.

Now let’s look in the other direction – into the future. Remember, before history unfolded, we knew that there was a murky cloud of infinite possibilities for how the S&P 500 could have evolved. We now sit at the beginning of a new timeline that reaches out into the future before us. As such, we look out on the same murky cloud of possibility. Fortunately, however, this cloud has structure, and we can quantify this structure in meaningful ways.

Figure 2. is what we call a ‘Quantile Cloud’ for the S&P 500. It is formed by generating 100,000 alternative 5-year paths, randomly drawn from actual monthly **total returns including dividend reinvestment** observed over the period 1880 – 2014, using bootstrap with replacement. Again, bootstrapping preserves the *empirical distribution* of realized returns rather than imposing a distribution on the data.

Figure 2. 5-Yr Bootstrap Quantile Cloud for S&P 500

Let’s take a minute to get to know the Quantile Cloud. Each barely perceptible microscopic line on the chart (not the multi-colored thicker lines) represents one path of 100,000. Where the density of lines is high, near the middle of the cloud, the colour shifts into the pink spectrum. Where there are few observations, near the edges of the cloud, the colour shifts into the blue end of the spectrum. The very densest middle of the cloud, highlighted by the green line, represents the median outcome. The individual thin blue lines which can be distinguished near the very top and very bottom of the cloud are statistically possible, but extremely unlikely. The black line running through the middle of the chart represents the ‘break-even’ line; that is, where the portfolio has returned 0% over the period.

The coloured lines on the chart represent the quantile wealth at each horizon. For example, the red line at the very bottom represents the 5th percentile wealth outcome for the S&P 500 each month. Notice that the red 5th percentile and the orange 10th percentile lines fall below the black line at all periods, which means that over 5 years there is at least a 10% chance of seeing your wealth fall below starting wealth when investing in the S&P 500. In fact, the pale green 20th percentile line falls below the black line for over 30 months, which means there is a 20% chance of seeing your wealth fall below starting wealth over almost 3 years.

There is a summary of quantile geometric returns over the full 5 year period at the top left of the chart. Note that the 5th and 10th percentile growth rates are negative, consistent with what we observed above: wealth below its starting level at every horizon. All quantiles at the 20th percentile and above show positive returns. The median growth rate is 8.8%.
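The quantile growth rates summarized on the chart can be computed directly from simulated wealth paths. A sketch, using a synthetic stand-in for the bootstrapped paths (the numbers it prints are illustrative, not the figures from the chart):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 1,000 bootstrapped 5-year wealth paths (start value 1.0)
returns = rng.normal(0.007, 0.04, size=(1000, 60))
wealth = np.cumprod(1.0 + returns, axis=1)

def quantile_growth(wealth_paths, pct):
    """Annualized geometric growth rate at a terminal-wealth percentile."""
    years = wealth_paths.shape[1] / 12.0
    w = np.percentile(wealth_paths[:, -1], pct)
    return w ** (1.0 / years) - 1.0

for p in (5, 10, 20, 50):
    print(p, round(quantile_growth(wealth, p), 4))

# Chance of ending the 5 years below starting wealth
p_loss = np.mean(wealth[:, -1] < 1.0)
print(p_loss)
```

The 10th-percentile entry, for example, gives the annualized growth rate an investor would expect to beat roughly 90% of the time under resampled history.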

Now let’s apply this quantile chart in the context of our **Rule #1**. Let’s assume an investor has a history of switching strategies or advisors about every 5 years, which represents her true time horizon. Let’s also assume the investor requires a 2.5% return to achieve her minimum target wealth, and can tolerate a 10% chance of failure. How appropriate is a 100% investment in the S&P 500 in the context of these preferences?

Note that the 10th percentile return from our S&P 500 Quantile Cloud is -1.1% over this investor’s time horizon of 5 years. But the investor requires a 90% chance of achieving 2.5% over 5 years to meet her minimum target wealth. Clearly, the S&P 500 portfolio is not a good match for this investor’s preferences.

If an all-stock portfolio will not meet this investor’s objectives, let’s consider some other portfolio options. Figures 3-5 present 5-year Quantile Clouds for a U.S. 60/40 balanced portfolio, the Global Market Portfolio[1], and a Global Risk Parity[2] (Equal Risk Contribution, or ERC) portfolio respectively, all rebalanced quarterly. All data is total return including reinvested dividends and interest, but does not account for fees, transactions, or taxes. [See notes [1] and [2] at the bottom of this article for a description of these portfolios.]

Figure 3. U.S. Balanced 60/40 S&P 500/US 10-year Treasury Quantile Cloud

Figure 4. Global Market Cap Weighted Portfolio Quantile Cloud

Figure 5. Global Risk Parity (Equal Risk Contribution) Quantile Cloud

Let’s see if we can come closer to meeting our investor’s preferences with these alternative portfolios. Recall that we are searching for the portfolio which we expect to achieve at least 2.5% annualized growth over a 5 year horizon, 90% of the time; that is, at the 10th percentile. First note that the median expected returns for the US balanced, Global Market Cap, and Global Risk Parity (ERC) portfolios are 7.6%, 5.7%, and 5.3% respectively. All of these portfolios present median expected returns that are lower than the median return from the all-stock S&P 500 portfolio.

Yet as we glance down the quantile returns, a different story emerges. At the 10th percentile risk tolerance reflected by our model investor, the expected returns to the stock, balanced, global market cap and global risk parity portfolios are -1.1%, 1.4%, 1.9% and 2.5% respectively. The order of returns is reversed: despite reflecting the lowest average return over 5 years, the Global ERC portfolio presents the highest return at the investor’s risk tolerance of a 10% chance of failure.
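The selection logic can be sketched with the growth rates quoted above. The portfolio names and figures come from the text; treating the 2.5% requirement as a threshold on the 10th-percentile growth rate is our reading of **Rule #1**:

```python
# Quantile growth rates quoted in the article (annualized, 5-year horizon)
portfolios = {
    "S&P 500":            {"median": 0.088, "p10": -0.011},
    "US 60/40":           {"median": 0.076, "p10": 0.014},
    "Global Market Cap":  {"median": 0.057, "p10": 0.019},
    "Global Risk Parity": {"median": 0.053, "p10": 0.025},
}

required = 0.025  # investor's minimum required return over the horizon

# Rule #1 screen: keep portfolios whose 10th-percentile outcome still meets
# the minimum, then prefer the highest return at that percentile
eligible = {k: v for k, v in portfolios.items() if v["p10"] >= required}
best = max(eligible, key=lambda k: eligible[k]["p10"])
print(best)  # Global Risk Parity
```

Note that ranking by median return instead would pick the S&P 500; ranking at the investor's tolerated percentile reverses the order.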

The goals of this exercise were threefold:

- Reframe the primary investment objective in terms of the risk of not achieving a client’s financial target;
- Provide a visual model for the extreme random nature of returns, and;
- Illustrate how high average expected returns do not necessarily mean an investor is more likely to meet their financial goals over a finite horizon.

By framing **Rule #1** with Quantile Clouds, we can see how portfolio return and risk interact under a true random process to meet different investor preferences. Obviously there are ways to solve these problems with greater precision, but the intuitive visualization offered by Quantile Clouds may make it easier for clients to understand the risk/return tradeoff, and potential value of global diversification.

**Notes**

1. The Global Market Cap portfolio is an approximation based on USD Global Financial Data extensions for US equities, EAFE equities (manually assembled by Wouter Keller), emerging market equities, U.S. corporate bonds, 10-year U.S. Treasury bonds, U.S. high-yield bonds, international government bonds, REITs, and TIPS. Returns for the period 1880 to the mid-1920s are from the S&P 500 and U.S. Treasuries only. TIPS enter the portfolio in 1973. Weights are contemporary market cap weights and do not reflect changes to portfolio constitution that occurred through time. For illustrative purposes only.

2. The Global ERC portfolio is rebalanced quarterly to Equal Risk Contribution weights for all of the assets described in Note 1. The variance-covariance matrix is formed from rolling 12-month returns.


The post The Narrative is Reality appeared first on GestaltU.

― Umberto Eco, Foucault’s Pendulum

Back in the days when I still thought markets were driven by fundamentals I used to be a big fan of Don Coxe’s monthly commentaries. Don was at the epicenter of the commodity / BRIC narrative, and his commentaries were dense with historical context, pithy quotes, and compelling analysis. He was the progenitor of the concept of a ‘triple waterfall crash’, of which the prolonged bear market in Japanese equities was the prototype. He also coined the rule that investors should focus on areas of the market where, “those who know it best, love it least, because they’ve been disappointed most”. For him, this was the dominant driver of the commodity trade, as long-time CEOs of major mining and energy companies would be slow to bring on new production in the face of high current prices, because they would be terrified that the next secular bear market for those commodities would be just around the corner.

For me in the mid-naughts, Don was the “Guardian of the Trend”. And I was a full-on, card carrying member of this cult.

**Secular Cults**

In secular (that is, non-religious) cults, confidence in the narrative is fuelled primarily by “Confirmation Bias” and the “Illusion of Knowledge”. In other words, people search out as many facts as possible to support their narrative, and believe that more information increases the accuracy of their forecasts. To wit, I became an expert in fields that supported my world view.

I could quote all the relevant stats on Saudi’s oil fields and why we shouldn’t believe them. I knew the NPV of the major Canadian oil-sands companies based on reserves in the ground at a range of long-term oil price forecasts. I tracked the relative cost curve for on-shore vs. deep water drilling, as well as the lease prices for different classes of exploration and production platforms. I watched the crack spread and the term structure of crude futures. I watched Saudi CDS as a leading indicator of oil price movements. I read Dennis Gartman.

In addition, I knew the relative global reserves of potassium and phosphorus and the largest and lowest cost producers. I could cite all manner of statistics on global demographics, and the likely increase in meat consumption due to the rise of the emerging middle class, and implications for corn and oat prices. I watched Caterpillar, Joy Global, and Manitowoc stocks and listened to management calls about the state of construction and mine development in emerging economies. I understood the difference between laterite and magmatic sulphide nickel deposits, and their relative cost structure.

But those are just facts, and facts only serve to *support* the narrative; they have no power in themselves. The narrative is based on *faith*, and all new information is filtered through the prism of that faith. *Faith* in peak oil. *Faith* in China’s emerging middle class need for meat and infrastructure. *Faith* in scarcity. *Faith* in monetary profligacy and the inevitability of inflation. *Faith, faith, faith!*

**Ghosts of Bubbles Past and Present**

In each cycle, there are a handful of pied pipers diligently manning required posts as “Guardians of the Trend”, assuring us that the narrative is reality. In the “Greed is Good” era of the 1980s, the narrative was fed by such characters as Martin Zweig, Michael Milken, and Ivan Boesky, with the fictional “Gordon Gekko” as the archetype.

In the technology bubble the ‘animal spirits’ were lifted by such characters as Henry Blodget, Mary Meeker, Jack Grubman, and Frank Quattrone. During the emerging markets / commodity bubble the narrative was led by Jim O’Neill (who coined the term BRICs), Jim Rogers, Marc Faber, Peter Schiff, T. Boone Pickens, and Eric Sprott. The coincident housing bubble narrative was fostered by Louis Cavalier, Abby Joseph Cohen, and others.

The new new new narrative of central bank omnipotence has its own cast of “Guardians”. The leading cast members are Paul Krugman, Ben Bernanke, Janet Yellen, Shinzo Abe, Mario Draghi, and perhaps Mark Carney. David Tepper and his ilk are zealous acolytes. The members of the ‘passive posse’ (you know who you are) also play supporting roles by overwhelming protests of ‘asset inflation’, ‘expensive markets’ or ‘low future returns’ with chants of ‘efficient markets, efficient markets, efficient markets’. Paul Samuelson, who stated that markets are “micro efficient” but “macro inefficient”, is rolling over in his grave.

Or maybe the current narrative really is ‘the truth’, and the market’s tree really can grow to the sky. I no longer care either way.

**Breaking the Spell**

“One might be led to suspect that there were all sorts of things going on in the Universe which he or she did not thoroughly understand.”

― Kurt Vonnegut, Slaughterhouse-Five

The collapse of the emerging markets / commodity bubble was an intensely traumatic experience, and left a gaping vacuum in my understanding about how the world works. If I knew as much as I did about the markets I was involved with, and the gurus I followed knew as much as they knew, and everyone got the trade completely wrong in the end, what did that mean? I can now empathize (abstractly) with members of doomsday cults who forecast a ‘rapture’ which never arrives.

The months immediately following the Global Financial Crisis were some of the most challenging of my life. I had linked my value as an investment professional to my ability to predict markets based on superior knowledge and understanding. But my superior knowledge and understanding had not translated into better forecasting ability or investment performance. Therefore I had no value as an investment professional. Should I seek out a different career?

But no one I followed – in fact, no one I’d heard of – had been able to do any better. Sure, there were a few ‘gurus’ who nailed the bear market, and a much smaller number that nailed both the bear and the bounce. But their narratives and methods were inscrutable and unconvincing. And as the bull market matured, even these gurus quickly lost their prescience. Which way do you move when you have no direction?

**Tetlock’s Gift**

It was during this period of existential crisis in 2009 that I stumbled upon Philip Tetlock. Of course, this was only partly by accident. At any other previous point in my life I would have tripped over Tetlock, picked myself up and walked on as if nothing had happened. But at that particular time I was an empty vessel, waiting to be filled with a new understanding of the world. I was ready to *receive*.

For those who haven’t heard of Philip Tetlock, and who may be ready to embrace the terrifying (but liberating!) reality that he has validated with his decades long research, I encourage you to read this article, and watch this video.

In 1985 Tetlock, fascinated by his previous experience serving on political intelligence committees in the early 1980s, set out to discover just how accurate expert forecasters were in their predictions of future events. Over a span of almost 20 years, he interviewed 284 experts about their level of confidence that a certain outcome would come to pass. Forecasts were solicited across a wide variety of domains, including economics, politics, climate, military strategy, financial markets, legal opinions, and other complex domains with uncertain outcomes. In all, Tetlock accumulated an astounding 82,000 forecasts.

This represents an incredible body of evidence about expert judgment, and Tetlock’s analysis rendered several astounding conclusions:

- Expert forecasts were less well calibrated than one would expect from random guesses
- Aggregated forecasts were better than any individual forecasts, but were still worse than random guesses
- Experts who appeared in the media most regularly were the least accurate
- Experts with the most extreme views were also the least accurate
- Experts exhibited higher forecast calibration outside of their field of expertise
- Among all 284 experts, not one demonstrated forecast accuracy beyond random guesses

In short, experts would have delivered better forecasts by flipping coins. But there was a silver lining.

Tetlock also tracked some simple, *rules based statistical models* alongside the experts to see if these models would be competitive in terms of forecast calibration. He found that many simple models performed with substantially better calibration than the experts, and delivered accuracy well beyond random chance. For example, models that forecast that the next period would continue the recent trend worked well. Models that forecast a return to a long-term mean also performed well. Chalk one up for systematic momentum and value investors.

In any event, you may not be in a cognitive/emotional state where the facts presented above can be assimilated into your worldview. If you are a research analyst that is paid to make predictions about whether to buy or sell a stock at the current price, or an economist who is paid to forecast interest rates or oil prices, then these facts undermine your ability to make a living. Few people can operate for long with this level of cognitive dissonance, so you will necessarily ignore these conclusions.

Many readers are in a self-congratulatory mood since their ‘long and strong’ equity bets have paid off so handsomely over the past five years or so. Self-attribution bias prevents these readers from understanding that their success is due to luck, not skill. As such, they probably are not ready to receive Tetlock’s wisdom either. These readers may regret their hubris in the depths of the next crisis, when they are 25% to 50% poorer. C’est la vie.

If, however, like me in 2009, you are disenchanted with the analysts on TV, and in the newspaper, and at your firm, opining ad nauseam on topics they can’t possibly forecast, and with no accountability, then Tetlock’s wisdom might just be the right medicine at the right time. Who knows, perhaps five or six years down the road, you may find that you’ve reconstructed yourself in many delightfully unexpected ways, and produced some pretty neat new ideas based on concepts that actually work.

Of course, the market narrative exists whether you pay attention to it or not. But when you embrace the great unknown, you’re able to disengage and observe the mania for what it is: the Jungian collective unconscious acting to manifest its own destiny. It’s a movie playing out in real time in front of your eyes, with all the humour and drama and, eventually, tragedy that makes any great movie worth watching. And just as any faith requires sacrifice, most of these participants sacrifice a meaningful slice of their savings in order to avoid the physical pain of social exclusion. As such this drama, which is no more or less than the moment to moment expressions of faith by millions of market participants, fed by oscillating tantrums of greed and fear filtered through a grimy prism of amorphous identity and values, presents incredible opportunities for the enlightened few.


The post Your Alpha is My Beta appeared first on GestaltU.

**Incorrect casual use of the term alpha**

This complaint may stem from the statistician in me, but the casual use of the term alpha irritates me quite a bit. Returning to very basic regression techniques, the term alpha has a very specific meaning:

rp = α + β1 r1 + β2 r2 + β3 r3 + … + ε

Alpha is just one of the estimated statistics of a return attribution model. The validity of the regression outputs, whether parameter estimates such as alpha and the various betas, or risk estimates such as standard errors, depends on the model used to specify the return stream. Independent variables should be chosen such that the resulting error residuals cannot be meaningfully explained further by adding independent variables to the regression. In the most prevalent return attribution model, the typical one-factor CAPM model, returns are explained by a single independent variable: broad market returns.

Defining an appropriate return attribution model is necessary to estimate a manager’s alpha. I find it ironic that the term alpha is most frequently applied to a subset of asset managers, hedge funds, where defining the return attribution model is often hardest. Long-short equity managers can display non-constant beta as their net exposures change. Fixed income arbitrage managers typically display very non-normal return distribution patterns. Managed futures traders can capture negative coskewness versus equity markets that provides additional benefits beyond their standard return and risk profile. Calculating these managers’ alpha is a difficult task, if for no other reason than that specifying the “correct” return attribution model is problematic.

Consider the specific example of a hedge fund manager whose net exposure is not constant. In this case, a one-factor market model is not necessarily optimal, and other factors, such as the square of market returns, might need to be added to account for time-varying beta. If a manager makes significant use of options, the task of specifying a proper model becomes even harder. Also, consider a manager whose product specialty is volatility arbitrage, for whom an appropriate market benchmark may not be available. How then to estimate alpha?

I prefer using the term “value-add” as a generic catch-all for strategies that increment a portfolio’s value. Whether that incremental value is generated through true alpha, time-varying beta, short-beta strategies with low return drag, cheap optionality, negative coskewness to equity markets, or something else that cannot be estimated directly from a return attribution model, it saves me from having to misuse the term alpha.
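The one-factor attribution regression can be estimated directly with ordinary least squares. A minimal sketch in Python on synthetic data (the fund's 'true' alpha and beta here are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly data: a fund with beta 0.8 to the market and a
# 'true' monthly alpha of 0.2% (both made up for illustration)
n = 120
r_mkt = rng.normal(0.006, 0.04, size=n)
r_fund = 0.002 + 0.8 * r_mkt + rng.normal(0.0, 0.01, size=n)

# One-factor model: r_fund = alpha + beta * r_mkt + eps
X = np.column_stack([np.ones(n), r_mkt])
coef, *_ = np.linalg.lstsq(X, r_fund, rcond=None)
alpha, beta = coef
print(round(alpha, 4), round(beta, 2))
```

With real data you would substitute actual fund and benchmark return series, and add further factor columns to `X` to move beyond the one-factor model.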

Lars raises great questions about the relevance of alpha derived from a linear attribution model with Gaussian assumptions when a strategy may exhibit non-linear and/or non-Gaussian risk or payoff profiles. Unfortunately, this describes many classes of hedge funds. While this is true, his comments took me in a different direction altogether.

It’s interesting to contextualize alpha not just in terms of the factors that an experienced expert might consider, but rather in terms of what a *specific target* investor for a product might have knowledge of, and be able to access elsewhere at less cost. In this way, a less experienced investor might perceive a product which harnesses certain non-traditional beta exposures to have delivered ‘alpha’, or more broadly ‘value added’, where an experienced institutional quant with access to inexpensive non-traditional betas would assert that the product delivers little or no alpha whatsoever.

Let’s start with the simplest example: imagine a typical retail investor who invests through his bank branch. A non-specialist at the bank branch recommends a single-manager balanced domestic mutual fund, where the manager is active with the equity sleeve, exerting a value bias on the portfolio. The bond sleeve tracks the domestic bond aggregate. The fund charges a 1.5% fee.

Subsequently, the investor meets a more sophisticated Advisor and they briefly discuss his portfolio. The Advisor consults his firm’s software and determines the fund’s returns are completely explained by the bond aggregate index returns, domestic equity returns, and the Fama French (FF) value factor. In fact, after accounting for these factors, the mutual fund delivers -2% annualized alpha.

The Advisor suggests that the client move his money into his care, where he will preserve his exact asset allocation vis-a-vis stocks and bonds, but invest the bond component via a broad domestic bond ETF, and use a low-cost value-biased equity ETF for the equity sleeve. The Expense Ratio (ER) of the ETF portfolio is 0.1% per year, and the Advisor proposes to charge the client 0.9% per year on top, for a total of 1% per year in expenses. The Advisor, by identifying the underlying exposures of the client’s first fund, and engineering a solution to replicate those factors with lower cost, has generated 1% per year in alpha (1.5% mutual fund fee – 1% all-in Advisor fee + 0.5% by eliminating the negative mutual fund alpha).

At the client’s next annual review, the Advisor recommends that the client diversify half of his equities into international stocks, at a fee of 0.14%. An unbiased estimate of non-domestic equity returns would be similar to domestic returns, minus the 0.6 × 0.5 × (0.14% - 0.10%) = 0.012% increase in total portfolio fees. However, currency and geographic diversification are expected to lower portfolio volatility by 0.5% per year, so the result is similar returns with lower risk = higher risk-adjusted returns = higher value added = higher alpha.
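As a quick check on the fee arithmetic (we read the 0.6 factor as the portfolio's equity weight, an assumption, and 0.5 as the half of the equity sleeve moved abroad):

```python
equity_weight = 0.60      # assumed stock share of the portfolio
intl_share = 0.50         # half of the equity sleeve moved to international
fee_intl, fee_dom = 0.14, 0.10   # expense ratios, % per year

# Weighted increase in the blended expense ratio from the swap
fee_increase = equity_weight * intl_share * (fee_intl - fee_dom)
print(round(fee_increase, 3))  # 0.012 (% per year)
```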

After another year or so, the new Advisor discusses adding a second risk factor to the equity sleeve to complement the existing value tilt: a domestic momentum ETF with a fee of 0.15%. Based on the relatively low correlation between value and momentum tilts (keeping in mind they are both long domestic equity portfolios), the Advisor believes the new portfolio will deliver the same returns over the long-run, but diversification between value and momentum tilts will slightly reduce the portfolio volatility, by another 0.2%. Same returns with less risk = higher alpha.

At each stage, the incremental increase in returns and reduction in portfolio ‘beta’ (vis-a-vis the original fund) results in a higher ‘alpha’ for the client. Now obviously the actions that the Advisor took are not traditional sources of alpha – that is, they are not the result of traditional active bets – but they nevertheless add meaningful value to the client.

Now let’s extend the logic into a more traditional institutional discussion. Institutions generally apply attribution analysis for one or both of the following purposes. The two applications are obviously linked in process, but have substantially different objectives.

- To discover how well systematic risk factors explain portfolio returns over a sample period. For example, we might determine that a long-short equity manager derives some returns from idiosyncratic equity selection, some from the Fama French value factor, and some returns from time-varying beta. If we hired the manager for exposure to these factors, this would confirm our judgement. Otherwise it might prompt some questions for the manager about ‘style drift’ or some other such nonsense.
- To determine if a manager has delivered “value added”, or alpha. For example, perhaps the manager delivered excess returns, but we discover that the excess returns can be explained away by adding traditional Fama French equity factors to the regression. Since it is a simple and inexpensive matter to replicate these risk factor exposures through ‘passive’ allocations to these factors (using ETFs or DFA funds for example), it might be reasonable to discount this source of ‘value added’ for most investors, and trim the alpha estimate accordingly.

This should be pretty straightforward so far. Using a long-short equity mandate as our sandbox, we discussed how a manager’s returns might result from exposure to the FF factors, time-varying exposure to the market, and an idiosyncratic component called alpha. But now let’s get our hands dirty with some nuance.

Let’s assume the long-short manager has been laying on a derivative strategy with non-linear positive payoffs. Imagine as well that a wily quant suspects he knows the method that the manager is using, can replicate the return series from the derivative strategy, and includes this factor in his attribution model. Once this factor is added, the manager’s alpha is stripped away. While the quant may feel that there is no ‘value add’ in the derivative strategy because he can replicate it for cost, surely an average investor would have no way to gain exposure to such an exotic beta. As such, the average investor might perceive the strategy as ‘value added’, or ‘alpha’ while the quant would not.

OK, let’s back out the derivative strategy, and assume our long/short manager exhibits positive and significant alpha after a standard FF regression. In other words, the manager’s excess returns are not exclusively due to systematic (positive) exposure to market beta or standard equity factors, such as value, size, or momentum. Of course, since it is a ‘long/short’ strategy, the manager can theoretically add value by varying the portfolio’s aggregate exposure to the market itself. When he is net long, the strategy should exhibit positive beta risk, and when he is net short it should manifest negative beta risk. How might we determine if this time-varying beta risk explains portfolio returns?

Engel (1989) demonstrated how regressing portfolio returns on squared CAPM returns will tease out time-varying beta effects. So let’s assume that adding a squared CAPM beta return series to the attribution model explains away a majority of this ‘alpha’ source. Therefore, including this factor in the model increases the explanatory power (R²) of the model, and reduces the alpha estimate. But is this fair or relevant in the context of ‘value added’? After all, while we can say that the manager is adding value by varying CAPM beta exposure, we have not demonstrated how an investor might generate these excess returns in practice. I have yet to see a product that delivers the squared returns of CAPM beta, but please let me know if I’ve missed something.
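As a sketch of the idea (not Engel’s exact specification), we can simulate a hypothetical manager whose market exposure scales with the market move, and show that adding a squared-market factor to the regression strips the timing effect out of the intercept. All parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2520                                   # ~10 years of daily returns
mkt = rng.normal(0.0002, 0.01, n)          # simulated market excess returns

# Hypothetical manager: constant beta of 0.5 plus a timing component
# that loads on the squared market return (the Engel-style factor).
port = 0.5 * mkt + 5.0 * mkt**2 + rng.normal(0, 0.002, n)

X1 = np.column_stack([np.ones(n), mkt])          # CAPM only
X2 = np.column_stack([np.ones(n), mkt, mkt**2])  # CAPM + squared term

alpha_capm = np.linalg.lstsq(X1, port, rcond=None)[0][0]
alpha_full = np.linalg.lstsq(X2, port, rcond=None)[0][0]
# The squared factor absorbs the timing return, shrinking the intercept
# (alpha_full) toward zero relative to the CAPM-only alpha (alpha_capm).
```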

I submit that it’s useful to identify the time-varying beta decisions for attribution purposes. This source of returns may represent true “value add” or (dare I say) alpha, because it cannot (presumably) be inexpensively and passively replicated by the investor. To the extent an investor is experienced and sophisticated enough to identify factors which can inexpensively replicate the time-varying beta decisions (such as via bottom-up security selection, or top-down timing models), then, and only then, might the investor discount this source of ‘value added’.

So far we’ve discussed hypothetical examples, but a recent lively debate on APViewpoint is a great real-life case study. Larry Swedroe at Buckingham has long militated against traditional active management in favour of DFA-style low-cost factor investing. It took many by surprise, then, when Larry wrote a compelling argument for including a small allocation to AQR’s new risk premia fund (QSPIX) in traditional portfolios. After all, at first glance this fund is a major departure from Larry’s usual philosophy, with high fees, and leveraged long and short exposures to a wide variety of more exotic instruments. Thus ensued 100 short dissertations from a host of respected and thoughtful Advisors and managers on APViewpoint’s forum about why the fund’s leverage introduces LTCM-style risk; why the factor premia the fund purports to harvest cannot exist in the presence of efficient markets; and why the fund’s high fees present an insurmountable performance drag.

Notwithstanding these potentially legitimate issues, I’m uniquely interested in how one might view this fund in terms of alpha and beta. The fund’s strategy involves making pure market-neutral bets on well-documented factors, such as value, momentum, carry, and low beta, across a variety of liquid asset classes. In fact, AQR published a paper describing the strategy in great detail. Presumably even a low-level analyst with access to long-term return histories from the factors the fund has exposure to could explain away all of the fund’s returns. From this perspective then, the fund would deliver zero alpha. However, it is far easier to gather the return streams from these more ‘exotic’ factors than it is to operationalize a product to effectively harvest them. So for most investors, this product represents a strong potential source of ‘value add’.

The goal of this missive was to demonstrate that, when it comes to alpha, where you stand depends profoundly on where you sit. Different investors with varying levels of knowledge, experience, access, and operational expertise will interpret different products and strategies as delivering different magnitudes of value added. At each point, an investor may be theoretically ‘better off’ from adding even simple strategies to the mix, perhaps at lower fees, and even *after* a guiding Advisor extracts a reasonable fee on top. More experienced investors may be able to harness a broader array of risk premia directly, and thus be willing to pay for a smaller set of more exotic risk premia.

It turns out that ‘alpha’ is a remarkably personal statistic after all.

The post Your Alpha is My Beta appeared first on GestaltU.

The post Dow 20,000: Is 2015 the Year? appeared first on GestaltU.

But I digress. We don’t make predictions on this blog, but it is constructive to understand generally what the range of probable outcomes might be. Is our hero, Dr. Siegel, taking a brave stand against the bearish hordes, or is he making safe proclamations from behind a sturdy statistical moat? We aim to find out.

First, the low-hanging fruit. What is the unconditional probability that the Dow Jones Industrial Average, which currently sits at approximately 18,000, closes out 2015 above 20,000? Let’s assume that returns are normally distributed and *iid*. Next, let’s take long-term average (arithmetic) U.S. stock returns to be 5.3% per year (this is the average 12-month arithmetic price-only return to U.S. stocks from the Shiller worksheet – remember, index returns do not include dividends), with an annual standard deviation of 20%.

If the mean annual return to the price index is 5.3%, then the unbiased expected value of the Dow at the end of 2015 is 18,000 * 1.053 = 18,954. A finish at 20,000 would represent a return of 20,000/18,000 – 1 = 0.111 or 11.1%, which is 11.1% – 5.3% = 5.8% more than expected. Given the standard deviation of returns is 20%, this represents a 5.8/20 = 0.29 standard deviation event. We can now apply the cumulative normal distribution function to determine the probability of a positive 0.29 sd event. In Excel, it is

1 – NORM.S.DIST(0.29,TRUE) = 0.386, or 38.6%

So the unconditional probability that the Dow closes at 20,000 or greater at the close on the last trading day of 2015 is almost 40%. This is not quite a coin toss, but Jeremy is not exactly going out on a limb.
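The same calculation in Python, using only the standard library (the error function stands in for Excel’s NORM.S.DIST):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mean_ret, sd = 0.053, 0.20            # annual assumptions from the text
required = 20000 / 18000 - 1          # 11.1% return needed
z = (required - mean_ret) / sd        # ≈ 0.29 standard deviations
prob = 1 - norm_cdf(z)                # ≈ 0.386
```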

Keep in mind that stock market price returns *approximate* a geometric random process. They don’t just climb in a steady curve and close each day at a new high. Surely Jeremy would take credit for his “Dow 20,000” call if the index exceeds the magical 20,000 threshold *at any point* during the year, even if it doesn’t actually finish the year above this level. For simplicity, however, let’s just examine the probability that it *closes* above 20,000 on any trading day of the year; so we won’t take into account intra-day periods.

Recall that if the annualized return is 5.3%, then the expected return at the close on day 1 is (using a 252 trading day year):

1.053^(1/252)-1 = 0.0002, or 0.02%, with a standard deviation of 20% * sqrt(1/252), or 1.26%

Were the Dow to close at 20,000 on trading day 1, that would represent an 11.1% return in 1 day. Given the 1-day expected return is 0.02%, with a 1-day SD of 1.26%, this would be a (0.111 – 0.0002) / 0.0126 = 8.8 standard deviation event. The probability of a positive 8.8 sd event under a normal distribution is on the order of 10^-19. Essentially no chance.

But that’s just on day 1. What about on day 63, which is about 3 months into the year?

The expected return after 63 days is 1.053^(63/252)-1 = 1.3%, with a standard deviation of 20% * sqrt(63/252) = 10%. Were the Dow to have risen 11.1% to close at 20,000 on trading day 63 (about the end of March), that would represent a (0.111 – 0.013)/0.1 = 0.98 standard deviation event. The probability of a positive 0.98 standard deviation event is about 16.3%. Now we are talking about a 1 in 6 chance that the Dow hits 20,000 by the end of March, the same odds as throwing a 6 on a standard die.

The following chart was formed by performing essentially the same analysis at each daily period, and shows the probability that the Dow will meet or exceed 20,000 at the close of each sequential trading day of the year. We highlighted the 16.3% probability at a 3 month horizon described above for illustrative purposes.

Figure 1. Probability of Dow > 20,000 at each sequential trading day of 2015
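The per-day probabilities behind a chart like Figure 1 can be generated by looping the same calculation over each horizon, under the same normal, *iid* assumptions:

```python
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

annual_sd = 0.20
required = 20000 / 18000 - 1                  # 11.1%

probs = []
for day in range(1, 253):
    t = day / 252                             # fraction of the year elapsed
    exp_ret = 1.053 ** t - 1                  # expected return to that close
    sd = annual_sd * sqrt(t)                  # horizon-scaled volatility
    probs.append(1 - norm_cdf((required - exp_ret) / sd))

# probs[62] is the ~16.3% day-63 value highlighted in the text, and
# probs[-1] recovers the ~38.6% year-end probability.
```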

We now know the probability of the Dow closing above 20,000 on any given day, but we still haven’t answered the question, “What is the probability that the Dow closes at or above 20,000 at any time in 2015?” To answer this, first consider Figure 2, which shows just 20 of the virtually infinite number of possible paths for the Dow over the next year, given our mean return and standard deviation assumptions.

Figure 2. Sample paths for the Dow in 2015

By visual inspection we can see that a substantial portion of the potential paths in Figure 2 cross above 20,000 at some point during the year. We ran a Monte Carlo simulation of 1 million possible paths, and discovered that about 64% of paths would cause the index to rise above 20,000 at some point during the calendar year.
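A minimal version of that simulation, using the same daily mean and volatility assumptions (far fewer paths than the 1 million quoted, for speed):

```python
import random
from math import sqrt

random.seed(42)
n_paths, n_days = 10000, 252
start, target = 18000.0, 20000.0
mu_day = 1.053 ** (1 / 252) - 1       # daily expected return
sd_day = 0.20 / sqrt(252)             # daily standard deviation

touched = 0
for _ in range(n_paths):
    price = start
    for _ in range(n_days):
        price *= 1 + random.gauss(mu_day, sd_day)
        if price >= target:           # first close at or above 20,000
            touched += 1
            break

prob_touch = touched / n_paths        # close to the ~64% quoted above
```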

Particularly astute readers may recognize that the former problem, where we solved for the probability of a price exceeding a specific value at a certain point in time, is a problem of similar nature to that of solving for the value of a European call option, which can be exercised only at expiration. This problem has a known closed-form analytical solution. In contrast, the latter problem has elements that are similar to finding the value of an American call option, which can be exercised at any time up to and including expiration. This problem has no known closed-form solution, and must be solved numerically or by simulation, such as our Monte Carlo method.

It’s critical to understand the random element in stock market activity so that we don’t get so emotionally attached to silly milestones. There is a 64% chance that the media and the top 0.01% will be able to break out party hats and champagne this year to celebrate an arbitrary milestone in a poorly constructed index. Siegel isn’t making a bold statement; far from it. Rather, he is playing the (unconditional) odds. And that is precisely what you should do as an investor.

The question is: do you feel lucky? We can think of a few reasons why you shouldn’t feel so sanguine, and might humbly suggest a better way of thinking about markets anyway.


The post A Century of Generalized Momentum appeared first on GestaltU.

To this end, about two months ago we were honoured when Wouter J. Keller, CEO of Flex Capital and Professor Emeritus at Vrije Universiteit Amsterdam, invited us to collaborate on a paper exploring a new heuristic optimization, Elastic Asset Allocation, which follows from his previous work on Flexible Asset Allocation (FAA) and the more esoteric Modern Asset Allocation (MAA). These are excellent contributions to the existing literature on dynamic asset allocation, and complement our Adaptive Asset Allocation concept.

From the abstract of the new paper:

“In this paper we generalize [Flexible Asset Allocation] FAA, starting from a tactical version of Modern Portfolio Theory (MPT) proposed in Keller (2013). Instead of choosing assets in the portfolio by a weighted ordinal rank on R, V, and C as in FAA, our new methodology – called Elastic Asset Allocation (EAA) – uses a geometrical weighted average of the historical returns, volatilities and correlations, using elasticities as weights.”

We hope you enjoy the read. The full paper is below.

A Century of Generalized Momentum (Keller and Butler, SSRN id 2543979)


The post Measuring Tactical Alpha, Part 2 appeared first on GestaltU.

Figure 1. Performance comparison of Global Tactical Asset Allocation products vs. ETF Proxy Global Market Portfolio, Jun 1, 2011 – Nov 28, 2014

Figure 2. Performance comparison of global risk parity products vs. ETF Proxy Global Market Portfolio, Jun 1, 2011 – Nov 28, 2014

Analysis: GestaltU, Data from Yahoo Finance and Bloomberg

A few notes about these tables. First, where stats are labeled (Incep), they are calculated from June 2011, or the product’s inception if it launched subsequent to that date, through the end of November 2014. Second, CAGR numbers are annualized, except where a fund has been operating for less than 1 year. All risk-adjusted performance numbers are annualized from daily data, regardless of the length of track record (daily ratios are multiplied by sqrt(252)). Betas, alphas and t-scores are all since inception, and all relative metrics (IR, alpha, beta, t-scores) are relative to the Global Market Portfolio and based on daily observations.
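The annualization conventions in these notes can be reproduced on a hypothetical daily return series. The series below is simulated; only the scaling rules (compounding to a CAGR, sqrt(252) for volatility and daily ratios) come from the text:

```python
import random
from math import sqrt

random.seed(1)
# Hypothetical fund: ~3 years of daily returns, mean 3 bps, sd 60 bps.
daily = [random.gauss(0.0003, 0.006) for _ in range(756)]

n = len(daily)
mean_d = sum(daily) / n
sd_d = sqrt(sum((r - mean_d) ** 2 for r in daily) / (n - 1))

cagr = (1 + mean_d) ** 252 - 1        # annualized compound return
ann_vol = sd_d * sqrt(252)            # annualized volatility
sharpe = (mean_d / sd_d) * sqrt(252)  # daily ratio scaled by sqrt(252)
```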

So what story do these tables tell? Well, first off the Global Market Portfolio hasn’t been a tough bogey to beat in terms of raw returns over the past three years or so, with less than 6% annualized returns. For comparison, the S&P (SPY ETF) has returned over 16% annualized over the period, and a US balanced fund (Vanguard US Balanced ETF) has gained 11% per year. Bear in mind US markets represent over 30% of the global index, so international diversification has been quite a performance drag.

I know many of you with US-centric portfolios are patting yourselves on the back. Ain’t self-attribution bias grand? Make no mistake, you are US-centric because of home market bias, not superior forecasting abilities, but I will be the first to admit that it’s better to be lucky than smart. I can state with some confidence that US-centric investors are unlikely to experience the same relative success over the next three years. If that’s the case, what are you going to do about it?

In terms of returns relative to the GMP, GTAA funds are a mixed bag. The fund with the highest returns appears to be SMIDX, the SMI Dynamic Allocation fund, but this is somewhat of a red herring because the fund has less than 1/2 the operating history of most other funds. On a risk-adjusted basis, JP Morgan’s Efficiente (EFFE) mandate has delivered the highest performance, in terms of Sharpe, Sortino, and Omega over the entire observation period. More importantly, given its low beta and high alpha scores, EFFE has generated its returns with very little reliance on performance from the underlying indexes. This is a critical point, as funds with a high correlation to the GMP are vulnerable to a negative shift in performance when global markets turn at the end of this cycle.

Investor legend Rob Arnott’s GTAA behemoth, PAAIX, managed under the PIMCO banner, deserves an honourable mention. It also surpassed the GMP’s Sharpe ratio over the past few years, and delivered the second-highest alpha and second-lowest beta of any fund, despite lower absolute returns.

We included the Good Harbor Tactical Core US fund in our analysis, despite the fact that it is US focused, because it highlights the risk of trying to market time strictly between the stocks and bonds of one market. **This is the difference between market timing and GTAA: with market timing, you make just one bet.** We deal with this concept in more detail in our new paper (see below). In our testing, we’ve observed that market timing between stocks and bonds or stocks and cash is a much more difficult challenge than spreading bets across multiple asset classes, and Good Harbor’s unfortunate recent performance lends credence to our own findings.

Given higher average structural allocations to bonds in risk parity funds, products in this class have clearly benefitted from the global race to the bottom in long rates, as average Sharpe ratios are meaningfully higher than average GTAA Sharpe ratios. I strongly suspect this will reverse when the rate cycle finally turns (which admittedly could be quite a while). Setting aside QSPIX for a moment as a special case, note that Invesco’s Balanced Risk portfolio sports the highest Sharpe, Sortino, and Omega ratios over the past 3+ years, as well as the lowest beta and highest alpha. This is a large fund, with $10 billion in AUM according to Morningstar, yet it continues to deliver stellar returns year after year. Not for nothing, it has also generated the highest annualized returns over this recent period.

We mentioned QSPIX is a special case, and it is. This fund, managed by AQR’s esteemed Andrea Frazzini and Ronen Israel, is based on a concept described in a 2012 paper by Antti Ilmanen, Ronen Israel, and Tobias Moskowitz, entitled “Investing with Style: The Case for Style Investing” (currently behind AQR paywall). Antti Ilmanen is one of the greatest investment thinkers alive today, and his books are required reading for every aspiring asset allocator. The authors present compelling evidence of the magnitude, persistence, and structurally low correlations of the four primary sources of style premia: value, momentum, carry and ‘defensive’. Across all asset classes covered, the authors demonstrate that style premia correlations averaged -0.22, and ranged between -0.6 and +0.21 from 1990 – 2012. Long-term Sharpe ratios for style premia composites across all asset class buckets range from 0.9 for value to 1.37 for carry over the same period. In simulation, when normalized to a 10% volatility, a combination of style premia composites across all asset classes delivered a Sharpe of 2.52 before fees and expenses.

Of course, the authors are aware of the many frictions and pitfalls involved in implementing the strategy, so they included an analysis of the net historical performance after accounting for trading costs (Sharpe declines to 1.9); discounting for model overfitting (Sharpe declines to 0.98); and risk management and fees (Sharpe ratio declines to 0.85). This seems to me to be quite a conservative target (see Figure 5).

Obviously, given the low expected average correlation with traditional 60/40 portfolios, and the high expected Sharpe ratio, QSPIX should substantially improve overall portfolio Sharpe, even with small allocations. For example, a 10% allocation to QSPIX carved out of a 60/40 portfolio might raise overall Sharpe from 0.3 to 0.44, according to the authors.
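The arithmetic behind this kind of carve-out can be sketched with the standard two-asset Sharpe formula. The inputs below are illustrative assumptions (a 0.30-Sharpe 60/40, a 0.85-Sharpe sleeve, 10% volatilities, 0.1 correlation), not the authors’ exact figures:

```python
from math import sqrt

w = 0.10                              # carve-out weight
er_6040, er_sp = 0.03, 0.085          # excess returns implied by the Sharpes
vol = 0.10
corr = 0.10

port_er = (1 - w) * er_6040 + w * er_sp
port_var = ((1 - w) ** 2 + w ** 2 + 2 * (1 - w) * w * corr) * vol ** 2
port_sharpe = port_er / sqrt(port_var)
# With these inputs the blended Sharpe rises from 0.30 to about 0.39:
# the sleeve adds excess return while the low correlation damps volatility.
```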

Overall, I’d say the short snapshot of performance we’ve seen over the past year since inception would not cause me to reject the possibility that QSPIX will deliver against expectations. However, the fund may be mildly vulnerable to liquidity shocks, as it has a gross leverage ratio of 8x (!!), so it should not play the role of a tail hedge in portfolios. In my opinion, the best structural tail hedge is a good CTA fund.

So what can we conclude from our analysis? This article wasn’t meant to recommend, or point fingers, at any particular strategy, but rather to highlight how we might think about the performance of global allocation funds, and what observed performance features might make them attractive. Above all, before committing any capital to these products, we would focus our scrutiny on the process underlying the strategy. What factors do the managers believe are driving returns? What evidence do they have that their methodology is effective? We would want to see much longer trading histories, analyze performance in multiple trading regimes, and understand how the strategy might interact with other holdings in portfolios. Where a long-term live history isn’t available (or even if one is available), we would be keen to see simulations of historical performance using the same process, and understand all the ‘moving parts’ that might affect the character of the strategy.

That said, if we only have live returns to go on, we would focus on performance relative to the only true passive global benchmark, the GMP, rather than making comparisons with specific regional indexes. Specifically, we would seek to harvest as much true alpha as possible relative to the GMP, as strategies with high alphas are less reliant on strong global market performance to deliver returns. After all, aren’t we after diversification? Next we would look at overall risk metrics, especially volatility, but with one eye on drawdowns and beta. Only then would we start to care about absolute returns and Sharpe ratios.

One other metric, the Omega ratio, stands out as meaningful, since unlike all of the other performance metrics above, it makes no assumptions about the distribution of returns. The utility of Sharpe, Sortino, alpha, and beta all depends on the assumption of normally distributed returns, but Omega accounts for the fact that returns often stray far from normality, especially over shorter horizons. The formula for the Omega ratio looks fancy, but it’s actually easy to calculate. Since the Omega ratio reflects the relative probability of achieving returns above a minimum required return (MRR), we must first choose an MRR. We chose to use the risk-free rate, which is currently zero, and which makes our calculations really easy. But here is the general formula in Excel-friendly language.

Omega={SUM(IF(returns>MRR,returns-MRR))/(SUM(IF(returns<MRR,MRR-returns)))}

Note that the returns variable refers to the vector of returns, so this is an array formula. In order for Excel to calculate it, you must confirm the formula by holding down CTRL and SHIFT while pressing ENTER.
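The Excel array formula translates to a few lines of Python; the sample returns below are hypothetical:

```python
def omega_ratio(returns, mrr=0.0):
    """Omega: sum of gains above the MRR over sum of losses below it."""
    gains = sum(r - mrr for r in returns if r > mrr)
    losses = sum(mrr - r for r in returns if r < mrr)
    return gains / losses if losses else float("inf")

# Hypothetical daily returns, MRR = 0 as in the text.
rets = [0.02, -0.01, 0.015, -0.005, 0.01]
omega = omega_ratio(rets)             # 0.045 / 0.015 = 3.0
```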

In any event, you will note that on this measure, and relative to a 0% risk free rate over the period studied, GTAA funds compare favourably relative to the GMP, almost across the board. This suggests that, after accounting for higher moments of the return distributions, an investor would have a higher probability of achieving positive returns using GTAA than the GMP. An interesting observation indeed.

Overall, there are a few worthy examples of successful GTAA mandates and several risk parity products worth considering for active global diversification. I should also mention that Meb Faber’s Cambria has recently launched a very interesting new GTAA ETF, GMOM, based on newer additions to Meb’s ubiquitous paper, “A Quantitative Approach to Tactical Asset Allocation”. Well worth a look.

Lastly, we are excited to get our own GTAA track record audited so that we can add our own numbers to this list as we launch our new firm, ReSolve Asset Management, in the new year.

**In the meantime, we would encourage those who are interested in global allocation strategies to give our new Tactical Alpha paper a read. Again, this is a unique offer to our blog’s readers, since we’ve yet to distribute this widely. We believe it provides a strong argument for investors to consider a larger allocation to active asset allocation strategies in general. You’ll be granted immediate access to a pre-release copy here.**


The post Measuring Tactical Alpha, Part 1 appeared first on GestaltU.

*Grinold linked investment alpha and Information Ratio to the breadth of independent active bets in an investment universe with his Fundamental Law of Active Management. Breadth is often misinterpreted as the number of eligible securities in a manager’s investment universe, but this ignores the impact of correlation. When correlation is considered, a small universe of uncorrelated assets may explain more than half the breadth of a large stock universe. Given low historical correlations between global asset classes in comparison with individual securities in a market, we make the case that investors may be well served by increasing allocations to Tactical Alpha strategies in pursuit of higher Information Ratios. This hypothesis is validated by a novel theoretical analysis, and bolstered by two empirical examples applied to a global asset class universe and U.S. stock portfolios.*

**UPDATE: THE PAPER IS PUBLISHED! We would encourage those who are interested in global allocation strategies to give our new Tactical Alpha paper a read. We’ve yet to distribute this widely. We believe it provides a strong argument for investors to consider a larger allocation to active asset allocation strategies in general. You’ll be granted immediate access to a pre-release copy here.
**

In the meantime, it’s no secret that we are big fans of active asset allocation, which is sometimes called ‘tactical alpha’. Our enthusiasm stems from the following observations from our own research, and from other published sources:

1) Asset allocation dominates portfolio outcomes. From an empirical standpoint, Kaplan demonstrated that policy asset allocation explained, on average, 104% of institutional portfolio performance. In other words, portfolios would be better off, by about 4% (note: not 4 percentage points) by sticking to passive exposures for each reference portfolio sleeve than by allocating to active managers. From a theoretical standpoint, Staub and Singer demonstrated that asset allocation explains 65% of orthogonal portfolio breadth, while security selection accounts for just 35%.

2) Asset risk premia are extremely time-varying, such that asset classes can underperform vs. long-term means for extended periods of time, sometimes decades.

3) Asset returns and covariances exhibit persistence in the short-term which enables economically significant forecasting of one or both parameters over short and intermediate horizons. This means it’s possible to systematically alter allocations to asset classes through time to take advantage of time-varying premia. (See AAA whitepaper here)

Note that we are specifically interested in products which allocate to a broad universe of global asset classes, as domestic balanced funds are not really balanced at all.

A variety of firms have launched strategies to take advantage of dynamic risk (volatility or covariance) forecasts, dynamic return forecasts, or both. Quite loosely, we might call global asset allocation strategies that only rely on dynamic risk estimates ‘risk parity’ products, while strategies that incorporate short-term tactical shifts are Global Tactical Asset Allocation (GTAA) products. Some products, like Invesco’s Balanced Risk product, are hybrids, in that they are primarily risk parity strategies, but engage in mild tactical tilts as well.

The question is, how should we evaluate the performance of these strategies? It hardly makes sense to mark them against the S&P 500, as these funds are responsible for allocating across global assets, not just US stocks. Should a global asset allocation mandate be considered a failure because it fails to keep up with US stocks when they are the best performing global asset class? What about a domestic balanced fund? Again, it’s silly to benchmark a global mandate against a purely domestic asset universe.

To us, the only reasonable passive benchmark for a global asset allocation portfolio is the Global Market Portfolio, which we described in our post, A Global Market Benchmark with ETFs and Factor Tilts. You’ll find a version of the proposed benchmark in Figure 1., with one small change: we replaced IGOV and BNDX with BWX in order to have a sufficient history of daily returns for our analysis. This removes the allocation to international corporate bonds altogether, and replaces it with an allocation to global government bonds, but results should be pretty close. As it is, we could only go back to mid-2011, but it will suffice for illustrative purposes.

Figure 1. Global Market Portfolio proxy with ETFs

Figure 2. Performance of ETF proxy Global Market Portfolio

Source: GestaltU, data sourced from CSI data and Bloomberg

A short history, to be sure, and we are certainly not suggesting the performance is indicative of what anyone should expect over the long term. We could (and might) backtest the portfolio using Global Financial Data indexes, but it isn’t relevant to this analysis.

Our interest is in quantifying the degree of out- or under-performance observed in global allocation strategies relative to this true passive benchmark. There are a number of ways to measure this, of course. The simplest method would be to compare returns, but of course this metric tells us nothing about the likelihood that the returns were due purely to random chance. **For example, a strategy which delivered a total return of 40% over the past 3+ years, but with a volatility of 20%, could easily have just been lucky. After all, a strategy with an expected return of 0%, but a standard deviation of 20%, has a 95% range of returns of +68% / -68% over any random 3-year period.** In this context, the 40% total return to the strategy observed over 3 years (1.12^3 – 1) would not prompt a statistician to reject the hypothesis that the mean return is actually zero. In fact, there is a 15% chance that the true mean return of the strategy is actually negative, despite the apparent strength. Such is the counterintuitive nature of random processes.
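The 15% figure can be checked with a quick calculation, treating the annualized mean as the estimate and a zero-mean process as the null, under the same normal, *iid* assumptions:

```python
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

ann_mean = 1.40 ** (1 / 3) - 1        # 40% over 3 years ≈ 11.9% per year
se = 0.20 / sqrt(3)                   # standard error of a 3-year annual mean
z = ann_mean / se                     # ≈ 1.03 sd above a zero-mean null
p_negative = 1 - norm_cdf(z)          # ≈ 0.15 chance the true mean is negative
```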

The next most straightforward method is the Sharpe ratio, which describes the excess returns (above the risk free rate) per unit of risk, where risk is defined as volatility. At the very least, this ratio captures some of the randomness embedded in the returns process, so it is much less likely to be a function of good luck than raw returns. However, the Sharpe ratio is also not without some flaws. First, Sharpe ratios measured over short, or even intermediate-term periods on passive strategies rarely reflect the true nature of the strategy. That’s because asset returns *and* volatilities are highly non-stationary. That is, while an asset may exhibit very high returns with low volatility for a few years, this observation offers few clues about the true risk and return qualities of the asset. For example, the average 60-day volatility of returns for US stocks in the three years prior to 2008 was less than 15%, but in 2008 60-day realized volatility on the S&P 500 spiked above 80%.

Sharpe ratios are also problematic when used to evaluate relative performance across strategies in the short term. The current environment is a great case in point. The Vanguard US Balanced Fund ETF has a Sharpe ratio of 1.19 over the same period in which the Global Market Portfolio (GMP) delivered a Sharpe ratio of 0.69. Does this mean a US balanced fund is the most efficient portfolio? Or does it mean it has been a lucky period for US-centric investors? Almost certainly it is the latter.

For the reasons outlined above (and others beyond the scope of this article), the investment industry has largely eschewed absolute measures like returns and Sharpe ratios when quantifying the value of active strategies. Rather, sophisticated advisors are interested in the true measure of skill: alpha.

Alpha is perhaps the most misrepresented quantity in finance. It is most often cited as the absolute excess return above a benchmark index, so that if a US large-cap benchmark delivered 10% in a period, and a large-cap equity mutual fund delivered 12% in the same period, many people would say the manager had delivered 2% of alpha. This is (often egregiously) incorrect. The true definition of alpha is the excess return from a strategy that cannot be explained by the strategy’s sensitivity to an underlying benchmark (or other factors):

α = R_strategy − R_f − β × (R_benchmark − R_f)

The critical component is β, which is a function of the strategy’s correlation with the benchmark and its volatility relative to the benchmark’s: β = ρ × (σ_strategy / σ_benchmark). So a strategy’s beta is high if it is highly correlated with the benchmark *and* has relatively high volatility. Conversely, beta is low if the strategy has a low correlation with the benchmark *or* low relative volatility.

Beta is traditionally estimated by regressing the strategy’s returns on the benchmark’s returns, because this method, quite conveniently, also yields the alpha: alpha is simply the intercept term in the linear regression. Of course, there is a random component to alpha, as with all analysis of financial time series, so it helps to know alpha’s statistical significance. This is measured with a standard t-score, from which we can derive the probability that the strategy’s alpha is statistically distinguishable from zero. The t-score is a direct function of both the number of observations and the magnitude of alpha, so we can be more confident that alpha reflects skill rather than luck given a longer observation horizon or a larger alpha.
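
The regression described above can be sketched in a few lines. The data here is synthetic (a strategy constructed with beta 0.6 to the benchmark plus noise); in practice one would feed in actual daily strategy and benchmark return series:

```python
import numpy as np
from scipy import stats

# Synthetic daily returns for illustration: a strategy with true beta of 0.6
# to the benchmark plus independent noise (use real return series in practice).
rng = np.random.default_rng(42)
n = 756                                        # ~3 years of daily data
bench = rng.normal(0.0003, 0.01, n)
strat = 0.0001 + 0.6 * bench + rng.normal(0, 0.005, n)

# Regress strategy returns on benchmark returns:
# the slope is beta, the intercept is (daily) alpha.
res = stats.linregress(bench, strat)
beta, alpha = res.slope, res.intercept

# t-score of alpha: intercept divided by its standard error
resid = strat - (alpha + beta * bench)
s2 = resid @ resid / (n - 2)
se_alpha = np.sqrt(s2 * (1 / n + bench.mean() ** 2 / ((bench - bench.mean()) ** 2).sum()))
t_alpha = alpha / se_alpha
p_alpha = 2 * stats.t.sf(abs(t_alpha), df=n - 2)

# Beta is equivalently correlation times relative volatility
beta_check = np.corrcoef(bench, strat)[0, 1] * strat.std() / bench.std()
```

With only three years of daily data and a tiny true alpha, `p_alpha` will typically be unimpressive, which is the point: significance requires either a long horizon or a large alpha.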

Lastly, some might be concerned with a strategy’s information ratio (IR). Recall that the IR is also measured against a benchmark: it tracks the strategy’s excess return above the benchmark per unit of tracking error. It is essentially a relative Sharpe ratio, where the return series used in the calculation is the daily returns of the strategy minus the daily returns of the benchmark.
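
The relationship between the two ratios is easy to see in code. A minimal sketch, where the risk-free rate and the 252-day annualization factor are assumptions:

```python
import numpy as np

def sharpe_ratio(returns, rf_daily=0.0, periods=252):
    """Annualized excess return per unit of volatility."""
    excess = np.asarray(returns) - rf_daily
    return excess.mean() / excess.std(ddof=1) * np.sqrt(periods)

def information_ratio(strat, bench, periods=252):
    """Annualized active return per unit of tracking error: the IR is
    simply the Sharpe ratio of the strategy-minus-benchmark return series."""
    active = np.asarray(strat) - np.asarray(bench)
    return active.mean() / active.std(ddof=1) * np.sqrt(periods)
```

Because the IR is just the Sharpe ratio applied to active returns, everything said above about the instability of short-horizon Sharpe ratios applies to the IR as well.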

Many institutions that entertain adding global active asset allocation products focus narrowly on IR as their primary performance metric, but we are less sure of its utility. The institutions we have spoken with are interested in the IR of the active product relative to their reference portfolio, but this places a high degree of confidence in the reference portfolio as the optimal long-term asset allocation. As stated above, given the extreme time-varying nature of asset-class premiums, and the wide range of possible future returns from any given reference portfolio, it seems unwise to be overly concerned with tracking error against that portfolio. If the reference portfolio turns out to be sub-optimal, an active global asset allocation strategy will have been evaluated on its tracking error against an unpalatable benchmark. This strikes us as backwards. Nevertheless, it is worth examination.

In Measuring Tactical Alpha Part 2, we will continue to examine these performance measures and others, and look at how select Global Tactical Asset Allocation products stack up against the Global Market Portfolio. We’ll also make our new paper available. As we said, stay tuned.

The post Measuring Tactical Alpha, Part 1 appeared first on GestaltU.

Unfortunately, Dr. Ang and Dr. Raymond had less than an hour to talk, so they just touched on a few broad themes. Dr. Raymond moderated the session, and opened with a brief discussion of Dr. Ang’s case study on factor investing at the CPPIB in the early 2000s. Dr. Ang began his session by showing us a video he had made to market the book, which uses nutrients’ role in nutrition as the metaphor for factors’ role in investing. Clearly, a healthy diet requires consideration of the nutrient mix in the foods we eat, not just a superficial knowledge of the basic food groups. In the same way, thoughtful investing requires managers to ‘look through’ superficial asset class labels to the factor exposures that drive their risks and returns.

Dr. Ang used Private Equity as an example of how many investors misguidedly believe they are achieving diversification by investing in alternative asset classes. He demonstrated how CPPIB, for example, decomposed Private Equity (specifically LBO) into a levered equity position and a short credit position. For similar reasons, different classes of hedge funds can’t all be lumped together under a broad ‘alternatives’ banner, but rather their sources of returns must be decomposed to understand the true risk they contribute to a portfolio.

Dr. Ang went on to cover the now-ubiquitous equity market factors: value, size, and momentum, often marketed as ‘smart beta’ by the ever-ingenious financial marketing industry. He reinforced, as we have so many times on this blog (see here, here, and here for examples), that the majority of active manager returns can be explained by these factors (and of course, Fama and French are out with a new 5-factor model which supersedes the old 3-factor model). In fact, these factors explain over 100% of most managers’ returns (see Blake here and here, Crane and Crotter here, Fama here, Ferri and Benke here, Vanguard here, SPIVA here, etc.). In other words, most managers *destroy* value relative to what they could have harvested with systematic exposure to robust equity factors.

But I digress.

In discussing how to most effectively harvest equity factor bets with Dr. Ang after the seminar, he said he favoured a portable alpha framework, where market (beta) neutral factor portfolios are layered over ultra low-cost market-cap weighted stock and bond market exposures. He lamented the fact that there are no low cost market neutral factor ETFs available to investors (QuantShares charge ~1.4% net for their versions, which Dr. Ang felt was egregious).

I was curious to hear from Dr. Raymond and Dr. Ang about how well the CPPIB’s factor framework, which is based on fundamental (i.e. theoretical) financial and macroeconomic factor decompositions, predicts ex post factor attribution derived through time series regressions. The consensus from Don and Andrew was that, while liquid assets with regular pricing history are well explained by a factor framework, less liquid assets are not explained very well due to lagged pricing, smoothing, and sparse/non-existent data. I found this interesting, as they approach macroeconomic factors from a different perspective than we have proposed in the past (see discussion of clusters and principal portfolios in our article on Robust Risk Parity).
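
A time-series factor attribution of the kind discussed can be sketched as follows. The factor and fund returns here are synthetic stand-ins; real work would use actual factor return series (for example, from the Ken French data library):

```python
import numpy as np

# Synthetic illustration: a fund whose returns load on three factors
# (market, value, momentum) plus idiosyncratic noise.
rng = np.random.default_rng(1)
n = 504                                           # ~2 years of daily data
factors = rng.normal(0.0, 0.01, size=(n, 3))      # columns: MKT, VAL, MOM
true_loadings = np.array([0.9, 0.3, 0.2])
fund = factors @ true_loadings + rng.normal(0, 0.003, n)

# OLS with an intercept: the intercept is alpha, the slopes are factor loadings
X = np.column_stack([np.ones(n), factors])
coefs, *_ = np.linalg.lstsq(X, fund, rcond=None)
alpha, loadings = coefs[0], coefs[1:]

# Share of the fund's return variance explained by the factors
resid = fund - X @ coefs
r_squared = 1 - resid.var() / fund.var()
```

For a liquid fund with regular pricing, `r_squared` tends to be high, which is the result Don and Andrew described; for illiquid assets with lagged, smoothed, or sparse pricing, this regression breaks down.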

This dovetails with many of the discussions we’ve had with institutions over the past year or so about how they manage diversification through a factor lens. Senior members of investment committees seem open to factor approaches based on theoretical macro drivers, such as what we described here, but resistant to quantitative methods. This is a constant source of curiosity for us given the strong results we have observed from quantitative methods.

I’m only 8% of the way through Asset Management (per Kindle), but I can already highly recommend it for financial professionals and interested laymen alike. The book covers every major topic, and most minor ones, related to investment management, not just factors, and couches concepts in approachable case studies. It’s rare to encounter a text that achieves both breadth and depth in a sophisticated technical domain while remaining so accessible to non-specialists.

The post Factors: An Essential Part of Any Nutritious Portfolio appeared first on GestaltU.

Taxes & Wealth Management: Path Dependency in Financial Planning, Retirement Edition

The post Published: Path Dependency in Financial Planning, Retirement Edition appeared first on GestaltU.
