<?xml version='1.0' encoding='UTF-8'?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:blogger='http://schemas.google.com/blogger/2008' xmlns:georss='http://www.georss.org/georss' xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr='http://purl.org/syndication/thread/1.0'><id>tag:blogger.com,1999:blog-14484258</id><updated>2024-03-08T11:01:38.771-08:00</updated><title type='text'>The Art of Streetplay</title><subtitle type='html'>Thoughts on qualitative and quantitative finance. Will evolve towards deep value investing.</subtitle><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/posts/default'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default?alt=atom'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/'/><link rel='hub' href='http://pubsubhubbub.appspot.com/'/><link rel='next' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default?alt=atom&amp;start-index=26&amp;max-results=25'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><generator version='7.00' uri='http://www.blogger.com'>Blogger</generator><openSearch:totalResults>56</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><entry><id>tag:blogger.com,1999:blog-14484258.post-115146704814828178</id><published>2006-06-27T19:39:00.000-07:00</published><updated>2006-06-27T22:27:05.100-07:00</updated><title type='text'>Poking Holes in Bogle&#39;s Pro-Cap Weighting Rationale</title><content type='html'>Surprise surprise... 
John Bogle is putting down fundamental indexing in favor of, you guessed it, what has made him rich-- cap weighted passive indexing.  And he got Burton Malkiel to back him up and give him a sense of credibility-- not too different from WisdomTree getting Siegel on board. Must be right if they&#39;ve got an academic on board!&lt;br /&gt;&lt;br /&gt;I thought I&#39;d home in on a few aspects of his argument and then share some personal thoughts.&lt;br /&gt;&lt;br /&gt;Aspect #1:&lt;br /&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;&quot;&lt;/span&gt;&lt;font&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;First let us put to rest the canard that the remarkable success  of traditional market-weighted indexing rests on the notion that markets must be  efficient. Even if our stock markets were inefficient, &lt;span style=&quot;font-weight: bold;&quot;&gt;capitalization-weighted  indexing would still be -- must be -- an optimal investment strategy&lt;/span&gt;. All the  stocks in the market must be held by someone. Thus, investors as a whole must  earn the market return when that return is measured by a capitalization-weighted  total stock market index. We can not live in Garrison Keillor&#39;s Lake Wobegon,  where all the children are above average. For every investor who outperforms the  market, there must be another investor who underperforms. Beating the market, in  principle, must be a zero-sum game.&quot;&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;This does make some sense-- it is true that (1) the market return from one instant to the next is, technically, the return generated by a capitalization-weighted total stock market index.  It is also true that (2) beating the &quot;market&quot; is a zero-sum game.  If one makes the claim (as they did) that the market is inefficient, though, the flaw in the logic above is the inference that claims (1) and (2) imply cap weighting &lt;span style=&quot;font-style: italic;&quot;&gt;must&lt;/span&gt; be an optimal investment strategy. 
 If, for whatever reason, there are at times deviations from intrinsic value, and one is able to, probabilistically or otherwise, construct a strategy which takes advantage of mean reversion of stocks to their intrinsic value over time, then one could theoretically outperform the market (with a few more conditions).  Because beating the market return is zero-sum, yes, you would be earning a profit at the expense of another market participant, and yes, it is notoriously difficult to beat the market after fees over time.  The only thing I am saying here is it is a logical fallacy to say that given (1) and (2) are true, then even if the market is inefficient, cap weighting &lt;span style=&quot;font-style: italic;&quot;&gt;must&lt;/span&gt; be an optimal investment strategy-- it is NOT a logical fallacy, however, to make the deduction that cap weighting MAY be an optimal investment strategy.&lt;br /&gt;&lt;br /&gt;Aspect #2: Expenses-- Management Fees, Turnover, Taxes (Capacity)&lt;br /&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;&quot;&lt;/span&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Purveyors of fundamentally weighted indexes also tend to charge  management fees well above the typical index fund. While index funds also incur  expenses, they are available at costs below 10 basis points. The expense ratios  of publicly available fundamental index funds range from an average of 0.49%  (plus brokerage commissions) to 1.14% (plus a 3.75% sales load), plus an  undisclosed amount of portfolio turnover costs.&quot;&lt;br /&gt;&lt;br /&gt;&quot;&lt;/span&gt;&lt;font&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Fundamental weighting also fails to provide the tax efficiency of  market weighting.&quot;&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;Later in the article they delve into some of the conditions which would allow one to make the claim that investing in cap weighted indexes is the optimal investment strategy.   
First off I would say that again, they are definitely right. Cap weighted indexes have a bunch of natural advantages.  They are easy to construct, they require no turnover and hence no transaction costs and no manager who takes a fee for himself, and they are tax efficient.&lt;br /&gt;&lt;br /&gt;The question I ask again is, does this lead to the natural deduction that cap weighting MUST be the optimal strategy?  Perhaps I&#39;m wrong, but I don&#39;t believe so.  What it implies to me is the crux of the argument for active management-- as you deviate from investing in the market to increase your portfolio&#39;s allocation to a security you believe to be mispriced, you run up against a number of frictions: (1) all the time you are spending to figure out whether that security is, in fact, mispriced-- the cost of time (which you may or may not outsource to a money manager for a fee); (2) transaction costs to invest in that security; (3) tax inefficiencies; (4) other frictions (e.g. market impact).  These are real costs.&lt;br /&gt;&lt;br /&gt;So the fundamental question is: &lt;span style=&quot;font-weight: bold;&quot;&gt;in the face of probabilistic inefficiency&lt;/span&gt; (which is all Arnott and Siegel claim), &lt;span style=&quot;font-weight: bold;&quot;&gt;is &quot;market noise&quot; of a large enough magnitude and does it mean revert quickly enough for it to be worthwhile to incur the incremental costs necessary to generate those returns?&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;In all likelihood I am not saying anything which hasn&#39;t been said a million times.  
Arnott&#39;s paper in the FAJ allocated large chunks of space to the time series of the returns relative to the market returns, how &quot;market-like&quot; those returns were, what capacity was available to the trading strategy, the effect of the trading strategy on volatility, and the incremental costs involved assuming turnover at a certain rate.&lt;br /&gt;&lt;br /&gt;In other words, he was making an apples to apples after-transaction-cost comparison between his strategy and the market return, and he found that his trading strategy outperformed over time robustly enough that the probability of overfitting was minimal.  That is pretty valid-- this article applied no such rigor.  I don&#39;t blame it (this is the WSJ we&#39;re talking about), but nevertheless, it does not conclusively disprove the assertions made in Arnott&#39;s paper.&lt;br /&gt;&lt;br /&gt;&lt;font&gt;Aspect #3: Increase in Efficiency&lt;br /&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;&quot;&lt;/span&gt;&lt;font&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;We concede that there is some evidence, based on numbers compiled  by Ibbotson Associates, that long-run excess returns have been earned from  dividend-paying, &quot;value&quot; and small-cap stocks -- albeit returns that are  overstated by not taking into account management fees, operating expenses,  turnover costs and taxes. 
But to the extent that investors are persuaded by  these data, the premiums offered by such stocks may well now have been  &quot;arbitraged away&quot; in the stock market, as price-earnings multiples have become  extremely compressed.&quot;&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;font&gt;This is a valid point, not out of line with the logic in the &lt;font&gt;fundamental question: in the face of probabilistic inefficiency (which is all Arnott and Siegel claim), is &quot;market noise&quot; of a large enough magnitude and does it mean revert quickly enough for it to be worthwhile to incur the incremental costs necessary to generate those returns?&lt;br /&gt;&lt;font&gt;&lt;br /&gt;It isn&#39;t enough to make the claim that the market is noisy-- the noise must be large enough and mean reverting enough.  If, through the influx of hedge fund investment and everything else, the market is more efficient than it was, then perhaps even though a risk-reward favorable trading strategy existed in the past, we wouldn&#39;t see returns nearly as large going forward-- especially after fees.&lt;br /&gt;&lt;br /&gt;I have commented on the outperformance of value of late in &lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/09/commentary-on-trouble-with-value.html&quot;&gt;Commentary on The Trouble With Value&lt;/a&gt;, based on GMO&#39;s piece a while back.  It is true-- the run that value has had of late is now getting long in the tooth.  The outperformance of value relative to growth has averaged out to a certain level over time, and we are now well above that average.  Reversion to the mean would imply value may have a more difficult time going forward.&lt;br /&gt;&lt;br /&gt;Aspect #4: New paradigms don&#39;t tend to last&lt;br /&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;&lt;p&gt;&quot;We never know when reversion to the mean will come to the various  sectors of the stock market, but we do know that such changes in style  invariably occur. 
Before we too easily accept that fundamental indexing --  relying on style tilts toward dividends, &quot;value&quot; and smallness -- is the &quot;new  paradigm,&quot; we need a longer sense of history, as well as an appreciation that  capitalization-weighted indexing does not depend on efficient markets for its  usefulness.&lt;/p&gt; &lt;p&gt;While we have witnessed many &quot;new paradigms&quot; over the years, none  have persisted. The &quot;concept&quot; stocks of the Go-Go years in the 1960s came, and  went. So did the &quot;Nifty Fifty&quot; era that soon followed. The &quot;January Effect&quot; of  small-cap superiority came, and went. Option-income funds and &quot;Government Plus&quot;  funds came, and went. High-tech stocks and &quot;new economy&quot; funds came as well, and  the survivors remain far below their peaks. Intelligent investors should  approach with extreme caution any claim that a &quot;new paradigm&quot; is here to stay.  That&#39;s not the way financial markets work.&quot;&lt;/p&gt;&lt;/span&gt;This is theoretically a slightly different, broader slant from Aspect #3. This isn&#39;t necessarily making the claim that markets have secularly gotten more efficient in general.  It is making the broader claim that two things tend to happen, at different times and for a variety of reasons-- (1) fads develop and (2) systematic statistical patterns form.  Neither persists over time, and it is so unlikely that you will be able to know when they pop or decrease to statistical insignificance that it isn&#39;t worth the costs necessary to act on the information.  They give a bunch of examples of (1).  I would posit an example of (2) to be the incredibly large serial autocorrelation detected in the market indices by Andy Lo.  It really did exist.  
They pointed it out, people got excited and probably traded on it a bunch, probably made some good money, and then over time, it decreased in absolute value to the point that it is no longer profitable to trade on relative to sticking all that money in an index fund.&lt;br /&gt;&lt;br /&gt;Fair enough.  What is the &quot;truth&quot;?  Will the magnitude of this aberration decrease over time, as more people catch on, to the point that it isn&#39;t worth the incremental costs, or not?  You know, they very well could be right.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Personal Thoughts&lt;/span&gt;&lt;br /&gt;My belief is that this is less a statistical anomaly than the Andy Lo autocorrelation phenomenon was.  It is lower resolution, it takes longer to realize the abnormally positive returns, it requires patience and a smidge of contrarianism.  These are all qualities which would allow it to persist longer than other phenomena would.  The big irony of it all, however, is this-- if Siegel is able to convince enough investors that he is, in fact, on the right side of this debate, the influx of capital may itself cause the anomaly to disappear!  This stems from one unquestionable truth-- beating the market return is a zero sum game, and the market return is the return on the cap weighted total stock market index.  If the majority of investors believe they will beat the market return by investing in fundamental indexing, they will have to earn their above market return at the expense of other market participants-- but those market participants aren&#39;t anywhere to be had.  Those abnormal returns exist because the &quot;market&quot; has allocated funds in a particular way over the history of the stock market.  If the &quot;market&quot; were to no longer allocate funds that way, perhaps we would have the indirect benefit of an overall better functioning economic system, but directly, the market, as a whole, cannot escape the market return. 
&lt;span style=&quot;font-weight: bold;&quot;&gt;If everyone believes something to be true, you cannot earn abnormal returns off of it.&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;The other aspect which I personally grapple with is Aspect #3.  As trite as it may sound, we have seen a hell of a rally in value relative to growth.  The outperformance of value is now above the mean.  Has the influx of capital to professional money managers made the pricing of stocks relative to each other more efficient?  If so, the returns of an investment strategy which worked when the investing landscape was not riddled with value managers may not be applicable to the world we will see over the next 20 years.&lt;br /&gt;&lt;br /&gt;I have no doubt that the market is noisy, as Siegel puts it.  But that alone is not enough to make fundamental indexation &quot;work&quot;.  For it to &quot;work&quot;, there needs to be sufficient noise and mean reversion to compensate for the costs incurred.  From the point of view of someone today investing over the next 10 years, that is a difficult tradeoff for anyone to say definitively will go one way or the other, in my opinion.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Money Managers Have a Place in This World&lt;/span&gt;&lt;br /&gt;A final thought is in order on the topic of market efficiency and professional money managers.  They do have a place in this world.  Just think about it: if all money were invested in index funds, who would set the value of the individual stocks which comprise the S&amp;amp;P?  We need stock pickers!  More than that, they deserve compensation for providing efficiency to the price of stocks!  Without individuals estimating the intrinsic value of stocks, the market system breaks down, because its whole purpose in the paradigm of the financial markets is to allow companies to raise capital efficiently.  
If they did not do that, there would be no need for the stock market at all.&lt;br /&gt;&lt;br /&gt;The question is not whether they should exist or not-- the question is what is the just compensation they deserve relative to the amount of efficiency they can provide to the market.&lt;br /&gt;With all this talk of index investing, I get a good feeling inside knowing I might have a place in this world-- as an allocator of efficiency capital.  Great.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/115146704814828178/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/115146704814828178' title='5 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/115146704814828178'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/115146704814828178'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2006/06/poking-holes-in-bogles-pro-cap.html' title='Poking Holes in Bogle&#39;s Pro-Cap Weighting Rationale'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>5</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-115083054413245343</id><published>2006-06-20T10:41:00.000-07:00</published><updated>2006-06-20T18:23:43.416-07:00</updated><title type='text'>WisdomTree Update, June 20th 2006</title><content type='html'>Needless to say, a lot has happened since my last post, and since I first started writing about WisdomTree in &lt;a 
href=&quot;http://thelearningblog123.blogspot.com/2005/08/taking-look-at-index-development.html&quot;&gt;April 2005&lt;/a&gt;.   The 20 ETF&#39;s have officially &lt;a href=&quot;http://biz.yahoo.com/bw/060619/20060619005629.html?.v=1&quot;&gt;launched&lt;/a&gt;.  All trade on the NYSE under a variety of tickers- DTN, DLN, ..., all of which are listed on the now up-and-running &lt;a href=&quot;http://www.wisdomtree.com/wt_etfs.asp&quot;&gt;website &lt;/a&gt;they have. One of the commenters on this blog completely nailed the launch date. They have brought on board yet another BGI veteran, Bruce Lavine.&lt;br /&gt;&lt;br /&gt;Rather than spell out everything that is easily available to the public, it might be of value to analyze what is going on one level deeper.&lt;br /&gt;&lt;br /&gt;(1) &lt;span style=&quot;font-weight: bold;&quot;&gt;WSDT is leveraging its star power and university environment very effectively&lt;/span&gt;.  It has done this in 3 ways- 1) it has obtained its ETF licenses much more quickly than I thought would be possible, 2) it has gotten discounted or free advertising all over the WSJ, CNBC, on the floor of the NYSE, and elsewhere, and 3) it has gotten heavily discounted research and development aid from students at the University of Pennsylvania and Wharton, through a class offered called the &lt;a href=&quot;http://fap.wharton.upenn.edu/&quot;&gt;Wharton Field Challenge&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;This cost structure really doesn&#39;t need much capex at all to fuel itself.  The management team is also most likely compensating itself in a call option-type fashion more than anything else.   We will look at the economics later.&lt;br /&gt;&lt;br /&gt;(2) &lt;span style=&quot;font-weight: bold;&quot;&gt;The Expense Ratios seem low to me&lt;/span&gt;.  The expense ratios range from 28 to 58 bps, but the &quot;bread and butter&quot; funds, in my opinion, seem to be the Total Dividend Fund, which charges 28, and DIEFA, which charges 48.  
The rest are probably better looking on the backtests (as weighting to small caps increases, and as they squeeze for more yield), but I am unsure about their merits relative to what is currently in the marketplace.  As Luciano, their head of research &lt;a href=&quot;http://www.marketwatch.com/News/Story/Story.aspx?guid=%7B3D985A43%2D4591%2D4A96%2D8E9C%2D96F527F7A8B7%7D&amp;source=blq%2Fyhoo&amp;amp;dist=yhoo&amp;siteid=yhoo&quot;&gt;said himself&lt;/a&gt;, what is lacking in the marketplace today are broader, more representative indices which fill larger asset allocation needs.   What is not lacking are &quot;one-off&quot; products that may be seen as tricky, clever attempts to game the system... but a Small Cap Dividend Fund may fall into that category itself.   We will factor this into the economics later.&lt;br /&gt;&lt;br /&gt;(3) &lt;span style=&quot;font-weight: bold;&quot;&gt;They are 100% playing the &quot;fundamental indexation&quot; theme, which has been &lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/12/response-to-gavekals-indexation.html&quot;&gt;beaten&lt;/a&gt; &lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/12/taking-another-look-at-arnott-why-not.html&quot;&gt;to&lt;/a&gt; death &lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/07/indexation_14.html&quot;&gt;on&lt;/a&gt; this blog&lt;/span&gt;. Siegel mentioned it in his piece in the WSJ, and it has been mentioned many other times since.  As such, they are essentially piggy-backing off of a wave which truly originated with Bob Arnott, off of which 2 companies have already put out ETF&#39;s.  My hypothesis is that they started with a Dividend index, and not one of the other perhaps more &quot;expected&quot; fundamental metrics, because Siegel, their Director of Research, has already done quite a lot of work on dividends, which may mean lower licensing costs.  
In a &lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/09/ixdp-is-now-wisdomtree-investments.html&quot;&gt;prior post&lt;/a&gt; on this blog, I mentioned a study that he had done a while back, but concluded that the dividend space was too crowded for this to be a likely ETF candidate (oops).  My bet is they either won&#39;t have to pay a licensing fee, or the licensing fee is greatly reduced, because Siegel can claim that this is all simply an extension of prior work that he has done, which gives him a claim on said work.  If this is true, then he gets all the advertising and education benefits of the &quot;fundamental indexation&quot; wave-- which I am sure that he, Arnott, Steinhardt, and others in the pseudo-active ETF space now intend to drive into the heads of common investors around the globe-- without having to pay for it.  If it works, maybe he can release other indices based on other fundamental metrics later.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;The Economics of an Investment in WSDT&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;I talked about the probable cost structure for WSDT in prior posts.  The most specific discussion is in the oldest post-- basically their revenue model is the expense ratio.  Barclays charges something like 70 bps on a whole bunch of its indices, while the Spiders go down to something like 12 basis points.  WSDT seems to be in the middle.&lt;br /&gt;&lt;br /&gt;So you&#39;d make an assumption on what WSDT&#39;s &lt;span style=&quot;font-weight: bold;&quot;&gt;weighted average expense ratio&lt;/span&gt; is (if the split is 50/50 between the domestic and international flagship ETF&#39;s, that would imply 38 bps). Fees typically accrue daily, Monday through Friday, if PowerShares is any indication, so the cash flow is a very slow and steady function of the assets under management.  
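As a rough illustration (my own sketch, not anything from WisdomTree's filings), that revenue model fits in a few lines of Python; the 28 and 48 bps ratios come from above, and the 50/50 split is the assumption just mentioned:

```python
# Sketch of the expense-ratio revenue model described above.
# The 50/50 domestic/international split is an assumption from the post.

domestic_er_bps = 28      # Total Dividend Fund expense ratio (bps)
intl_er_bps = 48          # DIEFA expense ratio (bps)
split = 0.5               # assumed share of AUM in the domestic flagship

# blended (weighted average) expense ratio across the two flagships
blended_bps = split * domestic_er_bps + (1 - split) * intl_er_bps

def annual_revenue(aum_usd, er_bps=blended_bps):
    """Gross fee revenue on a given level of assets under management."""
    return aum_usd * er_bps / 10_000

print(blended_bps)            # 38.0
print(annual_revenue(1e9))    # $3.8mm per $1bn of AUM
```

The point of the sketch is just that the top line scales linearly with assets under management, which is why the asset-gathering question dominates everything else below.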
They may or may not have to pay a &lt;span style=&quot;font-weight: bold;&quot;&gt;license fee &lt;/span&gt;(variable cost), then they pay for &lt;span style=&quot;font-weight: bold;&quot;&gt;listing on the NYSE&lt;/span&gt; (fixed and variable cost), and they pay for the &lt;span style=&quot;font-weight: bold;&quot;&gt;traders &lt;/span&gt;who construct the portfolios, most likely tracking computer-generated output of what the portfolio should look like, with, say, a 5% leeway. They pay &lt;span style=&quot;font-weight: bold;&quot;&gt;transaction costs &lt;/span&gt;(variable cost).  They will also pay for a &lt;span style=&quot;font-weight: bold;&quot;&gt;sales force &lt;/span&gt;(pseudo-variable cost), through which they intend to open themselves up to new investment channels.  Other than that the biggest costs are for the &lt;span style=&quot;font-weight: bold;&quot;&gt;management team&lt;/span&gt;.  If you look at the pedigree of their management team (many guys who were heavily involved with the launch of the BGI suite), there is no way they are getting much in cash-- they are probably accepting a call option-type compensation package-- variable cost.  &lt;span style=&quot;font-weight: bold;&quot;&gt;Research &lt;/span&gt;is probably not expensive at all because of student help, and marketing is probably much cheaper because of their star management.  
Main other costs I would imagine are &lt;span style=&quot;font-weight: bold;&quot;&gt;consumer education&lt;/span&gt;, maintaining a &lt;span style=&quot;font-weight: bold;&quot;&gt;website &lt;/span&gt;and &lt;span style=&quot;font-weight: bold;&quot;&gt;logistical and administrative expenses&lt;/span&gt;.&lt;br /&gt;&lt;br /&gt;They are &quot;competing&quot; against a handful of other ETF&#39;s which already have fundamental indexation products on the market-- I&#39;ve talked about many of them on my blog, but they include 2 that were put out and are paying licensing fees to Arnott as well as the products put out by Powershares, now a sub of Amvescap. I know that WSDT intends to release a bunch of other non-dividend products (isn&#39;t focusing on being a dividend ETF co.), but I would be surprised if they were to sway too far from the fundamental indexation theme (aka piggybacking Arnott). &lt;br /&gt;&lt;br /&gt;The &lt;span style=&quot;font-weight: bold;&quot;&gt;key swing factors &lt;/span&gt;from my point of view are as follows:&lt;br /&gt;&lt;ol style=&quot;font-weight: bold;&quot;&gt;&lt;li&gt;How much of mutual fund and hedge fund assets will end up in the hands of ETF products, synthetically or directly, when ETFs represent ~$420bn in assets, mutual funds ~$8T, and hedge funds ~$1.25T?&lt;br /&gt;&lt;/li&gt;&lt;li&gt;Will Wisdom Tree win out over the host of other ETF products attacking the same market?&lt;/li&gt;&lt;/ol&gt;The driving force behind (1) is the sad fact that 80% of mutual funds underperform the market.  And by market I mean the S&amp;P.&lt;br /&gt;&lt;br /&gt;More broadly speaking, the driving force behind (1) is the sad fact that it is very difficult to beat the &quot;market&quot;, period.  It takes a lot of work.  And when you throw hedge funds into the equation, most of the hedge funds that do consistently outperform are either lucky or are not open to new investment.  Of the hedge funds that are open to new investment, a good proportion of them are probably receiving compensation that is not in line with their ability to generate risk adjusted returns.   Niederhoffer&#39;s Matador fell 30+% in the month of May alone. I am sure they are suffering from redemption issues.   There were a slew of other funds which have closed after the recent market weakness.  
And I am sure there are many other investors who are looking at these funds closing, looking at their own investments and scratching their heads at why they are paying so much themselves (200 basis points and 20% of profits) when their hedge fund investments, which were supposed to be resilient on the downside, have fallen far more than the market has.&lt;br /&gt;&lt;br /&gt;There is nothing new in the fact that mutual funds underperform.  Academic studies have been done, etc etc.  It boils down to one real question-- if 80% of mutual funds underperform the market while charging 150 basis points, and there are ETF&#39;s which have shown an ability to outperform the market over time, which have deep capacity for investment, and which charge 80% less than mutual funds, won&#39;t the current aggregate allocation of funds change?&lt;br /&gt;&lt;br /&gt;So there are reasonable arguments for individuals in both camps to perhaps consider ETF&#39;s in some shape or form.&lt;br /&gt;&lt;br /&gt;I will sound crazy for proposing the numbers below, but remember that I am looking at this from a 5 year perspective.  In 5 years, either the paradigm shifts, or this company is probabilistically dead.  I factor probabilistic death into the upside by setting, at the end, the probability that the paradigm shift does not occur.  Adjust the market size, costs, margins as you wish... 
but I would hope that the underlying model is more or less representative.&lt;br /&gt;&lt;br /&gt;The basic calculus-- if 25% of assets in funds right now are paying excessive, tax inefficient fees with inefficient portfolio construction, and come to the realization that they are doing so over the next 5 years, and if, in that period of time, (1) companies can put out the education necessary to inform the market and (2) companies can create the platforms which can provide investors easy, tax efficient access to these products, and (3) WSDT is able to get 20% of those assets, it will have around $500bn in assets under management.  At 38 bps, its top line is $1.83bn. With the SPY as a guide, transaction costs are probably around 12 bps for WSDT.  Licensing fees are a wildcard.  Sales commission and management expenses may be another 5 bps (or $242mm, split between ~10 hotshot (greedy) managers and a salesforce of maybe 30 highly successful guys), just to throw out a number.  The other costs will probably become more variable-- research maybe $1mm, listing probably cost them $200k per ETF initially plus maybe 1bp of ongoing costs, website + non-exec admin + consumer education maybe another $80mm.  Because IXDP emerged from a dead company, there may be some tax benefits, so perhaps slap on a 20% tax rate. &lt;span style=&quot;font-weight: bold;&quot;&gt;This implies recurring net profit in the upside case of around $700mm. &lt;/span&gt;That profit will be a bit cyclical but in general pretty high quality so let&#39;s slap a 15 multiple on it-- market cap of $10.5bn. Its market cap right now is $314mm, implying an &lt;span style=&quot;font-weight: bold;&quot;&gt;annualized return of 100% for 5 years&lt;/span&gt;.&lt;br /&gt;&lt;br /&gt;So what is the likelihood of this happening?  Assume, for a moment, that the outcome of this company is binary (probably not too far from the truth).  Say the probability that they meet this admittedly extremely lofty scenario is 10%.  
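That back-of-envelope can be checked in a few lines of Python; every input below is one of this post's assumptions, not a forecast:

```python
# Back-of-envelope check of the upside case sketched above.
# Every number here is an assumption from the post, not a forecast.

upside_profit = 700e6     # recurring net profit in the upside case
multiple = 15             # earnings multiple slapped on that profit
upside_mcap = upside_profit * multiple        # $10.5bn

current_mcap = 314e6      # market cap at the time of writing
years = 5
p_upside = 0.10           # probability the lofty scenario plays out

# annualized return if the upside case is realized (roughly 100%/yr)
upside_cagr = (upside_mcap / current_mcap) ** (1 / years) - 1

# binary-outcome expected value: 10% chance of the upside, ~0 otherwise
expected_mcap = p_upside * upside_mcap        # ~$1.05bn
expected_cagr = (expected_mcap / current_mcap) ** (1 / years) - 1

print(f"upside:   ${upside_mcap / 1e9:.1f}bn, {upside_cagr:.0%} annualized")
print(f"expected: ${expected_mcap / 1e9:.2f}bn, {expected_cagr:.0%} annualized")
```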
That implies the expected value of the future market cap is $1.1bn, implying an expected annualized return of 27% from here... with some serious volatility.&lt;br /&gt;&lt;br /&gt;Any thoughts would be appreciated.</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/115083054413245343/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/115083054413245343' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/115083054413245343'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/115083054413245343'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2006/06/wisdomtree-update-june-20th-2006.html' title='WisdomTree Update, June 20th 2006'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>4</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-114643173487722778</id><published>2006-04-30T13:40:00.000-07:00</published><updated>2006-04-30T17:29:37.353-07:00</updated><title type='text'>WisdomTree Update</title><content type='html'>I haven&#39;t posted in a while but I believe all that has been happening at WisdomTree merits a post.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://photos1.blogger.com/blogger/3390/1312/1600/WSDT.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 441px; height: 282px;&quot; src=&quot;http://photos1.blogger.com/blogger/3390/1312/320/WSDT.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;span style=&quot;font-weight: 
bold;&quot;&gt;The bottom line&lt;/span&gt;&lt;br /&gt;WisdomTree is focusing on dividend ETF&#39;s, and has filed to release 20, 6 of which are domestic and the other 14 international, based on the premise that stocks which pay dividends regularly tend to outperform the market on a risk-adjusted basis.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;How this plays into things I&#39;ve said in the past on this blog&lt;/span&gt;&lt;br /&gt;I mentioned in &lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/09/ixdp-is-now-wisdomtree-investments.html&quot;&gt;this post&lt;/a&gt; that Siegel did a dividend study, but that PowerShares and others had released more than enough different dividend products.  My conclusion was that it was unlikely that they would pursue dividend ETF&#39;s. Ironic, then, that the lion&#39;s share of their ETF&#39;s are indeed targeting dividends-- quite a crowded space.&lt;br /&gt;&lt;br /&gt;The driving point of &lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/09/wisdomtree-investments-september-9th.html&quot;&gt;this post&lt;/a&gt; was that they now have a nice, full bench of experienced professionals to smoothly bring them from idea to implementation.  This post as well as the aforementioned one also contrasted WSDT to PowerShares in this regard.  PS simply didn&#39;t have the management pedigree, even though their product offerings were solid enough. Interesting to see, then, that PS was acquired by Amvescap.  To me this makes some sense, although Amvescap is an interesting acquirer-- Amvescap could leverage its size to plug some of the holes that PS was unable to fill, while PS&#39;s core asset was the theoretical strength of its products.  
Through acquisition, Amvescap could use its marketing and distribution experience to add a respectable amount of incremental value to PS at little incremental cost to itself, to say nothing of any possible cash flow issues PS may have been subject to which could have cheapened the bid.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;WSDT&#39;s rationale makes sense&lt;/span&gt;&lt;br /&gt;1) One can manipulate earnings, but one cannot manipulate cash.&lt;br /&gt;2) Cash dividends represent a real, direct and immediate return to shareholders.  All else equal, if free cash flow is retained, one must make an underlying assumption on the company&#39;s ability to reinvest at a reasonable risk-adjusted rate.  Companies that simply pay out that cash flow require no such incremental assumption.&lt;br /&gt;3) Investors on the whole seem to care less about dividend yields than they perhaps should. When was the last time the dividend yield of a stock was a key component of your investment thesis on that stock?&lt;br /&gt;4) Dividends by their very nature lack volatility.  They produce returns uniformly over time in a very steady fashion.  Contrast this to a portfolio whose return is generated entirely off capital gains and one can see why this may outperform most notably on a &lt;span style=&quot;font-style: italic;&quot;&gt;risk-adjusted&lt;/span&gt; basis.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Some thoughts&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;1) As a startup, it makes some sense that WSDT is focusing itself on one particular investment methodology&lt;/span&gt;.  From a marketing point of view, this probably makes for a more unified, clear PR message-- dividend stocks tend to outperform.  
From a corporate identity point of view, it also makes WSDT more identifiable as a company-- &quot;Ah yeah, WisdomTree, the dividend ETF firm.&quot;  They may broaden themselves in the future, and proven performance in the dividend space probably won&#39;t confine them to the niche they are trying to carve for themselves as &quot;that dividend ETF firm&quot;.&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;2) The pedigree of WSDT&#39;s management team is both a blessing and a curse from a buyout point of view&lt;/span&gt;.  Maybe someone would consider buying this company out, but with a team consisting of superstars like Siegel and Steinhardt and ETF veterans like Morris, I would think their payoff profile would be better as a standalone entity, leveraging their identity in their marketing pitch.  In the event of a buyout, not only would they be diluting their equity stake, they would also be diluting their ability to leverage their high-profile identities.  How would Siegel and Steinhardt stand out if they were representing a handful among a sea of ETF&#39;s for a company like Barclay&#39;s?  There is no wow value to that.  There is wow value to saying superstars have started a firm focused on an underappreciated low-cost investment methodology, and have brought on board high-profile veterans of the ETF space to make it happen.  Finally, Steinhardt has a 60+% stake in this company.  To acquire this company would require his approval.  Would Steinhardt sell out at a time like this, before any blood has been shed?  Granted, the return he&#39;s generated on this company has been enormous.  
I just get the impression that he decided to get involved with WSDT because of its longer-term prospects, so it would seem unlikely that he would sell out in the 3rd inning.&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;3)&lt;/span&gt; &lt;span style=&quot;font-weight: bold;&quot;&gt;Why release 20 ETFs targeting dividends?&lt;/span&gt; Is each and every one of these dividend ETF&#39;s special, adding value to different investor groups with varying risk preferences?  In steady state, the shotgun approach is good at reaching investors across the spectrum, but until they reach steady state, they may be sacrificing the liquidity and perceived appeal of their flagship ETF.  I assume they have one or two flagships which they are expecting to be the most likely to perform exceedingly well, because that has historically been the case for other companies.  The others in that case may end up being a distraction.&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;4) What of industry concentration?&lt;/span&gt;  Certain industries tend to yield more than others.  I wish I could read the papers they have put out, as I believe at least one draft is public information, but I would assume that their pursuit of high-dividend stocks has concentrated their portfolios on particular sectors.  I would imagine that this has big implications for the nature of the risk their portfolios take on relative to alternative portfolios which are more broadly exposed with respect to industry.  When they release information on their portfolio methodology, one may want to take heed of how much industry risk they are exposed to.  Their portfolios may be more sensitive to external factors which impact certain industries as a whole, and that risk may be more difficult to diversify away.  
This may also make risk assessment more difficult, as broad industry trends are &quot;low resolution&quot; by nature.&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;5) Where do they propose the alpha comes from? &lt;/span&gt;If the market were to become arbitrage-free tomorrow, this portfolio shouldn&#39;t outperform other portfolios which are equally diversified with comparable cost efficiency.  Stocks with very high dividends deserve those high dividends because their managements don&#39;t feel growth prospects merit reinvestment in the business, and stocks with low dividends deserve low dividends because they can bring about greater long term shareholder value through reinvestment.  In this paradigm, it may be of value to ask the question &quot;where is this outperformance coming from?&quot;  This question by itself could be the subject of a very lengthy research project which I am sure has at least been thought of by our friends at WSDT, in addition to the host of research papers that I haven&#39;t read. I would offer the following &#39;outperformance buckets&#39;, categories from which this outperformance may be flowing.&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;    a) &lt;/span&gt;Investors tend to underappreciate dividends as a form of shareholder return relative to capital gains.&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;    b) &lt;/span&gt;Companies that pay out higher dividends on average tend to be managed by executives who grow shareholder value more than is recognized by the market as a whole.&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;    c) &lt;/span&gt;Stocks that pay dividends that are too high (i.e. 
companies that cannot in the long run support their high dividend) tend to be demanded more than is rational by &quot;yield hogs,&quot; generating more in the form of capital gains than is justified.&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;    d) &lt;/span&gt;Investors underappreciate the volatility benefits of a regular dividend payment relative to the allure of capital appreciation.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;From their Filings&lt;/span&gt;&lt;br /&gt;WSDT released a form N-1A, but not under WSDT&#39;s filings-- they filed under &quot;WisdomTree Trust&quot;-- on March 13th 2006.  They didn&#39;t give out any information regarding what their expense ratios will be, which is a bummer.   I also didn&#39;t see any information regarding the rebalancing methodology, which will to a large extent determine how much turnover to expect, which will obviously drive their expense ratio.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Managers&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Kurt Zyla and Todd Rose are managing the domestic funds.&lt;/li&gt;&lt;li&gt;Lloyd Buchanan and Robert Windsor are managing the international funds.&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;They are given a relatively small amount of flexibility as their mandate is primarily to track underlying indices, with only a small (5%) amount of leeway.   Therefore I assume they will spend the bulk of their time making sure to mimic the output of a process which is computerized and automated in nature.  They are Bank of New York guys who I have never heard of.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;WSDT the stock-- worth it? &lt;/span&gt;&lt;br /&gt;It is impressive how much progress they&#39;ve made over a relatively short period of time. A market cap of $284mm implies $14mm in earnings at a 20 multiple. 
$14mm in earnings at an expense ratio of 70 bps under a seemingly reasonable cost structure implies perhaps $5B in assets under management, given that this business isn&#39;t labor or asset intensive, and seemingly its only real variable costs are transaction costs and the call option-type compensation structure which I am sure exists for the current executive team.  PowerShares had $3.5B according to the latest data I was able to find, and they were out for 3 years plus.  The market is currently around $400B, which implies WSDT would need to grab a small but respectable portion of it.  I would expect the industry over this time to grow, feeding off weakness in the mutual fund space, which I assume to be around $6-7 trillion right now.  If even 5% of current mutual fund dollars were to be put into ETF funds through the addition of a retirement platform or an equivalent, that would nearly double the ETF market size.  There is a lot of room for the ETF space to grow.&lt;br /&gt;&lt;br /&gt;Also, the jury is still out on the ability of small startup ETF firms to survive in a market where the two largest managers account for 69% of ETF assets.  PowerShares effectively removed itself from the market by getting acquired.  I would assume the star power of the current management team and the BOD gives WSDT a valid shot at substituting for the marketing and distribution muscle of its larger competitors, which it cannot hope to match head-on, but this nevertheless remains to be seen.&lt;br /&gt;&lt;br /&gt;Given the binary nature of this stock&#39;s eventual outcome and my doubt that they will get bought out (perhaps a bad assumption), I would only consider this if it could be a triple in three years.  To be a winner it would have to do something like $15B in aggregate assets as a company in three years.  Can a fund which isn&#39;t attempting to track the S&amp;amp;P or some other broader index attract this kind of flow in that period of time?  
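The back-of-envelope from market cap to implied AUM can be sketched as follows. The 40% net margin used to bridge gross fees to earnings is my own assumption, chosen to be consistent with the "seemingly reasonable cost structure" above:

```python
def implied_aum(market_cap, pe_multiple, expense_ratio_bps, net_margin):
    """Back out the AUM needed to justify a market cap, given an earnings
    multiple, a gross expense ratio, and an assumed net margin on fees."""
    earnings = market_cap / pe_multiple
    revenue = earnings / net_margin
    return revenue / (expense_ratio_bps / 10_000)

current = implied_aum(284e6, 20, 70, 0.40)      # today's cap -> roughly $5bn
triple = implied_aum(3 * 284e6, 20, 70, 0.40)   # a triple -> roughly $15bn
print(f"implied AUM: ${current / 1e9:.1f}bn now, ${triple / 1e9:.1f}bn for a triple")
```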
It is possible, but I am not sure how probable that is.</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/114643173487722778/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/114643173487722778' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/114643173487722778'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/114643173487722778'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2006/04/wisdomtree-update.html' title='WisdomTree Update'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>4</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-113859721651636703</id><published>2006-01-29T20:56:00.000-08:00</published><updated>2006-01-29T21:00:16.536-08:00</updated><title type='text'>Proactive Forecasting</title><content type='html'>&lt;p&gt;Joel Greenblatt in both ‘You Can Be a Stock Market Genius’ and ‘The Little Book that Beats the Market’ follows a similar investment generation methodology.  He finds baskets of companies which, as a group, tend to outperform the market.  He then digs into those baskets with fundamental analysis to juice the returns further, with the knowledge that even if he were to add little or no value in the fundamental analysis process, he still has positive expected returns to back him up because of the risk/reward properties of the baskets being looked at.  
His “baskets” included spinoffs, partial spinoffs, stock recapitalizations, merger securities, and stocks which are cheap and good, where cheapness is defined by earnings yield and goodness is defined by return on invested capital. For one reason or another, all these groups taken as a whole outperform, so even if he were to pick stocks at random from these lists, under the right set of conditions, he would still outperform.&lt;br /&gt;&lt;br /&gt;So the way I see it, his methodology is able to take advantage of the benefits of both quantitative analysis and fundamental analysis. &lt;br /&gt;&lt;br /&gt;Quantitative analysis is very good at using large amounts of historical data to back-test things which we may intuitively believe to be true.  In this fashion, it can be very helpful as a check, and it can help us form a more reasonable expectation of the sort of returns we can expect from a particular situation over time. &lt;br /&gt;&lt;br /&gt;Fundamental analysis is less useful for back-testing because proper analysis requires so much time, but it can reach a depth of understanding which just isn’t possible with quantitative analysis. &lt;br /&gt;&lt;br /&gt;Greenblatt (and Pzena), by leveraging both, haven’t done too badly. &lt;br /&gt;&lt;br /&gt;The goal of their forecasting is to find pockets of companies which tend to outperform the market.  The thing which should be noted, though, is the fact that all their predictor variables are pre-visible—they are all things which are known with complete certainty at the time of investment.  For example, with the magic formula, the ttm ROIC and earnings yield are by definition already known—there is no uncertainty about whether those numbers are true.&lt;br /&gt;&lt;br /&gt;If the only goal of forecasting is to find groups of companies which tend to outperform, why should we constrain our predictor variables like this?  
I would introduce the notion of “cost of error in my predictor variables” as well as “predictability of my predictor variables.”  In this context, I would claim the following:&lt;br /&gt;&lt;br /&gt;{Usefulness of an input variable} = f({ability to know input variable}, {ability of input variable to predict output variable}, {cost of error if input variable’s actual value deviates from expected value}).&lt;br /&gt;&lt;br /&gt;What typical regressions assume, in this paradigm, is that the cost of not knowing an input variable’s actual value is infinite.  This forces us to make forecasts entirely on the basis of past data.  I could see some value, though, in including input variables which are forward looking—I might not know what their value will be exactly, but if I know that I can predict those input variables with a good level of confidence (through a lot of due diligence, for example), then those input variables could be a lot more useful than input variables which strictly look to the past.  While I&#39;m on the subject of typical regressions, I&#39;d also like to add that most people tend to get more than a little bit lazy in their data collection.  Why should I constrain myself to variables that I can easily get, or that I can easily quantify?  This misses out on the whole notion of cost.  There are a lot of &quot;fuzzy&quot; variables that could provide wonderful insights to any quant model, if only someone would just go and do a little more digging-- be a little more subjective-- and stop being so damn traditional for once.&lt;br /&gt;&lt;br /&gt;Anyways I digress.  &lt;/p&gt;&lt;p&gt;If I find an input variable which I think can predict future returns with a good level of confidence, but I’m not 100% sure what the input variable’s value will be (for example, next year’s earnings), then {ability to know input variable} decreases but {ability of input variable to predict returns} increases.  
As long as the cost of deviation is low, I could very well favor this input relative to historical inputs.&lt;br /&gt;&lt;br /&gt;This takes Greenblatt’s methodology one step further and completes it.  In this context I would run screens like the following: find all companies experiencing massive EBIT growth relative to their current EV/EBIT, and which subsequently are able to maintain EBIT growth over the next four quarters which is at least twice the level of the EV/EBIT. See how these companies have performed over the past 20 years.  Analyze the distribution for patterns—are there periods of time where this sort of methodology fell out of favor?  Do the losers exhibit a certain quality in a non-random way?  If these companies dramatically outperform the overall market, then I know that if I were to screen for companies with massive EBIT growth relative to EV/EBIT, and I was able to predict with a high degree of confidence that that EBIT growth would hold up for at least a year for some subset of this group, I would probably consider constructing a trading strategy around this.&lt;br /&gt;&lt;br /&gt;This again uses quant in addition to fundamental analysis, but brings them together much more tightly.  This could be useful for the idea generation process.  It requires a high level of discipline in the stock picking process, in a similar fashion to Greenblatt and Pzena. 
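A sketch of the screen described above; the tickers, field names, and numbers are invented purely for illustration (a real version would run over a historical fundamentals database and measure subsequent returns):

```python
def passes_screen(ev_ebit, fwd_ebit_growth_pct):
    """EBIT growth over the next four quarters must be at least twice the
    EV/EBIT multiple, e.g. a stock at 10x EV/EBIT needs 20%+ EBIT growth."""
    return fwd_ebit_growth_pct >= 2 * ev_ebit

universe = [
    {"ticker": "AAA", "ev_ebit": 8.0, "fwd_ebit_growth_pct": 25.0},
    {"ticker": "BBB", "ev_ebit": 15.0, "fwd_ebit_growth_pct": 20.0},
    {"ticker": "CCC", "ev_ebit": 4.0, "fwd_ebit_growth_pct": 10.0},
]
survivors = [c["ticker"] for c in universe
             if passes_screen(c["ev_ebit"], c["fwd_ebit_growth_pct"])]
print(survivors)  # ['AAA', 'CCC'] -- BBB's 20% growth falls short of 2 x 15
```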
It might make a few people some bucks.&lt;br /&gt;-Dan&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/113859721651636703/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/113859721651636703' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113859721651636703'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113859721651636703'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2006/01/proactive-forecasting.html' title='Proactive Forecasting'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-113576995280362013</id><published>2005-12-28T03:39:00.000-08:00</published><updated>2005-12-28T03:46:04.936-08:00</updated><title type='text'>Randomness Kills Simplicity, But Hey, That&#39;s Reality</title><content type='html'>“Things Should Be Made As Simple As Possible, But Not Any Simpler”&lt;br /&gt;&lt;br /&gt;I was actually feeling quite content as I boarded a bus to New York City today regarding some of the concluding thoughts in the last post about schema theory, and the ebb and flow from complexity to simplicity.  As usual of course I brought with me my pseudo-bible, “Fooled By Randomness,” to re-read it... again.  As has always been the case, it definitely put me in my place, so I thought I’d temper some of the optimism of the last post with a dose of what Taleb knows best—randomness.  
Thoughts are again welcome.&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;Inductive Reasoning &lt;/strong&gt;&lt;br /&gt;&lt;strong&gt;Schema Theory &lt;/strong&gt;and &lt;strong&gt;inductive reasoning &lt;/strong&gt;have a lot in common.  Inductive reasoning involves observing empirical data and searching for patterns of behavior which form the basis for hypotheses about the nature of reality.  In other words, it wades through large amounts of data and attempts to make sense of it all through causal links and unifying properties.  This is somewhat similar to how a financial analyst gathers a lot of information which at the start seems independent and distinct, but which over time (hopefully) comes together under some line of logic to form a complete understanding of the company and the nature of its business and dynamics.&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;Taleb’s Issue With Inductive Reasoning&lt;/strong&gt;&lt;br /&gt;Taleb took more than a few shots at inductive reasoning, and rightfully so.  Inductive reasoning’s conclusions are very sensitive to the properties of the process whose observations we analyze.  If some process we see is very well behaved, for example normally distributed, then the information gain we receive with each additional observation is a quantifiable amount which we know a priori.  But without the ability to see the future, how can we know that a process will continue to behave in a normal fashion going forward?  And when the distribution underlying the process becomes increasingly non-normal, we start to run into serious information gain problems. &lt;br /&gt;&lt;br /&gt;Taleb characterized this as playing Russian roulette using a gun with 1,000 chambers.  
If I were to play this Russian roulette with no knowledge of the number of chambers or the number of bullets in the gun, and it just so happened that after 500 trials I was still standing, I would probably start believing there were no bullets in the gun in the first place—induction might lead to a conclusion like this given my knowledge of guns and the number of trials, but this would obviously be wrong. &lt;br /&gt;&lt;br /&gt;The main point I’d like to drive home then is the fact that induction naturally and unavoidably simplifies the world.  Drawing positive conclusions from an incomplete data set is to some extent what we have to do if we want to do anything, and yet it naturally leads to error.  Knowing that such error is always possible and will probably lead to mis-evaluation requires an acceptance and appreciation of randomness.  And randomness is the bane of the simplification process I mentioned earlier.  The company no longer occupies one mental slot in my brain.  All those facts relating to the company which cannot be logically connected to my paradigm of “the company” must sit uncomfortably in other mental slots.  It’s inefficient, but it’s also how things are, so what can you do but accept it.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Conclusion&lt;/span&gt;&lt;br /&gt;So when Einstein said “things should be made as simple as possible, but not any simpler,” what I think he’s acknowledging is the fact that there is a natural limit to the amount of simplification which can occur.  Because of randomness, many things cannot and should not be connected if one’s goal is to obtain a rational view of reality for the purposes of forecasting.&lt;br /&gt;&lt;br /&gt;It’s a bit sad to believe that we can only truly know that which is false, and can never really know that which is true (Popper).  
We can only make our best guesses, over and over again, and hope that through personal risk management, the randomness which plagues the decisions we make based on those guesses isn’t so correlated that we suffer terribly.  This was Taleb&#39;s conclusion, to the best of my understanding.  It&#39;s not as if he ceased to make decisions.  He used statistical inference for all that it was worth to make investment decisions, but then made sure to separate that process from his weighting methodology to tailor his risk profile to his liking.&lt;br /&gt;&lt;br /&gt;Not too happy a blog post, sorry guys.&lt;br /&gt;-Danny</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/113576995280362013/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/113576995280362013' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113576995280362013'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113576995280362013'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/12/randomness-kills-simplicity-but-hey.html' title='Randomness Kills Simplicity, But Hey, That&#39;s Reality'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-113550435933126816</id><published>2005-12-25T01:52:00.000-08:00</published><updated>2005-12-28T00:06:13.886-08:00</updated><title type='text'>Taking Another Look at Arnott (Why Not?)</title><content type='html'>As long-time readers know, I am interested in indexation. I have a few thoughts on Arnott’s Fundamental Indexation. 
Before diving into the improvements though, I thought it might be of value to take a closer look at the theoretical underpinnings of his rationale, which I break up into a few parts.&lt;br /&gt;&lt;br /&gt;I&#39;d break things down into two claims. One claim is that the S&amp;P is inefficient because of cap weighting and the other is that Fundamental Indexing can do a better job. They seem to be theoretically somewhat orthogonal so this could help flesh things out. In the interim, I throw out some implications and a test I’d be interested to see.&lt;br /&gt;&lt;br /&gt;As usual if anyone has any feedback I would be highly interested to hear it. This is one of the more technical posts as a word of warning.&lt;strong&gt;&lt;br /&gt;&lt;br /&gt;The Inefficiency Claim&lt;/strong&gt;&lt;br /&gt;Inefficiency is pretty clear. As I see it, it&#39;s due to the fact that deviations from intrinsic value, net-net, &lt;strong&gt;tend to have zero expected value in terms of returns and mean revert&lt;/strong&gt;.&lt;br /&gt;&lt;br /&gt;Assume that each stock&#39;s price movement has one component due to changes in intrinsic value and another due to idiosyncratic noise. Hypothetically, if I knew a priori the future evolution of the changes in intrinsic value of all stocks, and I were to net all stock prices by my perfect estimates of intrinsic value, I would be left with a set of residuals whose &lt;strong&gt;returns &lt;/strong&gt;should have zero mean and a mean reverting tendency. If deviations are comparable in terms of returns and not dollar value, then small caps and large caps are equally likely to deviate by, say, 1% from intrinsic. In reality this might not be exactly the case, but I would expect it to be reasonably close. However the dollar value impact of the deviation will be much larger for the large cap relative to the small cap. 
On a period-by-period basis then, if I were to invest as if I were the S&amp;amp;P, I would systematically emphasize fluctuations of large cap stocks more than small cap stocks-- and rightly so if the variation were due to intrinsic value shifts. But if one were to run the simulation mentioned above, one would see that if all stocks&#39; prices were initially set to intrinsic value, the idiosyncratic variations force the market to over-emphasize the fluctuations of the stocks with the positive idiosyncratic residuals relative to a market which fluctuates entirely off of changes in intrinsic value. The mean reverting property of the idiosyncratic noise is then the killer, as, probabilistically speaking, it puts some drag on the over-emphasized stocks. Thus, the problem.&lt;br /&gt;&lt;br /&gt;Is there a flaw in that logic?&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;The Implications of S&amp;P Inefficiency&lt;/strong&gt;&lt;br /&gt;If the S&amp;amp;P is indeed inefficient, there are quite a few consequences. &quot;The market&quot; is supposed to be mean-variance efficient. We use it all the time in our finance courses as the basis behind the market risk premium. We use it to get our hands around the tradeoff between risk and expected return. All of this would basically be wrong. If the S&amp;P is indeed inefficient, we might have to raise the hurdle rate of our projects by a couple hundred basis points.&lt;br /&gt;&lt;br /&gt;Of course, it was wrong beforehand too. To be technical, the stock market is a pretty poor proxy for the &lt;em&gt;real &lt;/em&gt;market—the whole economy, with a lot of very particular nuances (Zack, I’m sure you explain this 10x better than I can). 
This just means that even when representing the stock market, the S&amp;amp;P does a poor job.&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;The Improvement Claim&lt;/strong&gt;&lt;br /&gt;The second claim is that Fundamental Indexing can do better.&lt;br /&gt;&lt;br /&gt;I can&#39;t be as confident, but I guess the rationale from my point of view goes something along these lines. All stocks in the S&amp;amp;P are supposed to be weighted by their intrinsic values. But if one makes the assumption that stocks deviate from intrinsic, the argument above implies that cap weighting, although a great proxy for company size, has problems. Why not try other proxies for company size which might not have the bias that cap weighting has? Income, for example, has a 95% return correlation with the S&amp;amp;P, almost as much capacity as the S&amp;amp;P, also tends to favor very large companies, and doesn&#39;t create markedly deviant industry allocations. It doesn&#39;t take on much more small-stock risk in the Fama-French sense, and rebalancing schemes can bring turnover down to the level of the S&amp;amp;P itself. It definitely has more F-F &quot;value&quot; to it, but it&#39;s not taking on more risk in terms of liquidity, interest rate regime or bull/bear market cycle. It&#39;s just trying to proxy for market size without bias, albeit with lower data resolution.&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;Tempering Expectations; Possible Improvement&lt;/strong&gt;&lt;br /&gt;While the above rationale is intuitively appealing, its improvement relative to the S&amp;amp;P is a function of the degree of mean reversion in the idiosyncratic noise. 
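(As a quick aside, the weighting mechanics themselves are simple to illustrate. The companies and figures below are entirely made up; "income" stands in for whatever fundamental metric one prefers.)

```python
import numpy as np

# Hypothetical data for five companies; all figures are invented.
income = np.array([12.0, 8.0, 30.0, 4.0, 21.0])         # trailing income
market_cap = np.array([300.0, 90.0, 450.0, 80.0, 500.0])

w_fund = income / income.sum()          # fundamental (income) weights
w_cap = market_cap / market_cap.sum()   # capitalization weights

# The ratio of the two weights for a stock is the aggregate multiple divided
# by the stock's own multiple, so low-multiple stocks get bumped up.
multiple = market_cap / income
agg_multiple = market_cap.sum() / income.sum()
print(np.round(w_fund, 3), np.round(w_cap, 3), round(agg_multiple, 2))
```

In other words, a fundamental weight differs from the cap weight exactly where a company's multiple deviates from the aggregate multiple, which is where the claimed bias removal comes from.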
If “irrational” price movements take years to correct themselves, then attempts to trade this noise, while positive in expected value, could take so long and suffer large enough drawdowns that the strategy could very well be infeasible to trade.&lt;br /&gt;&lt;br /&gt;That being said, Arnott himself showed that historically, a fundamentally indexed portfolio outperforms by approximately 200 basis points—a sizable margin considering the long back-testing period he considered.&lt;br /&gt;&lt;br /&gt;To take a closer look at the inefficiency, one can make a direct link between a fundamental metric and market cap. Take free cash flow (‘FCF’), for example, as our fundamental metric. Market cap (‘MC’) is simply FCF multiplied by MC/FCF, the FCF multiple. Looked at from this angle, the inefficiency implies mean reversion in the multiple-- MC/FCF, for example. But he never works out the statistics, from what I could see in his paper-- he simply turned to other stats which implied mean reversion somewhere. So I&#39;m thinking he could be missing some alpha which could be captured with a little additional complexity. If all companies are reduced to two numbers-- FCF and P/FCF, for example-- then weighting entirely on FCF implicitly assumes that FCF and the multiple are independent in their effect on forward returns, right? But I would think that a company which does 50M in FCF on a 20 multiple has a different payoff profile than a similar company which does 50M on a 3 multiple. The multiple implies something about the quality of the underlying earnings, and quality isn’t picked up by FCF on a standalone basis. 
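As a concrete illustration of this FCF-times-multiple decomposition, here is a sketch that cross-buckets stocks by FCF and by multiple and tabulates mean forward returns by year. The data below is synthetic; a real test would pull FCF, multiples, and forward returns from a fundamentals database.

```python
import numpy as np

# Bucket the market by FCF, then by P/FCF multiple, and populate a
# (year, FCF-bucket, multiple-bucket) cube of mean 1-year-forward returns.
rng = np.random.default_rng(1)
n_years, n_stocks, n_buckets = 10, 400, 5

fcf = rng.lognormal(3.0, 1.0, (n_years, n_stocks))      # synthetic FCF
mult = rng.lognormal(2.5, 0.5, (n_years, n_stocks))     # synthetic P/FCF
fwd_ret = rng.normal(0.08, 0.20, (n_years, n_stocks))   # synthetic forward returns

edges = np.linspace(0.0, 1.0, n_buckets + 1)[1:-1]      # inner quantile cuts
cube = np.full((n_years, n_buckets, n_buckets), np.nan)
for y in range(n_years):
    fcf_b = np.digitize(fcf[y], np.quantile(fcf[y], edges))
    mult_b = np.digitize(mult[y], np.quantile(mult[y], edges))
    for i in range(n_buckets):
        for j in range(n_buckets):
            mask = np.logical_and(fcf_b == i, mult_b == j)
            if mask.any():
                cube[y, i, j] = fwd_ret[y, mask].mean()

# If the multiple carried no information beyond FCF, the year-averaged
# cells would vary only across FCF buckets, up to noise.
print(np.nanmean(cube, axis=0).round(3))
```

On real data, systematic variation across the multiple dimension within an FCF bucket would be evidence against the independence assumption.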
While Arnott&#39;s methodology would definitely reallocate towards the lower multiple company relative to the higher multiple one, it might still be giving too little credit to the 20 multiple, because the market seems to be saying there is something about that FCF which is more valuable to investors.&lt;br /&gt;&lt;br /&gt;Has anyone seen a test done which buckets the market by FCF, then buckets again by multiple, creating a matrix of subgroupings, then populates that matrix with 1 year forward returns on a year by year basis? Collecting, say, 50 years of data would create a 3D matrix. With this one could test the claim that FCF and P/FCF are indeed independent of one another and see if there is any additional insight to be gained.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Closing Thought (Thanks Mike!)-- Schema Theory&lt;br /&gt;&lt;/span&gt;Mike over at TaylorTree posted a kind reference to a couple of my prior posts in &lt;a href=&quot;http://taylortree.com/2005/12/food-for-thought.html&quot;&gt;one of his last entries&lt;/a&gt;. I agree with him completely when he references the tradeoff between simplicity and complexity. I just thought I&#39;d chip in with a few thoughts which come from the intriguing field of cognitive development... and my favorite theory of how we acquire knowledge, &lt;span style=&quot;font-weight: bold;&quot;&gt;Schema Theory&lt;/span&gt;.&lt;br /&gt;&lt;br /&gt;Under schema theory, knowledge takes the form of a multitude of &#39;schema&#39;, which, broadly speaking, are mental representations of what all instances of something have in common. As an example, my &quot;house&quot; schema represents what is common to all houses that I&#39;ve been in. A house has parts, it&#39;s made of many things, it can be used for a variety of purposes, ... the list goes on. 
This is important because when I look at 1,000 houses, they aren&#39;t all completely different from each other-- they have broad similarities, and I have mental categories with which I can compare the houses.&lt;br /&gt;&lt;br /&gt;The transition from complex to simple and back to complex might at least partially be explained by how schema theory explains our learning process. Schema decompose complexity through &lt;span style=&quot;font-weight: bold;&quot;&gt;categorization&lt;/span&gt; and &lt;span style=&quot;font-weight: bold;&quot;&gt;abstraction&lt;/span&gt;. I&#39;m not big on terms, so I thought an example might make things a little more clear.&lt;br /&gt;&lt;br /&gt;When dealing with new experiences, we have a tendency to treat them as new and different from what we&#39;ve experienced in the past. For example, if someone were to throw me a ticker and have me look at its business, I would, at the onset, treat all new information I take in regarding the company as new. I would probably begin by gathering general information about the company-- business line, industry, margins, growth, etc. To a large extent, those data points I pick up, at least at the start, don&#39;t really have a place. They are just distinct facts. From a cognitive utilization point of view, this is really, really inefficient! I&#39;m being forced to use all of the slots I&#39;ve got up there in my brain just to digest all these little random tidbits of information!&lt;br /&gt;&lt;br /&gt;What happens over time though is that linkages form. The high margins of the company make sense because they&#39;ve been able to grow sales without any corresponding growth in assets, so much of the sales growth is simply going straight through to the bottom line. Assets aren&#39;t growing because their business does a remarkable job of flexing capacity. Their margins are staying up because of cost-related nuances. 
The magnitude of the sales growth is explainable by the geography the company resides in and the customers it does business with. All the facts-- the qualitative concepts and the hard numbers-- naturally fall into place, and instead of thinking of the company as 10,000 distinct data points all independent of one another (complexity), it is instead &quot;the company&quot; (total simplicity). All facts are entangled in a fact web which sticks so tightly to itself that they really are all one idea in your head. It goes from using all of our cognitive slots to one of them. And it does so by characterizing the company through the same analytical categories which were used to analyze the hundreds of other companies that have been looked at.&lt;br /&gt;&lt;br /&gt;In this context it kind of makes sense that things naturally ebb and flow from simple to complex. We are constantly trying to expand our intellectual borders, learning new tools, new ways of looking at things... but at the same time we are naturally also doing some heavy duty simplification. 
Making things complicated and simple are the pillars of cognitive development, and something which can be optimized on.</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/113550435933126816/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/113550435933126816' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113550435933126816'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113550435933126816'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/12/taking-another-look-at-arnott-why-not.html' title='Taking Another Look at Arnott (Why Not?)'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-113497740106380557</id><published>2005-12-18T22:45:00.000-08:00</published><updated>2005-12-19T00:08:31.063-08:00</updated><title type='text'>Responding to a comment; model building thoughts</title><content type='html'>I&#39;m not quite sure how but a &lt;a href=&quot;http://www.blogger.com/comment.g?blogID=14484258&amp;postID=112909962899664971&quot;&gt;comment&lt;/a&gt; by one of my readers somehow evaded me until now.  I thought it might be of value to post some thoughts in response.&lt;br /&gt;&lt;br /&gt;I would first of all emphasize how extremely basic that article is, and some of the major caveats which might be of value to consider.  I&#39;ll walk through it a little.&lt;br /&gt;&lt;br /&gt;&quot;Step 1. Decide on the time frame and the general strategy of the investment. 
This step is very important because it will dictate the type of stocks you buy.&quot;&lt;br /&gt;&lt;br /&gt;While this sounds stupidly simple, it&#39;s surprising how often it isn&#39;t adhered to, directly or indirectly.  As investors, we are subject to a wide range of psychological biases which cloud our ability to make rational investment decisions.  Quite a few of them revolve around irrational responses to unexpected events... which can have pretty dramatic repercussions on all aspects of our investment-making process, including time horizon.  I think a lot of this can be dealt with by thinking a little more deeply about the assumptions underlying the investments we make, which I wrote about a while back in &lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/07/assumption-management.html&quot;&gt;Assumptions Management&lt;/a&gt;. I can&#39;t stress enough how important I think it is to come to grips with the assumptions we are making when we invest in the companies we invest in-- if I fix my time horizon at six months, does that imply I&#39;m willing to stomach any and all price movement in between?  Why?  Might it be of value to consider risk re-evaluation points so that you can adapt to the changing underlying fundamentals of the companies you&#39;ve invested in?  If so, what is a logical structure for those re-evaluation points-- a function of time?  A function of the influx of news?  Quarterly, after the release of the latest K or Q?  Could one also deal with adaptive conditions by making shorter term forecasts so that, should negative residuals appear, you could go in and figure out why reality deviated from expectation? &lt;br /&gt;&lt;br /&gt;More fundamentally, why will my strategy do any better, risk-adjusted, than the market in the long run?  If I know that it can&#39;t, then why do I believe that it can outperform over the short run, and how do I know when to switch out because my system has stopped working?  
If I can&#39;t answer all these questions with some degree of confidence, I&#39;m probably making an uninformed investment decision.&lt;br /&gt;&lt;br /&gt;&quot;If you decide to be a short term investor, you would like to adhere to one of the following strategies:...&quot;&lt;br /&gt;&lt;br /&gt;This is somewhat silly.  First of all, &quot;momentum trading&quot; and &quot;contrarian strategy&quot; are two sides of the same coin.  The author is referring to autocorrelation trading, or the identification of companies whose price processes tend to exhibit serial autocorrelation in some form under a certain set of initial conditions.  Yes, autocorrelation can have a positive coefficient (trend following) or a negative one (mean reverting, aka contrarian).  Great.&lt;br /&gt;&lt;br /&gt;While a lot of short term trading is autocorrelation based, this isn&#39;t the case for all short term trading, unless one greatly expands one&#39;s definition of &quot;autocorrelation&quot; to include a lot more than past price history.  I know very little, but I can assure you that these are only two of many, many forms of short term trading.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&quot;Step 2. Conduct researches that give you a selection of stocks that is consistent to your investment time frame and strategy. There are numerous stock screeners on the web that can help you find stocks according to your needs.&quot;&lt;br /&gt;&lt;br /&gt;I am surprised that steps 1 and 2 have made no mention of historical backtesting of some form or another.  Again, I think this comes back to two of the pillars of investing IMHO-- risk exposure and investment assumptions.   Different investment methodologies expose us to different forms of risk.  Do we know exactly what risks we are exposing ourselves to, and is there a reason why we want to be exposed to them?  
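(A quick aside on the momentum/contrarian point above: both reduce to the sign of the serial autocorrelation of returns, which is easy to estimate. A toy check on simulated AR(1) return series; the coefficients are arbitrary.)

```python
import numpy as np

def lag1_autocorr(returns):
    # Sample lag-1 autocorrelation of a return series.
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    return float((r[1:] @ r[:-1]) / (r @ r))

rng = np.random.default_rng(2)
trending, reverting = [0.0], [0.0]
for _ in range(2000):
    # Positive coefficient: trend-following territory.
    trending.append(0.3 * trending[-1] + rng.normal(0.0, 0.01))
    # Negative coefficient: mean reversion, i.e. contrarian territory.
    reverting.append(-0.3 * reverting[-1] + rng.normal(0.0, 0.01))

print(lag1_autocorr(trending), lag1_autocorr(reverting))
```

The estimate recovers the sign of the coefficient, which is all that separates the two "strategies" in the article's framing.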
Even if I have run all the statistical tests in the world and all seemingly indicate that I am looking at a sustainable chunk of alpha, is there no way in some state of the world for that relationship to not hold in the future?&lt;br /&gt;&lt;br /&gt;Let&#39;s say I&#39;m looking at Greenblatt&#39;s magic formula.  It&#39;s generated some great returns on a risk adjusted basis over the past couple decades.  As an individual investor looking to invest my retirement savings for the next 20 years, what sort of things should be running through my head?  One possible concern is that given the increased exposure this strategy will get, a large following of individuals will pile on.  ETF&#39;s will be created which will do the same.  If the investment management business were to universally believe that this will generate alpha relative to straight investment in the S&amp;amp;P, then would the marginal buyer, the guy who gets in after everyone else has bought, expect to outperform as well?  One of the sad things about many if not all short term trading strategies is that they are only valuable if no one else knows about them and you are able to trade without leaving any footprints.&lt;br /&gt;&lt;br /&gt;But there are more concerns.  Let&#39;s say Greenblatt&#39;s formula became extremely popular.  At some point, would it be unheard of for companies to tailor their financials to attain a better ranking, even if this didn&#39;t accurately represent underlying financial reality?  While this sounds like a silly concern, I can guarantee you that hordes of companies are doing exactly this in some way, shape or form-- window dressing, tailored compensation schemes, ...&lt;br /&gt;&lt;br /&gt;&quot;Step 3. Once you have a list of stocks to buy, you would need to diversify them in a way that gives the greatest reward/risk ratio (The &lt;a href=&quot;http://www.cisiova.com/blogs/optimalportfolio/2005/10/sharpe-ratio.html&quot; rel=&quot;nofollow&quot;&gt;Sharpe Ratio&lt;/a&gt;). 
One way to do this is conduct a Markowitz analysis for your portfolio. The analysis is from the &lt;a href=&quot;http://www.cisiova.com/blogs/optimalportfolio/2005/10/modern-portfolio-theory-mpt.html&quot; rel=&quot;nofollow&quot;&gt;Modern Portfolio Theory&lt;/a&gt; and will give you the proportions of money you should allocate to each stock. This step is crucial because diversification is one of the free-lunches in the investment world.&quot;&lt;br /&gt;&lt;br /&gt;This is a whole other topic of its own and is typically used by quants.  Again, we are looking at risk... except now it&#39;s portfolio risk we&#39;re dealing with.  We all deal with portfolio management to varying degrees.  The only point I&#39;d make about Markowitz has to do with stability.  Markowitz optimality is only as good as the assumptions underlying that optimality.  Just because a portfolio historically had a certain risk/reward profile doesn&#39;t mean that it will continue to have that into the foreseeable future.  Thus stability becomes important as a measure of just how reliable the past data is.&lt;br /&gt;&lt;br /&gt;One insight about Markowitz portfolios, for example, is that historical risk happens to be a better indicator of future risk than historical return is of future return.  Knowing that, I would heavily discount a portfolio whose performance, as measured by some risk-adjusted return metric like Sharpe, is driven by return.  
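The Markowitz step itself is mechanical; here is a minimal numpy sketch on simulated data. It computes in-sample minimum-variance weights only, with no expected-return inputs, in keeping with the point that historical risk is the more trustworthy estimate. The return history and covariance below are invented.

```python
import numpy as np

# Simulate ten years of monthly returns for four assets and back out the
# in-sample minimum-variance portfolio: w proportional to inv(cov) times 1.
rng = np.random.default_rng(3)
true_cov = [[0.0040, 0.0008, 0.0010, 0.0002],
            [0.0008, 0.0025, 0.0006, 0.0001],
            [0.0010, 0.0006, 0.0060, 0.0003],
            [0.0002, 0.0001, 0.0003, 0.0010]]
rets = rng.multivariate_normal([0.010, 0.008, 0.012, 0.005], true_cov, size=120)

cov = np.cov(rets, rowvar=False)      # estimated covariance of returns
ones = np.ones(4)
w_minvar = np.linalg.solve(cov, ones)
w_minvar = w_minvar / w_minvar.sum()  # normalize weights to sum to one

w_equal = ones / 4.0
print(w_minvar.round(3),
      float(w_minvar @ cov @ w_minvar),   # min-variance portfolio variance
      float(w_equal @ cov @ w_equal))     # equal-weight variance, for reference
```

The caveat in the text applies in full: these weights are optimal only against the estimated covariance, so their out-of-sample value rests on that estimate being stable.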
I would also then perhaps choose portfolios which as a pre-condition jive with my risk tolerance, because I know I can trust historical risk to some degree, and then spend the bulk of my time assessing the expected return of the stocks in my portfolio.&lt;br /&gt;&lt;br /&gt;The most true line in that article, IMHO, is the one below:&lt;br /&gt;&lt;br /&gt;&quot;Stock picking is a very complicated process.&quot;&lt;br /&gt;&lt;br /&gt;Hope this helps.&lt;br /&gt;-Dan</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/113497740106380557/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/113497740106380557' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113497740106380557'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113497740106380557'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/12/responding-to-comment-model-building.html' title='Responding to a comment; model building thoughts'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-113493891137598295</id><published>2005-12-18T12:48:00.000-08:00</published><updated>2005-12-18T12:48:31.383-08:00</updated><title type='text'>Response to Gavekal&#39;s Indexation Article</title><content type='html'>Response to “&lt;strong&gt;How do we invest in this brave new world? 
Is indexing the answer?”&lt;/strong&gt; by Charles and Louis-Vincent Gave and Anatole Kaletsky&lt;br/&gt;&lt;br/&gt;Gavekal&#39;s article was quite thought provoking and very interesting, and revolved around a few central tenets.&amp;nbsp;&amp;nbsp;One tenet is that the existence and rise of indexation will lead to more inefficiency in the market rather than less.&amp;nbsp;&amp;nbsp;There were a few reasons.&amp;nbsp;&amp;nbsp;One reason was that the increased importance of the index made the index the reference point for risk.&amp;nbsp;&amp;nbsp;Another reason was that because the index is capitalization weighted, purchasing it leads one&#39;s portfolio to systematically overweight the stocks which, probabilistically speaking, are the most overvalued, and vice versa.&amp;nbsp;&amp;nbsp;It’s a relatively complicated article and there’s no way I can do it justice in one paragraph, so I recommend checking it out.&amp;nbsp;&amp;nbsp;It is available to the public for free over &lt;a href=&quot;http://www.investorsinsight.com/otb_va.aspx?EditionID=233&quot;&gt;here&lt;/a&gt;. &lt;br/&gt;&lt;br/&gt;I just had two questions. &lt;br/&gt;&lt;br/&gt;It seems an underlying assumption made in the paper is that “indexation” is and will be, primarily, investment in something which tracks a broader market segment like the S&amp;amp;P.&amp;nbsp;&amp;nbsp;However, might this be changing, albeit slowly, as more investors begin to see the investment appeal of ‘alternative’ indices?&amp;nbsp;&amp;nbsp;Arnott’s indexation methodology is being implemented at PowerShares and Allianz.&amp;nbsp;&amp;nbsp;Rydex is actively pursuing a number of innovative strategies.&amp;nbsp;&amp;nbsp;Its S&amp;amp;P equal weight has a tinge of Arnott in it and has outperformed materially for quite a while, arguably not only because of its relative overweighting of small caps but also perhaps because of nuances of its rebalancing. 
WisdomTree is supposedly coming to market with other innovative products.&amp;nbsp;&amp;nbsp;Greenblatt at the Conference last week made a very compelling case for a strategy which could very easily be converted into an ETF product, and I would be highly surprised if it isn’t.&amp;nbsp;&amp;nbsp;All of these products can be invested in by your typical individual investor.&amp;nbsp;&amp;nbsp;I agree that to some degree these are untested concepts, but they are an interesting trend in the ETF world, one which might have implications many years down the road for those investors who don’t have the time to be effective price choosers in the market mechanism.&amp;nbsp;&amp;nbsp;&lt;br/&gt;&lt;br/&gt;Secondly, at that point it could be of value to take a second look at how investment professionals add value to the market. They are compensated for efficiently pricing stocks so that capital is more properly allocated to those who need and deserve it.&amp;nbsp;&amp;nbsp;If alternative indexes like Greenblatt’s become very popular, it’s as if a sliver of alpha has left the system for a mere handful of basis points.&amp;nbsp;&amp;nbsp;It would put mutual funds in an awkward position because the relative importance of their benchmark is diminishing, and yet they are forced to remain chained to its fluctuations.&amp;nbsp;&amp;nbsp;This could have a crippling effect on them and their performance.&amp;nbsp;&amp;nbsp;And hedge funds would then have to find increasingly innovative ways to generate the alpha their investors are looking for in a seemingly shrunken opportunity set.&amp;nbsp;&amp;nbsp;Under this paradigm, money would begin to flow, probably slowly at the start, out of mutual funds, and the market would evolve into what I think it probably should be—ETFs and professional money managers (hedge funds), focused on absolute returns and on products up and down the risk spectrum, in all shapes and sizes, to accommodate the risk preferences and hedging needs and to better serve 
their investors.&amp;nbsp;&amp;nbsp;While mutual funds have a leg up organizationally and operationally because of their firm entrenchment in various retirement programs, I am optimistic that market efficiency will overcome this if an ETF which charges a mere handful of basis points can do everything the mutual fund does at a far cheaper price.&amp;nbsp;&amp;nbsp;We can somewhat see it coming already, with the rise of ETF’s which are much friendlier to employees at companies—ETF’s which are actively trying to gain ground in the various channels which have traditionally been dominated by mutual funds.&amp;nbsp;&amp;nbsp;It is in their best interests, and rightly so, to push as hard as they can into these channels to steal market share.&amp;nbsp;&amp;nbsp;Given their structure, I believe they have a good shot at succeeding.&amp;nbsp;&amp;nbsp; &lt;br/&gt;&amp;nbsp;&amp;nbsp;&lt;br/&gt;Once again, great article; I was just wondering what your thoughts are on these questions and thought they might be interesting. 
&lt;br/&gt;&lt;br/&gt;&lt;br/&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/113493891137598295/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/113493891137598295' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113493891137598295'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113493891137598295'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/12/response-to-gavekals-indexation.html' title='Response to Gavekal&#39;s Indexation Article'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-113493770986582825</id><published>2005-12-18T12:28:00.000-08:00</published><updated>2005-12-18T12:30:34.880-08:00</updated><title type='text'>Exclusive vs Inclusive; Thoughts on Model Building</title><content type='html'>This is a work in progress. I actually disagree with some of what I say below.  I think working on a trading desk, trying to piece things together as a trader would, is what pushes in-house models to be more complex than less.  Humans are great at capturing some forms of weird idiosyncrasy.  That naturally causes the models they would &#39;like&#39; to build in a very complicated direction. &lt;br /&gt;That being said, there is a world of difference between models which attempt to reach very specific conclusions and then expand, and models which start by making very sweeping, broad statements and over time becoming increasingly granular.  
Perhaps the market in question and the granularity of your data determine to some extent what the &quot;optimal&quot; problem solving paradigm is.&lt;br /&gt;&lt;strong&gt;&lt;br /&gt;Thoughts on Quantitative Trading&lt;/strong&gt;&lt;br /&gt;Being able to identify homogeneity in the financial markets seems to be a driving concept when doing quant trading.  Classification and homogeneity are two sides of the same coin-- if all securities in the financial markets were unique, all being driven by uncorrelated processes, it seems that you&#39;re shit out of luck.  There&#39;s no way to build a trading system which makes buy and sell recommendations based on CUSIP (well, perhaps... there is actually some homogeneity here too); we&#39;re in business when we can find ways to classify securities in some way.  A useful classification is able to identify things which tend to trade the same way-- and of course when two things trade the same way, we quants would call a proper long-short of the two a stationary, mean reverting process (this, by the way, is the essence behind cointegration-optimal hedging and indexing). &lt;br /&gt;&lt;br /&gt;So let&#39;s assume for a moment that the goal is identifying homogeneity in some way, shape or form in the financial markets.  Where the hell do you begin?  I believe you begin by making the decision of whether or not to adopt an &lt;em&gt;inclusive &lt;/em&gt;or &lt;em&gt;exclusive &lt;/em&gt;paradigm.&lt;br /&gt;&lt;br /&gt;The inclusive paradigm, which seems to be the most popular (perhaps because it relies on the least granular information?), is to identify very broad trends in the market.  For example, there may be tens of thousands of stocks trading right now, but if I were to bucket them into capitalization-based deciles, trends begin to form when looking at one-year-forward expected returns.  In other words, broad-based homogeneity begins to surface.  
At that point, we may attempt to identify what we consider to be &quot;the next best classifier,&quot; which would then split the deciles into subdeciles, each of which is then even more homogeneous.  I bet a lot of people have made good money adopting this paradigm, and to be honest, it&#39;s the paradigm I personally have had the most experience with up until this point.&lt;br /&gt;&lt;br /&gt;But inclusive classification has many downsides which aren&#39;t entirely obvious.  First of all, the sometimes extreme level of broadness makes it all the more difficult to identify which classifier is indeed the &#39;best&#39;.  Second of all, inclusive classifications tend to carry with them longer time horizons, which aren&#39;t necessarily able to be traded on by desks or funds which need strong enough mean reversion to ensure them a decent probability of success over shorter time intervals.  That being said, there are some serious benefits to a proper long-short-based inclusive classification trading strategy.  Most notably, as long as one is dealing with securities that have less dimensionality—less complexity—than others, the value of this paradigm IMHO improves dramatically.  The reason is that there is so little one then needs to control for.  It makes some sense, then, that this seems to be the sort of paradigm from which most ETFs have been created.  They strip away idiosyncratic risk as much as possible, they can carry with them lower transactions costs, and they retain the ability to expose you broadly to the form of risk you’d like to be exposed to.&lt;br /&gt;&lt;br /&gt;But the same isn&#39;t really true of other forms of securities.  Most securities, in fact, are extremely complex when you think about it.  Take municipal bonds, for example.  While it may be conceivable to construct a broad trading strategy around municipals, a ton of polluting factors makes things more difficult.  
First of all, there is the issue of liquidity (this actually exists with equities as well).  Two securities may look the same and be structured in the same fashion, but if one happens to be less liquid than the other, the more liquid security in an efficient market should command some sort of a premium.  This would then require quantifying the bid-ask spread.  But that is a classification nightmare in and of itself.  Next take the fact that bonds can be issued in any number of states and have all sorts of varying call provisions, bond types (i.e. revenue, GO, double barrel, water and sewer), credit ratings, insurance, and so on.  It&#39;s a fixed income instrument, but it has quite a few idiosyncratic elements.  Broad categorizations inevitably fall into the trap of being too general. &lt;br /&gt;&lt;br /&gt;So rather than pursue the inclusive paradigm, the paradigm then becomes that of exclusion.  That is, find on some truly granular level those securities which tend to be homogeneous in some fashion.  Then (as long as your dataset is granular enough), peel off the layers of idiosyncrasy from your generic set to other sets, quantifying the various credit spreads which should be applied relative to your reference rate (in the case of municipals, the municipal curve).&lt;br /&gt;It&#39;s interesting that these paradigms are so vastly different from one another. &lt;br /&gt;It&#39;s also interesting to contrast these lines of thought with that of value investing.  Value investing seems to thrive on the idiosyncrasy of individual stocks.  And yet that is what in some ways kills quant strategies.&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;Thoughts on Implementation of an Exclusive Trading Model&lt;/strong&gt;&lt;br /&gt;The question which inevitably pops up is how you actually implement an exclusive model.  There may be some theory which is more established, but I think I&#39;ve come up with a decent work-around.  First of all, your dataset will of course have to be reasonably large.  
Even then, the question becomes how one can create a truly homogeneous set of securities when securities have so many differentiating characteristics.&lt;br /&gt;&lt;br /&gt;Well, how about this: find the largest group of securities with a reasonable sample size that is as homogeneous as you can possibly make it.  I&#39;d call it the path of smallest descent.  Let&#39;s say you&#39;ve got a humongous database and you query for data through a program (i.e. SQL).  Then scan through all of your variables and identify the one which, when fixed, leads to the smallest decrease in securities.  Then do that again.  And again.  And so on until you are left with the biggest possible generic and homogeneous set of securities you can find.  If you have exhausted all of your variables and you still have a good sample size from which you can get statistically significant insights, good for you. Typically that&#39;s not possible if your dataset is granular enough, in which case things get uglier.  You start relaxing some of the fixations.  You allow for more than one moving part at a time.  But if this is the case, then you have a new objective: relax the fixations which least pollute the inferences you want to make.  If you want to examine the behavior of 20 year bonds, for example, you might want to consider making that a range from 19 to 21.  Or at the very least, if you want to make an inference on how variable A affects yield, and you need to let one other variable float, it would probably be best if that variable didn&#39;t have any sort of systematic relationship to variable A.  That way, on average, your inference on variable A should still be correct.&lt;br /&gt;&lt;br /&gt;That&#39;s just a start.  The guiding theme is to make sure that you are making clean inferences.  Clean inferences come about when all polluting factors are held constant.  
So once you reach whatever conclusions you wanted to reach with your relatively small generic set, expand that set by allowing a new parameter to vary, then solve for how that new parameter affects your system.  And so on.  It&#39;s an iterative process which takes a long time.  It might not be the best way to go about trading, but it is capable of using your entire dataset and it&#39;s highly specific.&lt;br /&gt;&lt;br /&gt;The methodology above is interesting but not always useful, and probably doesn’t jibe well at all with how the typical value investor thinks about investments.  The way I see it, we have a sort of mental playbook which we cycle through when analyzing a stock.  Is it a stock which is beaten down hard but has had strong profit growth over the past 5 years, historically strong margins and what have you?  This is an exclusive way of looking at the market, whether we call it that or not.  We are mentally filtering the market down to very specific subsets, excluding all the rest, knowing well that there are probably a large number of stocks which have as much or more potential than the ones we’re looking at.  
It might be of value to chew on this a little.</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/113493770986582825/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/113493770986582825' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113493770986582825'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113493770986582825'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/12/exclusive-vs-inclusive-thoughts-on.html' title='Exclusive vs Inclusive; Thoughts on Model Building'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-113325850366769768</id><published>2005-11-29T01:12:00.000-08:00</published><updated>2005-11-29T02:25:57.676-08:00</updated><title type='text'>Estimation versus Decision Making; Thoughts on Asymmetric Cost Functions; Thoughts on Stability; Generalization</title><content type='html'>I&#39;ve been thinking a lot about prediction lately because of work I&#39;ve been tooling around with and had some thoughts I&#39;d like to bounce off people.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;The Dilemma&lt;/em&gt;&lt;br /&gt;I&#39;ve been tooling around with more useful ways to generate useful relationships between input and output variables and had the following worry (this is pretty basic; sorry everyone). Least squares minimizes the sum of squared residuals. Normal neural networks also use MSE as their objective function which they try to minimize... 
through error back propagation instead of through a few simple statistics. The thing about MSE, though, is that it implicitly assumes that positive residuals are just as costly as negative residuals. In other words, if my program predicts that a stock will return 10% over a 3 month horizon, it&#39;s just as bad if the stock actually returns 12% as it is if it returns 8%. This obviously doesn&#39;t jibe with intuition on a couple levels. Incremental loss is worse than incremental gain: losses generate more disutility than equivalent gains generate utility. I honestly don&#39;t remember off the top of my head what the actual multiple is, but this is pretty well established through controlled experiments. Therefore, I was thinking it is not terribly appropriate to use algorithms which minimize MSE. Why not tweak the cost function to make the residual penalty conditional on the sign of the error, so that one becomes more sensitive to losses? In this way, if one can find sub-segments of one&#39;s data which, post-regression, still show up with nice expected returns, one can rest more assured that such trades are indeed good trades to put on, and place one&#39;s bets accordingly.&lt;br /&gt;&lt;br /&gt;Seemed to make some sense. Stunk in practice.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Issues with Asymmetric Cost Functions&lt;/em&gt;&lt;br /&gt;The issue with what I just described is that it&#39;s combining two things-- estimation and decision making. The regression estimates; the asymmetric cost function adjusts one&#39;s decision. It&#39;s not right to combine these things, in my opinion.&lt;br /&gt;&lt;br /&gt;Standard regression and a traditional neural network are pure estimators of the multivariate relationships within one&#39;s data. &quot;Pure&quot; in the sense that given an asymptotically large amount of data, one should theoretically converge to the true stochastic relationship. If one can&#39;t, then one&#39;s algorithm isn&#39;t a good estimator, and it&#39;s hard to have faith in the results. 
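For concreteness, a sign-dependent squared error of the kind proposed above might look like the following sketch. The 2x penalty on losses is purely illustrative, a stand-in for whatever loss-aversion multiple the controlled experiments actually support:

```python
def asymmetric_mse(predicted, actual, loss_weight=2.0):
    """Mean squared error where shortfalls (actual below predicted) are
    penalized loss_weight times as heavily as upside surprises.  The
    default multiple of 2 is illustrative, not an empirical estimate."""
    total = 0.0
    for p, a in zip(predicted, actual):
        err = a - p
        weight = 1.0 if err >= 0 else loss_weight  # undershooting hurts more
        total += weight * err * err
    return total / len(predicted)
```

With this loss, a stock that was predicted to return 10% and actually returned 8% is scored twice as badly as one that returned 12%, which is exactly the asymmetry described in the text.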
Estimation is looking to give its best guess of the current situation, paying no heed to utility or cost.&lt;br /&gt;&lt;br /&gt;I tend to think that once the estimation has been done, only then is it valid to begin the decision making process. Now if the relationships in one&#39;s data were completely deterministic, there would be no need for decision making. The estimator would leave you with zero residuals, and one could trade away, theoretically, as long as it&#39;s legitimate to assume that the future will behave like the past.&lt;br /&gt;&lt;br /&gt;Of course in the real world there is stochasticity. One cannot eliminate it. So in my opinion, the proper way to go about incorporating asymmetric cost is to take the original data set, run the best pure estimation algorithms you&#39;ve got, and base your decision making on the resulting residual plot. Of course, when I say algorithms I&#39;m factoring in not only a neural net and/or regression model, but also the resulting residual analysis, looking for serial autocorrelation and returns-based factors. The whole deal. Net it all out. What I want is a set of residuals which has no trending. I believe that a pure choice can be made on the distributional properties of this set of residuals. If I can characterize this normalized set of residuals (some volatility clustering, any other conditional volatility effects, and some other effects which perhaps I can&#39;t really explain), then I&#39;m finally getting somewhere with regards to decision making.  Maybe I can go back into other datasets and find some possible reason why the process behaved the way it did during its abnormal period, and do my best to generalize that in such a way that I can incorporate some risk of that happening again in some form. &lt;br /&gt;&lt;br /&gt;I guess I&#39;m essentially splitting the concept of risk from the concept of return and dealing with each entirely separately.  Expected returns are what they are.  
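As one small example of the residual checks just mentioned, a sample autocorrelation at a given lag can flag leftover trending; values near zero across lags are what a clean residual set should show. A sketch, not a full diagnostic suite:

```python
def sample_autocorr(resid, lag=1):
    """Sample autocorrelation of a residual series at the given lag --
    a quick check that the 'pure' estimation step left no serial
    structure behind."""
    n = len(resid)
    mean = sum(resid) / n
    num = sum((resid[i] - mean) * (resid[i - lag] - mean) for i in range(lag, n))
    den = sum((r - mean) ** 2 for r in resid)
    return num / den
```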
One has used algorithms which hopefully give unbiased estimates of them.  It seems to me that the decision then is simply a function of &quot;risk,&quot; with an elementary adjustment for return.  That makes sense to me-- much more sense than packing everything into the regression itself, which is sloppy and doesn&#39;t seem to me to be pure. But perhaps I&#39;m missing something.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Stability&lt;/em&gt;&lt;br /&gt;Another fundamental issue with prediction is stability. Am I looking at a statistical fluke in my data-set, or is this real? There are of course many techniques which can help give good guidance in this direction, like bootstrap and cross validation; the general intuition behind bootstrap being &#39;well, just how different &lt;em&gt;is&lt;/em&gt; this from noise, if I were to make the assumption that I &lt;em&gt;am&lt;/em&gt; indeed looking at noise?&#39;, and the intuition behind cross validation being &#39;wait a minute, wouldn&#39;t it make more sense to get some feel for how my predictive algorithm does on out of sample data, on average?&#39; Which of course naturally leads to prediction mean squared error (PMSE).&lt;br /&gt;&lt;br /&gt;Anyway, one perhaps &quot;dumb&quot; way to increase stability is as follows. Feedback heavily encouraged, by the way. Take the hypothetical case that I am trying to predict 3 month forward returns. I can of course simply gather the 3 month forward return for all stocks. However, this could be a major strain if I am dealing with a small-ish data set and a lot of predictor variables. The problem I see with this is that 3 months is, in some ways, arbitrarily fixed, and it is just one data point among hypothetically tons. The POINT of the regression is in some ways to be able to identify outperformance. Outperformance over 3 months is awesome, but there is nothing special about that number. Therefore, throw in 1 month, 3 months and 5 months, for example, and minimize the aggregate cost function on all three. 
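The straddling-horizons idea can be made concrete with a toy joint fit: stack the 1, 3 and 5 month forward returns into a single least-squares problem, so one shared slope has to explain all three horizons at once. A sketch with a single predictor; a multivariate model would follow the same pattern:

```python
def fit_multi_horizon(x, horizon_targets):
    """Fit one slope/intercept jointly against several forward-return
    horizons by stacking them into a single least-squares problem.
    Minimizing the aggregate squared error across straddling horizons
    is the stabilization idea described above; this single-predictor
    version is a sketch, not a full model."""
    xs, ys = [], []
    for target in horizon_targets:  # one list of forward returns per horizon
        xs.extend(x)
        ys.extend(target)
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(xs, ys))
    var = sum((a - mean_x) ** 2 for a in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x
```

A coefficient that only looks good at exactly 3 months gets averaged away by the stacked fit, while a genuine relationship should survive at the neighboring horizons too.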
If the results at 3 months were a statistical fluke, it would be more likely that that stock would then underperform over the two other time horizons. Conversely, if the stock is truly an outperformer, it would have a heightened probability of outperformance at 1 month and 5 months as well. By throwing in additional outputs which straddle yours, you stabilize your results, I would think. But I could be wrong.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Generalization&lt;/em&gt;&lt;br /&gt;I really think good decision making comes down to proper delineation between risk and return, and how one can go about gaining confidence in one&#39;s estimate of the two.   It&#39;s so simple to say.  If only it were easier to implement in practice!</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/113325850366769768/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/113325850366769768' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113325850366769768'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/113325850366769768'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/11/estimation-versus-decision-making.html' title='Estimation versus Decision Making; Thoughts on Asymmetric Cost Functions; Thoughts on Stability; Generalization'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112953591683765786</id><published>2005-10-17T00:48:00.000-07:00</published><updated>2005-10-17T00:58:36.846-07:00</updated><title type='text'>Heavy Short Interest in 
ETF&#39;s</title><content type='html'>Sorry for the lack of entries of late.  I can&#39;t talk about what I&#39;ve been doing recently.&lt;br /&gt;&lt;br /&gt;Interesting &lt;a href=&quot;http://www.marketwatch.com/news/archivedStory.asp?archive=true&amp;dist=ArchiveSplash&amp;amp;siteid=mktw&amp;guid=%7B305171C2%2D23FB%2D445B%2D8719%2D89F11F8AF8EF%7D&amp;amp;returnURL=%2Fnews%2Fstory%2Easp%3Fguid%3D%7B305171C2%2D23FB%2D445B%2D8719%2D89F11F8AF8EF%7D%26siteid%3Dmktw%26dist%3D%26archive%3Dtrue%26param%3Darchive%26garden%3D%26minisite%3D&quot;&gt;article&lt;/a&gt; today about short interest in ETF&#39;s.  There apparently are now a half dozen ETF&#39;s with short interest levels greater than 100%, with the weighted average around 21%. Needless to say, it&#39;s concentrated in popular sectors which may require sector-specific hedging to remain sector-neutral-- gold and oil.  I didn&#39;t even know it was possible for an ETF to have a short interest of 308% (Retail HOLDRS--&#39;RTH&#39;). &lt;br /&gt;&lt;br /&gt;I honestly am not sure what the profit implications are to an ETF sponsor with such high short interest.  Are the economics the same?  I would assume that healthy short interest would be a boon for ETF&#39;s, since one of their primary purposes is as a hedge.  Again no need to beat a dead horse but I would hope that puts WSDT at a natural 21% discount to the average ETF... 
or else something is seriously wrong.</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112953591683765786/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112953591683765786' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112953591683765786'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112953591683765786'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/10/heavy-short-interest-in-etfs.html' title='Heavy Short Interest in ETF&#39;s'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112909962899664971</id><published>2005-10-11T22:53:00.000-07:00</published><updated>2005-10-12T00:01:53.016-07:00</updated><title type='text'>Rydex Preaches &quot;Essential Portfolio Theory&quot;</title><content type='html'>&lt;strong&gt;Rydex Introduces &quot;Essential Portfolio Theory&quot;&lt;/strong&gt;&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;General&lt;/strong&gt;&lt;br /&gt;Fancy name but essentially what they&#39;re trying to preach is diversification across asset classes-- not only stocks and bonds but also real estate, commodities, and much more. In doing so, they intend to provide value through diversification, hopefully moving up that efficient frontier through the use of relatively less correlated assets. Additionally, though, one must ask the question-- would individual investors even know what the efficient composition should be of these asset classes, even if one knows that diversification is a good thing? Probably not. 
Rydex can spend some bucks on a few geniuses and then spread the overhead over the hopefully large number of people who end up buying into the ETF&#39;s. Obviously this is something an individual investor could only do through concerted effort and a much greater expenditure of resources.&lt;br /&gt;&lt;br /&gt;Rydex&#39;s claim is that a strategy like this one used to only be available to institutional investors, but Rydex intends to bring it to individual investors. This makes sense. I wonder just how much turnover there is relative to some of the more traditional indexation strategies, but my guess is that it isn&#39;t bad.&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;People Involved&lt;/strong&gt;&lt;br /&gt;Princeton professor &lt;a href=&quot;http://www.princeton.edu/~mulvey/index.htm&quot;&gt;John Mulvey&lt;/a&gt; is helping Rydex in the construction of EPT-based portfolios. Given the information provided about his &lt;a href=&quot;http://www.princeton.edu/~mulvey/consulting.htm&quot;&gt;consulting&lt;/a&gt; background, and his expertise in large-scale optimization models, it seems pretty clear that he is doing some linear or non-linear programming for portfolio optimization purposes. For those who are unfamiliar with how these programs work I&#39;ll attempt to shine some light on the subject.&lt;br /&gt;&lt;br /&gt;Linear and non-linear programs maximize (or minimize) something called an objective function subject to a set of constraints. For example, assume that you are an institutional money manager and cannot put more than 1% of your wealth in any individual stock, no more than 10% of your wealth in any particular sector, cannot go short, can only invest in equities in the US and in China, and know that your investor base is primarily looking for a slow and steady return with little volatility. One could structure a linear program to create an efficient or optimal portfolio, given some past price (and perhaps volume) data on the instruments you are allowed to invest in. 
Rebalancing could be done every so often by re-running the program, which is trained on perhaps some sort of a rolling time horizon and/or forgetting time (both of which can also be tweaked, although one must watch out for non-stationarity and overfitting as usual). The optimal program would probably be something along the following lines:&lt;br /&gt;&lt;br /&gt;Minimize the volatility of your portfolio holdings {X(1),X(2),X(3),...X(N)}, where X(1...N) comprise the weightings of each stock which can be in your portfolio, subject to the constraints that&lt;br /&gt;(1) your expected annual return is at least R(target),&lt;br /&gt;(2) {X(1),X(2),X(3),...X(N)} must all be less than or equal to .01,&lt;br /&gt;(3) {S(1),S(2),S(3),...S(n)}, your corresponding sector weightings, must all be less than or equal to .1,&lt;br /&gt;(4) {X(1),X(2),X(3),...X(N)} must all be greater than or equal to 0 to avoid going short,&lt;br /&gt;...&lt;br /&gt;etc etc. 1...N encapsulates the constraint on the universe of potential holdings, and R(target) is probably a spread off of the risk free rate.&lt;br /&gt;&lt;br /&gt;Turning to an EPT portfolio, then, one will probably see something similar to this.  Perhaps they are trying to maximize returns subject to a maximum level of volatility.  Their investment universe 1...N is most certainly quite large to account for the broad number of asset classes being drawn upon.  Their rebalancing interval is probably pretty long so as to keep turnover low.  
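A toy version of the program above, for illustration only: brute-force a coarse weight grid, keep only the weight vectors that satisfy the constraints, and take the feasible portfolio with the smallest variance. A real implementation would use a linear or quadratic programming solver rather than enumeration, and all numbers here are hypothetical:

```python
from itertools import product

def min_variance_grid(exp_ret, cov, target, cap=0.5, step=0.05):
    """Brute-force the constrained program sketched above on a coarse
    weight grid: minimize portfolio variance subject to full investment,
    a per-asset cap, no shorting, and a minimum expected return.  A toy
    stand-in for a real optimizer."""
    n = len(exp_ret)
    ticks = [round(i * step, 10) for i in range(int(round(1 / step)) + 1)]
    best = None
    for w in product(ticks, repeat=n):
        if abs(sum(w) - 1.0) > 1e-9:
            continue  # must be fully invested
        if any(x > cap + 1e-9 for x in w):
            continue  # per-position cap; the grid itself rules out shorts
        if target - 1e-9 > sum(x * r for x, r in zip(w, exp_ret)):
            continue  # must hit the expected-return target
        var = sum(w[i] * w[j] * cov[i][j] for i in range(n) for j in range(n))
        if best is None or best[0] > var:
            best = (var, w)
    return best
```

Enumeration scales terribly with the number of assets, which is exactly why a large multi-asset universe like the one described here calls for a proper programming solver.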
In addition, it is only reasonable to assume that they are factoring in the differential transaction costs between these different asset classes, because it is most certainly more expensive to trade, say, a corporate bond of a local Brazilian paper company than it is to buy up a handful of shares of IBM.&lt;br /&gt;&lt;br /&gt;Finally, given that these EPT portfolios are presumably going to be around for a while, I would assume that the portfolios would be conditioned to be multi-period optimal as well through the use of simulation or a variant of the Kelly criterion or something like that. &lt;br /&gt;&lt;br /&gt;&lt;strong&gt;Last Bits of News on This and Bigger Picture Perspective&lt;/strong&gt;&lt;br /&gt;Not much else to say. Mulvey and Reilly, the Director of Fund Research at Rydex, spoke in Manchester on June 27th and June 28th regarding EPT as part of a very large conference which included the likes of Colin Powell. So the word has been out for a few months already.&lt;br /&gt;&lt;br /&gt;Overall it seems that the ETF world is moving in the direction WSDT is moving in. These Rydex ETF&#39;s and some of the promotional material they&#39;ve been spitting out smack of a more active, more broadly diversified, generally more innovative breed of ETF&#39;s. Like I&#39;ve said before, conditional on WSDT releasing something of value with little overlap with these other more innovative ETF&#39;s, this sort of movement is indeed quite good. 
When Rydex goes over the news wire saying that Modern Portfolio Theory could use some help given the competitive nature of today&#39;s market environment, and that more needs to be done by the individual investor should he or she want to meet his or her investing goals over the longer term, Rydex is basically saying (IMHO) &quot;I will pony up a lot of money to educate these stupid people who just don&#39;t understand that they are investing sub-optimally on a risk adjusted basis and could use the helping hand of firms like ours and WSDT who provide lower cost, more efficient investment products.&quot;&lt;br /&gt;&lt;br /&gt;That being said, I&#39;m still left wondering what WSDT is going to do.&lt;br /&gt;-Dan&lt;br /&gt;&lt;br /&gt;ps. Funny to note-- so on the one hand you have WSDT which is heavily based out of Wharton, whose head of fund research is a Wharton professor. On the other hand you have Rydex which is coming out with innovative indices and is drawing on the brainpower of a Princeton professor. 
It seems we cannot escape this rivalry between Princeton and Wharton (go Wharton!).</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112909962899664971/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112909962899664971' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112909962899664971'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112909962899664971'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/10/rydex-preaches-essential-portfolio.html' title='Rydex Preaches &quot;Essential Portfolio Theory&quot;'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112831755877636751</id><published>2005-10-02T22:25:00.000-07:00</published><updated>2005-10-02T22:34:03.296-07:00</updated><title type='text'>Thoughts on Insider Trading in Small Caps vs. Large Caps</title><content type='html'>Don&#39;t have much time to write but I just thought I&#39;d share a few thoughts regarding illegal insider trading. After reading about the recent Citizens insider trading case, one might wonder why there seem to be so few cases of insider trading in small companies. Two reasons come to mind.&lt;br /&gt;&lt;ol&gt;&lt;li&gt;Journalists don&#39;t care much for the smaller stories. People don&#39;t know the companies, the amount of money being made or lost is typically smaller, and in general there&#39;s little ability for journalists to sensationalize the story. &lt;/li&gt;&lt;li&gt;Might a similar logic hold true for the SEC? 
I&#39;ve heard that this may very well be the case! In an ideal world, it would be great for the SEC to go after each and every case of insider trading. But the sad fact is that they are constrained by their resources. This naturally causes them to deal primarily with bigger companies and bigger trades. &lt;/li&gt;&lt;/ol&gt;&lt;p&gt;I&#39;m not encouraging people to go out and try to obtain material non-public information from small companies. That being said, it makes an interesting case for insider trackers. Maybe it&#39;s more profitable to follow the little guys, not only because small cap companies tend to be less followed, but also because the insiders themselves may know that they are less exposed to the headline and legal risks which their big cap counterparts are exposed to.  This double whammy might create interesting profit opportunities. &lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112831755877636751/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112831755877636751' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112831755877636751'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112831755877636751'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/10/thoughts-on-insider-trading-in-small.html' title='Thoughts on Insider Trading in Small Caps vs. 
Large Caps'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112805473025010098</id><published>2005-09-29T19:52:00.000-07:00</published><updated>2005-09-29T21:42:41.643-07:00</updated><title type='text'>What We Can and Cannot Take Away From Clinical Studies Regarding Investments; A Generalization</title><content type='html'>An interesting way to increase the information set from which we make judgements on human judgement relative to quantitative estimation in an investment framework is to, of course, draw on similar comparisons from other disciplines.&lt;br /&gt;&lt;br /&gt;One such discipline is clinical studies (props to Kelvin for the article). A number of &lt;a href=&quot;http://scholar.google.com/scholar?as_q=clinical+versus+actuarial+judgement&amp;num=10&amp;amp;btnG=Search+Scholar&amp;as_epq=&amp;amp;as_oq=&amp;as_eq=&amp;amp;as_occt=any&amp;as_sauthors=&amp;amp;as_publication=science&amp;as_ylo=&amp;amp;as_yhi=&amp;as_allsubj=all&amp;amp;amp;amp;amp;amp;amp;hl=en&amp;lr=&amp;amp;safe=off&quot;&gt;papers&lt;/a&gt; have dived into comparisons of the two, making clear that much of the estimation currently done in the medical field by doctors and the like should actually probably be made through actuarial methods. The bottom line, it seemed, was that actuarial methods dominate their clinical counterparts in almost every study that has been performed. The tests usually consist of assessing the probability of having a machine and a clinician make a judgement about the nature of a person&#39;s illness given the same dataset, and comparing the respective frequencies. Even when clinicians are given an informational advantage, they still don&#39;t beat their machine counterpart. 
In many cases, the new information doesn&#39;t help them at all.&lt;br /&gt;&lt;br /&gt;I would encourage people to read some of the findings-- it&#39;s really interesting stuff! But before everyone goes off and becomes a quant there are a few things that should be noted; the caveat is that investing is not the same as making a handful of prognoses at a hospital.&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Unlike in a hospital setting where everyone needs to get diagnosed (deferring judgement isn&#39;t an option), investors have the liberty to avoid that which they have no &quot;edge&quot; on. Charlie Munger comes to mind. In some ways he does precisely the opposite of what the clinician is told to do. He sits on his hands and waits until he sees what he perceives to be a huge opportunity and he puts on a position in size. Market making is another story. &lt;/li&gt;&lt;li&gt;Incidentally, this is why I think many rapid fire trading strategies tend to be &lt;strong&gt;short vol&lt;/strong&gt;. Processes assume a certain set of statistical properties until they don&#39;t. Shocks to the system and regime shifts don&#39;t lend themselves well to automated models, which might have a difficult time assessing when it&#39;s time to re-evaluate the model. I would tend to say along these lines that Charlie Munger&#39;s methodology is &lt;strong&gt;long vol&lt;/strong&gt;.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Liquidity&lt;/strong&gt; removes some of the comparability between the two fields. In some sense I guess there is no liquidity in the medical world-- you make the choice, then are subject to a binary outcome-- yes or no. In the markets it has implications on competition and hence efficiency, transaction costs, market impact, etc. &lt;/li&gt;&lt;/ul&gt;&lt;p&gt;That being said, some of the criticisms of clinicians&#39; assessments remind me very strongly of the psychological biases to which investors are subject. 
Overconfidence when it isn&#39;t merited (being Fooled By Randomness), viewing historical events as more causal and less random than they actually were, the phenomenon of being flooded by data to the point that judgement is actually impeded, misconceived disdain for aggregate statistics, improper and randomly varying factor weighting... these are universal decision making problems. &lt;/p&gt;&lt;p&gt;Given the nature of the decisions being made by clinicians, it makes a lot of sense to me that a quantitative framework is more appropriate. That being said, I don&#39;t believe the same is necessarily true of the financial markets. Or perhaps I am being fooled by personal bias :)&lt;/p&gt;&lt;p&gt;-Dan&lt;/p&gt;&lt;p&gt;ps. In the same way that one goes about increasing one&#39;s information set by looking at comparable situations, the same can be said of stocks. Of course we all know the age old trick of looking at comps. I&#39;m actually referring to estimating &quot;comparability&quot; by correlating the time series of a stock with every other stock in the market and rank ordering the pairs in terms of absolute correlation. Of course it might be of value to decrease the resolution of the series to get a more fitting view of reality. One may also want to make other adjustments. But the bottom line is this-- there are some stocks out there which have correlations over the past year that are literally up around 60%. This is ridiculously high. One will also find that certain industries just happen to correlate more than other industries. This has profound implications on our ability to make individual stock bets. &lt;/p&gt;&lt;p&gt;Why should I look at Beta? We look at Beta because our stocks tend to be positively correlated with the market. But if you actually do out the numbers, with daily resolution the absolute correlations are typically quite low. Betas of 1 or 2 or more are typically a result of having a much higher relative vol. 
&lt;/p&gt;&lt;p&gt;Now imagine that you have a stock whose correlation with its industry is around 40 or 50%. Do I want to focus my attention on my one company? Perhaps, but from a risk management point of view, there are marked differences between this and a more statistically dispersed industry. &lt;/p&gt;&lt;p&gt;Furthermore, correlation studies have implications on information gathering. And hedge effectiveness. But I will leave that up to my readers to think about.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112805473025010098/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112805473025010098' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112805473025010098'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112805473025010098'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/what-we-can-and-cannot-take-away-from.html' title='What We Can and Cannot Take Away From Clinical Studies Regarding Investments; A Generalization'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112797805967062899</id><published>2005-09-29T00:07:00.000-07:00</published><updated>2005-09-29T00:38:49.886-07:00</updated><title type='text'>Updates</title><content type='html'>I will be a bit on the busy side over the next few weeks. Good news-- I will be a guest speaker at an Information Systems/Information Management Seminar Series in the Operations Research Dept at the University of Pennsylvania on October 28th. 
Needless to say, the subject is, tentatively, &quot;Web Mining, data integration and the stock market.&quot;&lt;br /&gt;&lt;br /&gt;Hopefully I don&#39;t say or do something stupid, and more importantly, hopefully I actually have something which people might consider interesting!&lt;br /&gt;&lt;br /&gt;Highlight of the week: Last Thursday, I got to shake Jim Simons&#39; hand. Arguably the best hedge fund manager in existence. Quite an honor.</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112797805967062899/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112797805967062899' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112797805967062899'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112797805967062899'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/updates.html' title='Updates'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112779118707762475</id><published>2005-09-26T20:19:00.000-07:00</published><updated>2005-09-26T20:19:47.083-07:00</updated><title type='text'>Our Worst Enemy is Ourselves</title><content type='html'>Our Worst Enemy is Ourselves&lt;br/&gt;&lt;br/&gt;Prior to the internet, large scale privacy abuse was all but impossible.&amp;nbsp;&amp;nbsp;When information was stored in physical documents at home, privacy abuse was simply too expensive to scale.&amp;nbsp;&amp;nbsp;The same is not true for information stored on the internet.&amp;nbsp;&amp;nbsp;The density of the internet’s network 
structure makes it very vulnerable to targeted attacks.&amp;nbsp;&amp;nbsp;Voluntary or not, the social transition to internet connectivity will inevitably lead to a loss of personal privacy, especially as advertisers find increasingly innovative ways to exploit information about us and our social networks.&amp;nbsp;&amp;nbsp;Much of what society now considers private will not be so in 20 years because of the internet.&amp;nbsp;&amp;nbsp;&lt;br/&gt;&lt;br/&gt;There are many legitimate arguments that run contrary to this notion.&amp;nbsp;&amp;nbsp;We value our privacy highly and have built protections for it into our Constitution through the Fourth Amendment.&amp;nbsp;&amp;nbsp;We have regulatory groups in place to protect society’s privacy.&amp;nbsp;&amp;nbsp;These groups have spurred the creation of laws like the Electronic Communications Privacy Act, which declares that email is a private means of communication and should be subject to the same level of privacy as phone calls and letters.&amp;nbsp;&amp;nbsp;Technology has been created to proactively counter privacy abuse—encryption techniques have become more powerful, and an active market has been built around spam filters.&amp;nbsp;&amp;nbsp;For each virus that has wreaked havoc on networks of computers, there has been an add-on created to neutralize it.&amp;nbsp;&amp;nbsp;Speaking more broadly, our free market system itself has countered privacy abuse—problems of the past have created a consumer demand for protection, which in turn has led to the creation of electronic security companies to effectively meet this demand.&lt;br/&gt;&lt;br/&gt;However, can we honestly say that we don’t want to give up our privacy under the right circumstances?&amp;nbsp;&amp;nbsp;While it is indeed of value to us, history has shown that we are willing to voluntarily sacrifice privacy for functionality.&amp;nbsp;&amp;nbsp;Gmail, Facebook, Google Search and VisiblePath are notable recent examples of 
this.&amp;nbsp;&amp;nbsp;Gmail is perhaps the best free email service available today, with 2.6GB of storage and a very useful search capability.&amp;nbsp;&amp;nbsp;However, its useful services come at the expense of privacy—Gmail has robots that scan all of our emails so that it can craft targeted advertisements.&amp;nbsp;&amp;nbsp;Facebook allows students to connect more easily with friends, but asks them to voluntarily disclose personal information like phone numbers, email addresses and interests.&amp;nbsp;&amp;nbsp;Google’s search engine vastly expands users’ ability to retrieve information.&amp;nbsp;&amp;nbsp;Users tacitly compensate Google by allowing it to bombard them with advertisements tailored to prior search history and location.&amp;nbsp;&amp;nbsp;VisiblePath systematically scours the social networks of employees through their emails and address books to identify potential connections with other corporations.&amp;nbsp;&amp;nbsp;This improves corporate efficiency at the expense of employee privacy.&amp;nbsp;&amp;nbsp;137M US citizens, 45% of the current US population, use the internet. 
84% of these users regularly use search engines like Google, and 92.5% regularly use email services like Gmail.&amp;nbsp;&amp;nbsp;These percentages will inevitably continue to grow, making it all the more profitable for companies and advertisers to innovate and expand their offerings.&amp;nbsp;&amp;nbsp;Are we going to enact regulations we don’t want to enact?&amp;nbsp;&amp;nbsp;Is the free market system going to create products that respect user privacy but have no consumer demand?&amp;nbsp;&amp;nbsp;Our problem, if it is even valid to call it one, is that we &lt;em&gt;want &lt;/em&gt;to give up our privacy.&amp;nbsp;&amp;nbsp;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112779118707762475/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112779118707762475' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112779118707762475'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112779118707762475'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/our-worst-enemy-is-ourselves.html' title='Our Worst Enemy is Ourselves'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112770089398761107</id><published>2005-09-25T19:14:00.000-07:00</published><updated>2005-09-25T19:14:53.993-07:00</updated><title type='text'>Yahoo Stock R Mining Functions</title><content type='html'>&lt;span style=&quot;font-family:Georgia;&quot;&gt;Here are some functions which may be of use to those of you who use R.  
Gotta do my part for the open source movement!  Pretty tame. &quot;yimp&quot; gathers price data for an arbitrary number of stocks over an arbitrary time period.  &quot;ksImport&quot; gathers a handful of key statistics for whatever stocks you want and throws them into a list.  Check it out.  If anyone has any follow-ups, corrections or comments please feel free to email me.&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-Danny&lt;/span&gt;&lt;br/&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;#Yahoo Price History Gatherer -- for example, yimp(c(&quot;IBM&quot;,&quot;GE&quot;),20050101,20050901)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;yimp &amp;lt;- function(ticker.list,start.date, end.date, data=TRUE, plot=FALSE){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Source = &quot;http://ichart.finance.yahoo.com/table.csv?&quot; &lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;startmonth &amp;lt;- as.numeric(substring(start.date,5,6))-1&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;endmonth &amp;lt;- as.numeric(substring(end.date,5,6))-1&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;nstocks &amp;lt;- length(ticker.list)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;for(i in 1:nstocks){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(startmonth &amp;lt;10){&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;startmonth &amp;lt;- paste(&quot;0&quot;,startmonth,sep=&quot;&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(endmonth &amp;lt;10){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;endmonth &amp;lt;- paste(&quot;0&quot;,endmonth,sep=&quot;&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Query &amp;lt;- paste(&quot;&amp;a=&quot;, startmonth,&quot;&amp;b=&quot;, as.numeric( substring( start.date,7,8) ),&quot;&amp;c=&quot;, as.numeric( substring( start.date,1,4)),&quot;&amp;d=&quot;, endmonth,&quot;&amp;e=&quot;, as.numeric( substring( end.date,7,8)),&quot;&amp;f=&quot;,as.numeric( substring( end.date,1,4)), &quot;&amp;g=d&amp;ignore=.csv&quot;,sep=&quot;&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;download.file( url=paste( Source,&quot;&amp;s=&quot;,ticker.list[i],Query,sep=&quot;&quot;),destfile= 
&quot;tempfile&quot;,quiet=TRUE )&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;temp&amp;lt;- read.delim(&quot;tempfile&quot;,sep=&quot;,&quot;,as.is=TRUE,fill=TRUE)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;temp &amp;lt;- temp[,c(&quot;Date&quot;,&quot;Adj..Close.&quot;)]&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(temp) &amp;lt;- c(&quot;Date&quot;,ticker.list[i])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;time &amp;lt;- sub(&quot;-&quot;,&quot;&quot;,sub(&quot;-&quot;,&quot;&quot;,temp[,&quot;Date&quot;]))&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;tempnames &amp;lt;- colnames(temp)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;temp &amp;lt;- data.frame(strptime(time,&quot;%d%b%y&quot;),temp[,2])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(temp) &amp;lt;- tempnames&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(plot==TRUE){&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;windows()&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;plot( x=temp[,&quot;Date&quot;], y=temp[,ticker.list[i]], type=&quot;l&quot;,col=&quot;blue&quot;,lwd=1, main=paste(&quot;Prices for &quot;,ticker.list[i],&quot; from &quot;, temp[1,1],&quot; to &quot;,temp[nrow(temp),1],sep=&quot;&quot;), xlab=paste(&quot;Date&quot;,sep=&quot;&quot;), ylab=&quot;Price&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;end &amp;lt;- nrow(temp)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;mid &amp;lt;- mean(temp[,ticker.list[i]])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;sdup &amp;lt;- mean( temp[,ticker.list[i]]) + sd(temp[,ticker.list[i]])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;sddown &amp;lt;- mean( temp[,ticker.list[i]]) - sd(temp[,ticker.list[i]])&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;lines(c( temp[1,1],temp[nrow(temp),1]),c(mid,mid), col=&quot;red&quot;,lwd=2)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;lines(c( temp[1,1],temp[nrow(temp),1]),c(sdup,sdup), col=&quot;red&quot;,lwd=1)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;lines(c( temp[1,1],temp[nrow(temp),1]),c(sddown,sddown), col=&quot;red&quot;,lwd=1)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(i ==1){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;list &amp;lt;- temp&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(i !=1){&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(nrow(temp)&amp;gt;nrow(list)){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;#if the temp is larger than list, then set the temp dates as the list dates, append to all&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;#columns in the small list NA&#39;s until they match in length to temp, then append temp to the&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;#end. 
&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;list2 &amp;lt;- list&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;list2names &amp;lt;- colnames(list)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;tempnames &amp;lt;- colnames(temp)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;list &amp;lt;- temp[,1]&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;oldlength &amp;lt;- nrow(list2)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;for(k in 2:ncol(list2)){&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;newtemp &amp;lt;- as.numeric( append(list2[,k],rep(&quot;NA&quot;,(nrow(temp)-oldlength))))&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;list &amp;lt;- data.frame(list,newtemp)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(list) &amp;lt;- c(colnames(list)[1:(k-1)],list2names[k])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(list) &amp;lt;-c( tempnames[1],colnames(list)[2:ncol(list)])&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;list &amp;lt;- data.frame(list,temp[,2])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(list) &amp;lt;- c( colnames(list)[1:(ncol(list)-1)], tempnames[2])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;#Note: this makes the assumption that up until we have no price data for a particular stock, all stocks in&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;#the set trade on the same days. This will be true almost all the time, except for instances in which a&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;#particular stock is forced to cease trading (for example, for regulatory reasons).&amp;nbsp;&amp;nbsp;I have yet to see an&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;#instance of this, but it could very well happen I would imagine, unless yahoo corrects for this. 
&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(nrow(list)&amp;gt;nrow(temp)){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;tempname &amp;lt;- colnames(temp)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;newtemp &amp;lt;- as.numeric( append(temp[,2],rep(&quot;NA&quot;,(nrow(list)-nrow(temp)))))&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;list &amp;lt;- data.frame(list,newtemp)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(list) &amp;lt;- c( colnames(list)[1:(ncol(list)-1)],tempname[2])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(nrow(temp)==nrow(list)){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;list &amp;lt;- data.frame(list,temp[,ticker.list[i]])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(list) &amp;lt;- c( colnames(list)[1:(ncol(list)-1)],ticker.list[i])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(ticker.list)&amp;gt;=3){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;list &amp;lt;- list[,-4]&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;if(data==TRUE){return(list)}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;#Key Statistics Importer -- Grab a handful of Key Statistics (ie. ksImport(query=&quot;IBM&quot;))&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;ksImport &amp;lt;- function( file = &quot;tempfile&quot;,source1 = &quot;http://finance.yahoo.com/q/ks?s=&quot;, source2 = &quot;http://finance.yahoo.com/q/in?s=&quot;,query){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;pointer &amp;lt;- &quot;:&amp;lt;/td&quot;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;offset = 2&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;nstocks &amp;lt;- length(query)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;keynames = c( &quot;Market Cap &quot;, &quot;Enterprise Value &quot;, &quot;Trailing P/E &quot;, &quot;Forward P/E &quot;, &quot;Price/Book &quot;, &lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&quot;Enterprise Value/EBITDA &quot;, &quot;Trailing Annual Dividend &quot;, &quot;EBITDA &quot;, &quot;Net Income Avl to Common &quot;, 
&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&quot;Revenue &quot;, &quot;Total Cash &quot;, &quot;Total Debt &quot;, &quot;Average Volume &quot;,&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&quot;Shares Short &quot;, &quot;Shares Outstanding:&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;temp = as.character(Sys.Date())&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;stats &amp;lt;- matrix(0,(length(keynames)+2),nstocks)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;for(j in 1:nstocks){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;temp = as.character(Sys.Date())&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;url1 = paste(source1, query[j], sep = &quot;&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;download.file(url1, file, quiet=TRUE)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;x = scan(file, what = &quot;&quot;, sep = &quot;&amp;gt;&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(grep(&quot;no longer valid&quot;,x))!=0){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;query[j] &amp;lt;- strsplit(x[grep(&quot;no longer valid&quot;,x)],split=&quot;?s=&quot;)[[1]][2]&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;url1 = paste(source1, query[j], sep = &quot;&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;download.file(url1, file, quiet=TRUE)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;x = scan(file, what = &quot;&quot;, sep = &quot;&amp;gt;&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(grep(&quot;There is no&amp;nbsp;&amp;nbsp;data available&quot;,x))!=0){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;stats[,j] &amp;lt;- &quot;NA&quot;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(grep(&quot;Invalid Ticker Symbol&quot;,x))!=0){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;stats[,j] &amp;lt;- &quot;NA&quot;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(query[j]==&quot;&quot;){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;stats[,j] &amp;lt;- &quot;NA&quot;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(sum(nchar(x)&amp;gt;15000)!=0){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;x &amp;lt;- strsplit(x[nchar(x)&amp;gt;15000],split=&quot;&amp;gt;&quot;)[[1]]&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(query[j]!=&quot;&quot;){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(grep(&quot;There is no&amp;nbsp;&amp;nbsp;data available&quot;,x))==0){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(grep(&quot;Invalid Ticker Symbol&quot;,x))==0){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;for (s in keynames) {&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;loc 
&amp;lt;- grep(s,x)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if((s==&quot;EBITDA &quot;)&amp;(length(loc)!=1)){loc &amp;lt;- loc[2]}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if((s==&quot;Revenue &quot;)&amp;(length(loc)!=1)){loc &amp;lt;- loc[2]}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if((s==&quot;Total Cash &quot;)&amp;(length(loc)!=1)){loc &amp;lt;- loc[1]}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if((s==&quot;Average Volume &quot;)&amp;(length(loc)!=1)){loc &amp;lt;- loc[1]}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(( s==&quot;Trailing Annual Dividend &quot;)&amp;(length(loc)!=1)){loc &amp;lt;- loc[2]}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if((s==&quot;Shares Short &quot;)&amp;(length(loc)!=1)){loc &amp;lt;- loc[1]}&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(grep(pointer,x[loc]))==1){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;grepped = paste(sub(&quot;&amp;lt;/td&quot;, &quot;&quot;, x[loc + offset]))&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(grep(pointer,x[loc]))==0){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;i=1&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;while(length(grep(pointer,x[loc+i]))==0){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;i 
&amp;lt;- i+1&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;grepped = paste(sub(&quot;&amp;lt;/td&quot;, &quot;&quot;, x[loc +i+offset]))&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;temp = c(temp, grepped)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;url2 = paste(source2,query[j],sep=&quot;&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;download.file(url2, file, quiet=TRUE)&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;x = scan(file, what=&quot;&quot;,sep=&quot;&amp;gt;&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;s=&quot;Industry:&quot;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;grepped = paste(substring(sub(&quot;&amp;lt;/b&quot;, &quot;&quot;, x[grep(s, x)][2]),11))&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;temp = c(temp, grepped)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;stats[,j] &amp;lt;- temp&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;for (i in 1:length(keynames)) {keynames[i] = substr(keynames[i], 1, nchar(keynames[i]) - 1)}&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;keynames = c(&quot;Date&quot;, keynames,&quot;Industry&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;output &amp;lt;- data.frame(cbind(Keyname = keynames, Statistic = stats))&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(output) &amp;lt;- c(colnames(output)[1],query)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;#tidying up the format&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;output &amp;lt;- t(output)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(output)&amp;lt;- output[1,]&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;output &amp;lt;- output[-1,]&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;names &amp;lt;- colnames(output)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(query)==1){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;output[&quot;Industry&quot;] &amp;lt;- sub(&quot;&amp;amp;&quot;,&quot;&amp;&quot;,output[&quot;Industry&quot;])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(output[&quot;Trailing Annual Dividend&quot;]==&quot;&quot;){output[&quot;Trailing Annual 
Dividend&quot;] &amp;lt;- 0}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;output[&quot;Trailing Annual Dividend&quot;] &amp;lt;- sub(&quot;%&quot;,&quot;&quot;,output[&quot;Trailing Annual Dividend&quot;])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if(length(query)&amp;gt;1){&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;output[grep(&quot;&amp;amp;&quot;,output[,&quot;Industry&quot;]),&quot;Industry&quot;] &amp;lt;- sub(&quot;&amp;amp;&quot;,&quot;&amp;&quot;,output[grep(&quot;&amp;amp;&quot;,output[,&quot;Industry&quot;]),&quot;Industry&quot;])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;output[output[,&quot;Trailing Annual Dividend&quot;]==&quot;&quot;,&quot;Trailing Annual Dividend&quot;] &amp;lt;- 0&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;output[grep( &quot;%&quot;,output[,&quot;Trailing Annual Dividend&quot;]),&quot;Trailing Annual Dividend&quot;] &amp;lt;- sub(&quot;%&quot;,&quot;&quot;,output[grep(&quot;%&quot;,output[,&quot;Trailing Annual Dividend&quot;]),&quot;Trailing Annual Dividend&quot;])&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;output &amp;lt;- data.frame(rownames(output),output)&lt;/span&gt;&lt;br/&gt;&lt;span 
style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;colnames(output) &amp;lt;- c(&quot;ticker&quot;,&quot;date&quot;,&quot;mktcap&quot;,&quot;EV&quot;,&quot;PEttm&quot;,&quot;PEfwd&quot;,&quot;PtoB&quot;,&quot;EVtoEBITDA&quot;,&quot;DivYld&quot;,&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&quot;EBITDA&quot;, &quot;NetIncome&quot;,&quot;Revenue&quot;, &quot;TotCash&quot;,&quot;TotDebt&quot;,&quot;AvgVol&quot;, &quot;TotShort&quot;,&quot;TotShares&quot;,&quot;Industry&quot;)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;return(output)&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;}&lt;/span&gt;&lt;br/&gt;&lt;span style=&quot;font-size:78%;&quot;&gt;&lt;/span&gt;&lt;br/&gt;&lt;br/&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112770089398761107/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112770089398761107' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112770089398761107'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112770089398761107'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/yahoo-stock-r-mining-functions_25.html' title='Yahoo Stock R Mining Functions'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image 
rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112735798943468565</id><published>2005-09-21T18:32:00.000-07:00</published><updated>2006-06-20T21:19:58.773-07:00</updated><title type='text'>IXDP is now WisdomTree Investments (WSDT.PK); Deeper Look at PowerShares</title><content type='html'>[Dated post-- most recent update on WSDT on June 20th 2006 is &lt;a href=&quot;http://thelearningblog123.blogspot.com/2006/06/wisdomtree-update-june-20th-2006.html&quot;&gt;here&lt;/a&gt;]&lt;br /&gt;&lt;br /&gt;It&#39;s &lt;a href=&quot;http://biz.yahoo.com/bw/050921/215880.html?.v=1&quot;&gt;official&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;PowerShares-- Bulking Up Its Management Too?&lt;/strong&gt;&lt;br /&gt;Interestingly enough, it seems that PowerShares is in some ways following WSDT and &lt;a href=&quot;http://www.sys-con.com/read/127788.htm&quot;&gt;upping its management team&lt;/a&gt;. They hired Benjamin Fulton as SVP of Product Development and Edward McRedmond as SVP of Portfolio Strategy.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Taking a Look at Fulton:&lt;/em&gt;&lt;br /&gt;Surprisingly enough, Fulton spent some time at Nuveen Investments. As a side note, WSDT has had some background with Nuveen if I remember correctly; I believe one of their earlier index ideas was in some way related to something Nuveen had created. Thankfully no lawsuits were filed (I guess WSDT was in the right!). Needless to say, nothing came of those indices, so there is nothing to worry about on that front.&lt;br /&gt;&lt;br /&gt;The article linked above mentions his being an MD at Nuveen with a focus on product development. He will most likely be a big logistical help for PS. 
That being said, Nuveen is an ETF sponsor and &lt;a href=&quot;http://www.nuveen.com/etf/resources/muni_index.aspx&quot;&gt;has introduced ETFs of its own&lt;/a&gt;, which raises a question: might it be a little more helpful to hire someone with some real ETF experience, especially if you are poaching from an ETF-sponsoring firm like Nuveen? If he had any direct ETF background, I would imagine the articles would have said so.&lt;br /&gt;&lt;br /&gt;Bottom line IMHO is that this is definitely a step up for PS. Fulton is nothing to shake a stick at. But it might have been nice to have had a little more ETF-specific experience. His background in bringing products to market puts him in a position similar to Morris at WSDT. I like Morris more.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Taking a Look at Edward McRedmond: &lt;/em&gt;&lt;br /&gt;The article linked to above speaks to McRedmond&#39;s background pretty thoroughly. He seems to have done some solid analysis of the ETF space and probably has a stronger grasp of the product than most people. The only question I have about him is why, after a full 17 years at AG Edwards, he couldn&#39;t move any higher than Associate Vice President. At Citigroup, time-adjusted, this doesn&#39;t amount to a super ton.&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;Getting a Better Picture of PS&#39;s ETFs: &lt;/strong&gt;&lt;br /&gt;They&#39;ve got around 23 ETFs in total, which they group into four flavors.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;A Closer Look at PS&#39;s Dividend ETF Portfolio: &lt;/em&gt;&lt;br /&gt;One very popular flavor is dividends, the oldest of which is PEY, the High Yield Equity Dividend Achievers ETF. They now have four dividend-based ETFs in total-- the other three are the International Dividend Achievers (PID), Dividend Achievers (PFM), and High Growth Rate Dividend Achievers (PHJ). 
Statistics on these portfolios are contained below.&lt;br /&gt;&lt;br /&gt;Enough with the boring details-- can you guys see anything interesting about the historical performance statistics? This is not too hard to see.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;p&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center;&quot; alt=&quot;&quot; src=&quot;http://photos1.blogger.com/blogger/3390/1312/400/statistics.jpg&quot; border=&quot;0&quot; height=&quot;300&quot; width=&quot;457&quot; /&gt;... the historical performance of the newly created ETFs sucks, unless I&#39;m really missing something. The Sharpe for the flagship PEY knocks the freaking &lt;u&gt;socks&lt;/u&gt; off of the three new ETFs. And at the same time, the historical Beta is around half that of the newbies! They are publicly announcing this themselves?&lt;/p&gt;&lt;p&gt;This raises the question-- why the hell should I invest in these other funds if the performance is so much worse, &lt;em&gt;even in the past&lt;/em&gt;? You can trade all the options you want on these things (yes, on the AMEX you can trade options on the newbs), but returns will not magically appear. Pile onto that the fact that the newbs are probably far more illiquid, and all I can see is a pretty bad deal. But hey, maybe I&#39;m missing something. Moving on!&lt;br /&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Other ETFs in the Portfolio&lt;/em&gt;: &lt;/p&gt;&lt;p&gt;PS also has a large basket of industry-specific ETFs (e.g. Biotech &amp;amp; Genome, Food &amp;amp; Beverage, Leisure &amp;amp; Entertainment, Pharmas, ...) and another basket of style-specific ETFs (e.g. Value, Growth, varying cap ranges). They have two funds which track the broader market with the Intellidex Enhancement. Finally, they&#39;ve got a couple of weird ones which don&#39;t really fit into any of the above classifications (a China ETF and an alternative energy ETF). The China ETF is basically filled with a bunch of ADRs. 
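For anyone who wants to sanity-check comparisons like the Sharpe and Beta figures above, both statistics are simple to compute from a return series. A minimal sketch; the monthly return series below are invented for illustration, not actual PEY or PID/PFM/PHJ data:

```python
import statistics

def sharpe(returns, rf_per_period=0.0, periods_per_year=12):
    """Annualized Sharpe ratio from per-period (e.g. monthly) returns."""
    excess = [r - rf_per_period for r in returns]
    return (statistics.mean(excess) / statistics.stdev(excess)) * periods_per_year ** 0.5

def beta(returns, market):
    """Slope of fund returns on market returns: cov(fund, mkt) / var(mkt)."""
    fm, mm = statistics.mean(returns), statistics.mean(market)
    cov = sum((f - fm) * (m - mm) for f, m in zip(returns, market)) / (len(market) - 1)
    return cov / statistics.variance(market)

# Hypothetical monthly returns for a fund and the market (made up)
fund = [0.020, -0.010, 0.015, 0.030, -0.005, 0.010]
mkt = [0.015, -0.020, 0.010, 0.025, -0.010, 0.005]
print(round(sharpe(fund), 2), round(beta(fund, mkt), 2))
```

A higher Sharpe alongside a lower beta, which is what the author observes for PEY versus the newer funds, means more return per unit of risk with less market exposure.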
I honestly haven&#39;t dug into it much, except to note that it&#39;s pretty heavily weighted towards oil right now. Me being my usual cynical self, I will just throw a few points out here to jab at this China ETF a little, and open up to discussion why this may be the case. Will leave the rest for another day. &lt;/p&gt;&lt;p&gt;Here are hypothetical historically backtested results for &quot;Dragon Halter&quot;: Beta is 1, Sharpe is 1.02, Correlation is 0.5. All statistics are based on the past 3 years relative to the &lt;a href=&quot;http://en.wikipedia.org/wiki/MSCI_EAFE&quot;&gt;MSCI EAFE&lt;/a&gt;, which is supposed to be representative of foreign stocks. As expected, these hypothetical statistics handily beat the MSCI EAFE, which has a Sharpe of 0.14 and a beta and correlation, by default, of 1. &lt;/p&gt;&lt;p&gt;They then show their hypothetical performance over the past year, and their &lt;em&gt;actual &lt;/em&gt;performance since inception, as of June 1st 2005. No statistics are given for these time periods, except that the performance was markedly worse. Over the past year they had a theoretical return of 4.92%, lagging the S&amp;amp;P and the EAFE. Since actual inception, they have lost money and are currently down 5.40%, while the EAFE and the S&amp;amp;P are up 9.87% and 1.17% respectively. Past performance is not indicative of future results; it seems for this China ETF, we may not even want the hypothetical past performance at all. &lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fundamental Indexing&lt;/strong&gt;&lt;br /&gt;The last thing I thought I would bring up is Bob Arnott from RA. I &lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/07/indexation_14.html&quot;&gt;wrote about him a while back myself&lt;/a&gt;. To recap, I was highly impressed with his study. That being said, it seemed he didn&#39;t fully flesh out the statistics driving the implied investment thesis pertaining to mean reversion. 
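Mechanically, Arnott's fundamental indexing idea is just a reweighting scheme: instead of weighting each holding by its market cap, weight it by its share of some aggregate fundamental such as sales or book value. A toy sketch of the contrast; the tickers and dollar figures are invented, not from Arnott's study:

```python
# Toy contrast of cap weighting vs fundamental (sales-based) weighting.
# ticker: (market_cap, sales) in $ billions -- all numbers made up
stocks = {
    "AAA": (300, 50),   # priced richly relative to sales
    "BBB": (100, 80),
    "CCC": (50, 40),
}

total_cap = sum(cap for cap, _ in stocks.values())
total_sales = sum(s for _, s in stocks.values())

# Each stock's weight under the two schemes
cap_w = {t: cap / total_cap for t, (cap, _) in stocks.items()}
fund_w = {t: s / total_sales for t, (_, s) in stocks.items()}

for t in stocks:
    print(t, round(cap_w[t], 3), round(fund_w[t], 3))
```

The point of the scheme: a stock trading at a high multiple of its fundamentals (like "AAA" here) gets a smaller weight under fundamental weighting than under cap weighting, which is why the strategy carries an implicit mean-reversion bet on valuations.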
&lt;/p&gt;&lt;p&gt;Well, apparently PS (in addition to Allianz) is jumping on the idea and creating an index around Arnott&#39;s research. The expense ratio will be 60 basis points, which isn&#39;t bad. That being said, something tells me Arnott will be the winner in this one, making some serious jack on the licensing fees off of two companies. Should one get off the ground (it doesn&#39;t matter to him which one, which would explain his licensing to two companies), he will probably be collecting a nice little check. &lt;/p&gt;&lt;p&gt;&lt;strong&gt;So What Is WSDT Thinking About?&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Looking back, there have been a handful of strategies mentioned in studies done by Professor Siegel. One, incidentally, was a dividend study. I guess that space seems taken! The other, I believe, was a study showing the historical performance of the original stocks in the Dow, and how they haven&#39;t done all that badly if one were to reinvest dividends, reinvest gains from acquired companies, and so on. I am doubtful that they would somehow base a strategy off of this. &lt;/p&gt;&lt;p&gt;And with Arnott essentially throwing his strategy out among ETF sponsors for them to tear at each other, I am not too sure they can really do much with a cap-adjustment strategy. &lt;/p&gt;&lt;p&gt;Index volatility-dependent autocorrelation trading anybody? What do you think Chilton? ;)&lt;/p&gt;&lt;p&gt;That&#39;s the latest from me on the enhanced ETF space. 
&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112735798943468565/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112735798943468565' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112735798943468565'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112735798943468565'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/ixdp-is-now-wisdomtree-investments.html' title='IXDP is now WisdomTree Investments (WSDT.PK); Deeper Look at PowerShares'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112719566415705890</id><published>2005-09-19T21:53:00.000-07:00</published><updated>2005-09-20T23:44:41.550-07:00</updated><title type='text'>Commentary On The Trouble With Value</title><content type='html'>I&#39;m sure the vast majority of you guys have read this already, but I find this sort of analysis to be really cool. I don&#39;t have a ton of time so I will briefly sum the paper up with some bullets and graphs before making some comments of my own.&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;Fact #1: The Market Has it Mostly Right-- P/E Ratio is, in fact, one of the best indicators of relative 1 year forward earnings. &lt;/strong&gt;&lt;br /&gt;-The graph below sums this one up nicely. 
What it says, for example, is that when the P/E ratio was in the bottom 10% of its history, earnings growth was 23% below average, and conversely, when the P/E ratio was in the top 10% of its history, earnings growth was 26% above the mean. I assume they either took the P/E ratio of the market with annual sampling over its history, or they took all companies available at all years, and annually sampled their P/E ratios. The latter might be subject to survivorship bias, depending on the integrity of the dataset.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;p&gt;&lt;img style=&quot;DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center&quot; alt=&quot;&quot; src=&quot;http://photos1.blogger.com/blogger/3390/1312/320/pe%20earnings%20growth.jpg&quot; border=&quot;0&quot; /&gt;&lt;strong&gt;Fact #2: Value Stocks have indeed outperformed the market historically&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;The graph below makes this clear. What it doesn&#39;t show, though, is how volatile this outperformance has been, which is where things start to get interesting. For those who are looking at this for the first time, what it says, for example, is that if all stocks over all years were thrown into buckets ordered by their P/E ratios, and one were to calculate next year&#39;s return relative to the market return that year, the highest bucket underperformed the market on average by 2%, while the lowest bucket outperformed the market by 3%. &lt;/p&gt;&lt;p&gt;&lt;img style=&quot;DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center&quot; alt=&quot;&quot; src=&quot;http://photos1.blogger.com/blogger/3390/1312/320/pe%20next%20yr%20return.jpg&quot; border=&quot;0&quot; /&gt;So what GMO did to dig into this a little more was compare the Russell 1000 Growth index versus the Russell 1000 Value index. 
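The bucketing exercise described above is easy to sketch: sort all stock-year observations by P/E, cut them into deciles, and average next-year return relative to the market within each decile. The observations below are random stand-ins, not GMO's data:

```python
# Sketch of a decile-bucket analysis: P/E vs next-year relative return.
# Each observation is (trailing P/E, next-year return minus market return).
# The data here is synthetic -- drawn at random purely for illustration.
import random

random.seed(0)
obs = [(random.uniform(5, 40), random.gauss(0, 0.05)) for _ in range(1000)]

obs.sort(key=lambda pair: pair[0])       # order observations by P/E
n = len(obs) // 10                       # observations per decile
deciles = [obs[i * n:(i + 1) * n] for i in range(10)]

# Average relative return within each P/E decile
avg_rel = [sum(r for _, r in d) / len(d) for d in deciles]
for i, a in enumerate(avg_rel, 1):
    print(f"decile {i}: {a:+.3%}")
```

With real data, the pattern GMO reports would show up as positive averages in the low-P/E deciles and negative averages in the high-P/E deciles; with the random data above, the averages hover around zero.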
The author assumes this to be a good proxy for value versus growth, so perhaps one might want to know exactly what the difference is between the two:&lt;/p&gt;&lt;p&gt;&lt;em&gt;Russell 1000® Growth Index: Measures the performance of those Russell 1000 companies with higher price-to-book ratios and higher forecasted growth values. It is constructed to provide an unbiased barometer of the large-cap growth market.&lt;/em&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Russell 1000® Value Index: Measures the performance of those Russell 1000 companies with lower price-to-book ratios and lower forecasted growth values.&lt;/em&gt;&lt;br /&gt;&lt;br /&gt;I would give Russell the benefit of the doubt on this one, but it should be noted that its index doesn&#39;t appear to be constructed purely on the basis of price-to-book, and is at best only loosely tied to price-to-earnings or price-to-sales. &lt;/p&gt;&lt;p&gt;GMO&#39;s piece then delves entirely into statistics on P/S and P/B, with P/S-- the metric providing the most &quot;trouble with value,&quot; yet in some sense the least valid, at least relative to P/E-- explained first.&lt;/p&gt;&lt;p&gt;This raises a question and a hypothesis.&lt;/p&gt;&lt;ol&gt;&lt;li&gt;If the comparison of R1000V versus R1000G is our proxy for value versus growth and both are primarily based on P/B, why would GMO adjust valuations using P/S and P/E? To be totally consistent, if they are going to adjust by P/S, they should construct an alternative index which splits stocks into two buckets based on P/S. Same goes for P/E and P/B. To do otherwise is inconsistent, even though the results may very well be similar! 
&lt;/li&gt;&lt;li&gt;It seems to me that the P/S example was put forth first because it elicited the most &quot;trouble with value.&quot; It should be noted that when Rob Arnott constructed a more &quot;pure&quot; S&amp;P index in that really awesome paper he wrote a while back, P/S simply wasn&#39;t as good a representative metric as P/E or P/Cash Flow, if I remember correctly. So this, coupled with the lack of consistency mentioned in (1), leads me to wonder whether things are necessarily as bad as they appear for value.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;strong&gt;Fact #3: Given the recent major outperformance of value relative to growth, value may not have all that much more room to outperform, and indeed may underperform if history is a guide for the future.&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;I do agree with their main hypothesis, which can basically be summed up with a few more bullet points.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Even though value has outperformed growth by 2.2% on average over the past 26 years (the history of the R1000V and R1000G), &lt;strong&gt;R1000V actually underperformed R1000G over the entire history as recently as 2000&lt;/strong&gt;!&lt;/li&gt;&lt;li&gt;Yes, this was due to there being a bubble in 2000. (This may take a couple of read-throughs.) If one were to hold constant the P/S or the P/E of the value stocks divided by the P/S or the P/E of the growth stocks over the whole time period from the indexes&#39; inception through 2000, value would have actually outperformed growth. The reason is that this relative P/S or P/E measure contracted big time, causing much of the underperformance of value relative to growth from inception to 2000.&lt;/li&gt;&lt;li&gt;The historical P/S and P/E of value relative to growth imply value is 1.7 and 1.0 standard deviations expensive relative to growth, respectively. This doesn&#39;t bode terribly well for value relative to growth. 
The next bullet goes into some numbers.&lt;/li&gt;&lt;li&gt;If one were to take all value stocks over all years and bucket them by P/S and P/E, one could compare the returns over the following year for those stocks net of the return earned on growth stocks over that same year. If one were to do so, one would get a graph as per the one below: &lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;img style=&quot;DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center&quot; alt=&quot;&quot; src=&quot;http://photos1.blogger.com/blogger/3390/1312/400/decile%20of%20valuation.jpg&quot; border=&quot;0&quot; /&gt;&lt;/p&gt;&lt;br /&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;What this says, for example, is that as one goes from the lowest decile of valuation (the lowest P/E or P/S bucket) to the highest, the outperformance of value relative to growth decreases. In the 10th bucket, the outperformance disappears when valuation is measured by P/E and goes negative by P/S! So they say this bodes poorly. And indeed, intuitively it does.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Comments&lt;/strong&gt;: &lt;/p&gt;&lt;p&gt;Honestly I don&#39;t have all that many comments beyond the inconsistencies and the journalistic concerns mentioned above. The only additional point I might add is that R1000V and R1000G are large cap indices. I would be interested to see how a more total market index plays out, as well as the numbers for small cap portfolios. &lt;/p&gt;&lt;p&gt;Do I believe the value premium has vanished? Nah. But some additional points do merit making. &lt;/p&gt;&lt;p&gt;As GMO mentioned, growth is more volatile and has a higher beta than value. If we have another tech bubble in 2006 (haha yeah right) and all stocks happen to go up like crazy, ah well. Value will underperform but there will be returns to be had. Small loss on an absolute basis. However, if the market tanks or treads water, do I want to be in growth relative to value? 
While I don&#39;t have the numbers in front of me, my gut says that value tends to outperform growth in bear markets because value is arguably less susceptible to multiple contraction. This would imply that from a defensive standpoint in this scenario, value would outperform. &lt;/p&gt;&lt;p&gt;Maybe I am fooling myself, but I tend to prefer the risk-reward characteristics of a value-biased portfolio. There are psychological tricks that I think we investors are subject to when looking at relative studies. &lt;/p&gt;&lt;p&gt;Relative studies have a difficult time judging absolute performance. &lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112719566415705890/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112719566415705890' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112719566415705890'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112719566415705890'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/commentary-on-trouble-with-value.html' title='Commentary On The Trouble With Value'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112702928061104642</id><published>2005-09-18T00:33:00.000-07:00</published><updated>2005-09-18T01:13:19.986-07:00</updated><title type='text'>How Did Social Networks Become So Popular So Fast? 
Some Thoughts.</title><content type='html'>One of the things I (along with numerous others I&#39;m sure) have been paying attention to is the white-hot popularity of social networking sites these days. I am no expert, and other sites like &lt;a href=&quot;http://www.minorityrapport.com&quot;&gt;Minority Rapport&lt;/a&gt;, written by my good friends Doug Sherrets and Jon Turow, have done an excellent job of tracking their growth and evolution, but I thought I&#39;d share some thoughts on why I think things have evolved in the direction they have. To put it simply, we have been bombarded by technologies which have allowed us to communicate increasingly easily with one another. It&#39;s only natural that our attention has turned to studying the dynamics of social networks and the rise of well-constructed social networking websites. Society needs a structured way to leverage its newfound ability to communicate, and social networking websites offer us this leverage. In this context, it will be interesting to see how social networking continues to evolve. I offer some thoughts at the end.&lt;br /&gt;&lt;br /&gt;Below I expand on this idea with a network analysis twist.&lt;br /&gt;&lt;br /&gt;In the not so distant past, the primary means through which a person could connect with someone else was face-to-face conversation. This had a marked impact on the dynamic of a person’s social network—it was highly dependent on one’s physical location. Because the typical person back then was also highly constrained in his or her ability to move from place to place, we had little ability to surmount geographic constraints. The social benefit of understanding social network dynamics was small because social networks were, simply put, not dynamic. Contrasting how things were with how things are leads to an important conclusion. 
The social benefit of understanding social network dynamics is heavily dependent on our ability to communicate with one another, and as a result, has been heavily driven by technological change.&lt;br /&gt;&lt;br /&gt;The advent of telephone technology eliminated the need to be within a stone’s throw of someone to communicate with them, increasing our ability to communicate. We could connect with important people we hadn’t even seen before as long as we knew their phone number (perhaps through someone in our social network!). The advent of transportation technology markedly increased our ability to communicate because it increased our geographic range of motion. The advent of email technology allowed people to structure their thoughts in the form of a letter and send it to someone across the globe within seconds. The advent of instant messaging technology went one step further, allowing people to have multiple interactive conversations with each other simultaneously. Because we have been bombarded by technologies allowing us to communicate increasingly easily with others, it is only natural that our attention has turned to studying the dynamics of social networks. Society needs a structured way to leverage its newfound ability to communicate.&lt;br /&gt;&lt;br /&gt;However, looking at individual technologies in isolation misses the lion’s share of how technological advance has aided communication, which in turn has driven the importance of social networks. As Watts stated in “The Connected Age,” there is only so much that can be learned about the dynamics of a network from a study of the individual component pieces—one needs to think about the network dynamics as a whole. The same concept applies to how technology has driven the growth of communication. 
While it is true, for example, that cars increased our geographic range of motion, the coupling of cars with cell phones allows us to remain in touch with the people we meet in far-away areas when we return home. The same can be said of social networking websites. Users are far more interested in social networking sites because cars, cell phones, email and instant messaging services make it all the easier for users to contact and communicate with the people they see on a website like facebook. The evolution of all these technologies in conjunction with one another has driven communication and the study of network dynamics far more than the component technologies could possibly explain in isolation from one another.&lt;br /&gt;&lt;br /&gt;What impact does all of this have on corporations? I think it makes a hell of a difference! Communication flow is something which can be monitored, and can lead to quantum leaps in corporate efficiency, IMHO. While individuals may have privacy concerns (and rightly so), there is a goldmine of information which can quite easily be made available to corporations who so desire to scrape it up.&lt;br /&gt;&lt;ul&gt;&lt;li&gt;IT crises are exacerbated by communication bottlenecks, so wouldn&#39;t it be helpful to know where those bottlenecks are most likely to occur, probabilistically speaking, by analyzing the network flow of emails to and from the IT department? &lt;/li&gt;&lt;li&gt;Stress testing with the proper communication monitors in place could allow corporations to simulate such crises, track communication flow in real time, and improve it with a solid post-mortem analysis. &lt;/li&gt;&lt;li&gt;Corporations could identify the communication gaps which may exist between themselves and other corporations. Knowledge of such gaps could be indicative of future problems or of potential vulnerability, and could be a stimulus for value-added change. 
&lt;/li&gt;&lt;li&gt;The list goes on and on. These are all changes that are most definitely possible now given the current state of technology. While no one may be acting on this as much as they could, it wouldn&#39;t surprise me at all if we were to see more of a concerted move in this direction at the expense of personal privacy. &lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Social networks contain a wealth of valuable information. The scary part is that we must lay ourselves bare to unlock the value. Given how competitive the business world is right now, I am not too optimistic about the implications for privacy-- but hey, at least our economy may run more smoothly.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112702928061104642/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112702928061104642' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112702928061104642'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112702928061104642'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/how-did-social-networks-become-so.html' title='How Did Social Networks Become So Popular So Fast?  
Some Thoughts.'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112616226177329560</id><published>2005-09-07T23:51:00.000-07:00</published><updated>2005-09-08T00:57:15.533-07:00</updated><title type='text'>WisdomTree Investments: September 9th 2005 Update</title><content type='html'>&lt;span style=&quot;font-family:Georgia;&quot;&gt;I will get back to Lo&#39;s paper soon, but I just thought I would update my &lt;/span&gt;&lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/08/taking-look-at-index-development.html&quot;&gt;prior post&lt;/a&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt; on IXDP, Index Development Partners, WisdomTree Investments or whatever else you want to call it. &lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt; &lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Today, they announced the hiring of Richard Morris as the Deputy General Counsel (article &lt;/span&gt;&lt;a href=&quot;http://biz.yahoo.com/bw/050907/75256.html?.v=1&quot;&gt;here&lt;/a&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;).  This new addition interests me, because he seems to fill part of the hole I mentioned in my prior post; that is, regulatory issues and concerns.  Morris was senior counsel at Barclays as Barclays went out to launch its very first iShare, which has since become the 800 lb. gorilla in the ETF market. His experience at the SEC further reinforces the unique regulatory skill-set he can bring to the table at IXDP.  
Putting it all together, their management team is now as follows: &lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;CEO: Jonathan Steinberg &lt;/span&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Chairman: Michael Steinhardt  &lt;/span&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Director of Fund Services: Michael Jackson &lt;/span&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;CFO: Mark Ruskin &lt;/span&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;ETF Distribution: Ray DeAngelo &lt;/span&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Senior Investment Strategy Advisor: Jeremy Siegel &lt;/span&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Senior Analyst: Jeremy Schwartz &lt;/span&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Deputy General Counsel: Richard Morris &lt;/span&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Board of Directors: Jeremy Siegel, Frank Salerno, James Robinson IV, Michael Steinhardt, Jonathan Steinberg&lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Looked at another way, they now have 7 senior managers.  Two deal primarily with &lt;/span&gt;&lt;strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;general operations &lt;/span&gt;&lt;/strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;(Steinberg and Ruskin).  One deals primarily with more &lt;/span&gt;&lt;strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;ETF-specific operations &lt;/span&gt;&lt;/strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;(Jackson).  
Two are solely geared towards the &lt;/span&gt;&lt;strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;research and development &lt;/span&gt;&lt;/strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;of innovative indexes (Siegel and Schwartz).   One will deal primarily with the legal and regulatory issues associated with &lt;/span&gt;&lt;strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;ETF sponsorship &lt;/span&gt;&lt;/strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;(Morris).  One is geared primarily towards &lt;/span&gt;&lt;strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;marketing &lt;/span&gt;&lt;/strong&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;newly sponsored ETF’s to various clients and platforms—brokerages, retirement platforms, individual investors, hedge funds, mutual funds (DeAngelo).  Thus, the management team seems to flow from the index creation process all the way to the marketing of funds to a wide array of investors.  &lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Key take-aways to me at this point:&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;Regulatory concerns seem less of a constraint to me than they did pre-Morris. &lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-However I am still confused as to how they can go about expediting the sponsorship process. &lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;IXDP’s management team has far more depth and breadth than that of PowerShares, which may allow for more explosive growth post-sponsorship than PS could ever dream about. &lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-PS now has around $500M AUM, 4 partners and around 8 employees. 
It has 4 ETF’s and 24 awaiting approval (article &lt;/span&gt;&lt;a href=&quot;http://moneycentral.msn.com/content/specials/P109367.asp?special=0406etfs&quot;&gt;here&lt;/a&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;).&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-Its head portfolio manager doesn’t have quite the same reputation as Jeremy Siegel. &lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-PS’s distribution and marketing capabilities seem relatively constrained. &lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;My initial estimates for cost were no good; way too low.  &lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-My gut is saying that they’ll need a lot more than $4M in steady state to run the operation they’re looking to run.  The size and stature of the management team imply very ambitious plans. &lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;The kicker, it seems, is whether or not enhanced indices will attain proof of concept.  And can IXDP pay the education costs necessary to spread the word, as BGI has (and PS currently is not)?&lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-I am not yet sure PS has proven that enhanced indices ‘work’; will these indices really outperform over the longer term?  &lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-PS doesn’t have the infrastructure to do the marketing necessary to educate consumers properly.  I don’t expect IXDP to get any real substantial spillover education benefits from PS. 
&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-BGI, as a case in point, has spent large amounts of money marketing its products—seminars, white papers, advertisements, and more (article &lt;/span&gt;&lt;a href=&quot;http://moneycentral.msn.com/content/specials/P111521.asp?special=0406etfs&quot;&gt;here&lt;/a&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;).  This is in addition to large sums of money they’ve spent to construct and rebalance the $115B AUM in the 99 ETF’s that currently trade under the BGI name.  BGI itself has around 2,000 employees. &lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;The future for IXDP still seems bi-modal to me. &lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-Cost structure getting large, will have to get larger. &lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;-High education costs with little help at this point (unless they or their indices are bought out). &lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-family:Georgia;&quot;&gt;I haven’t had the time to do proper due diligence on just how costly this will be, but one might want to take a step back and think about just how many of the first movers marketing truly new products were the eventual beasts in the space they were moving to occupy.  BGI had marked advantages, most notably a large base of capital which it could fall back on to pursue a longer term goal.  Will IXDP, through Steinhardt and other financiers, be able to secure enough financing to do the same as an upstart with no parent? 
&lt;/span&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112616226177329560/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112616226177329560' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112616226177329560'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112616226177329560'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/wisdomtree-investments-september-9th.html' title='WisdomTree Investments: September 9th 2005 Update'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112607807164441174</id><published>2005-09-07T00:27:00.000-07:00</published><updated>2005-09-07T00:37:18.256-07:00</updated><title type='text'>When Andy Lo Says There are Large Systemic Risks in the Hedge Fund Space, It Might Be Time to Start Worrying-- Introduction</title><content type='html'>&lt;strong&gt;When Andy Lo Says There are Large Systemic Risks in the Hedge Fund Space, It Might Be Time to Start Worrying&lt;/strong&gt;&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Background on Andy Lo&lt;/em&gt;&lt;br /&gt;Let me preface by saying that Andy Lo is one of the smartest people in the financial world, IMHO.  A financial engineering professor at MIT, he has written numerous &lt;a href=&quot;http://web.mit.edu/alo/www/articles.html&quot;&gt;papers&lt;/a&gt; and &lt;a href=&quot;http://web.mit.edu/alo/www/books.html&quot;&gt;books&lt;/a&gt; on computational finance and financial engineering.  
He and Wharton Professor MacKinlay co-wrote the famous paper a while back on the notoriously high serial autocorrelation of the market (back when that was actually a tradable phenomenon, prior to the autocorrelation’s subsequent demise in absolute terms).  He’s currently at the helm of a $400M hedge fund, &lt;a href=&quot;http://www.alphasimplex.com/public/public/public.shtml&quot;&gt;AlphaSimplex&lt;/a&gt;.  And of course, he’s received numerous &lt;a href=&quot;http://www.fenews.com/fen26/lo.html&quot;&gt;awards&lt;/a&gt;. &lt;br /&gt;&lt;br /&gt;&lt;em&gt;Summary of Andy Lo’s Paper&lt;/em&gt;&lt;br /&gt;Anyways, he wrote an interesting paper back in August (you can read it &lt;a href=&quot;http://web.mit.edu/alo/www/Papers/systemic2.pdf&quot;&gt;here&lt;/a&gt;) regarding the risk/reward profile of hedge funds, on average, relative to traditional investments and the implications of that profile for systemic risk in the financial markets.  &lt;br /&gt;&lt;br /&gt;Specifically, he creates metrics to track liquidity risk and the importance of leverage.  These are obviously highly tied to systemic risk—should a highly levered investment vehicle experience a sharp loss and the bank lending it funds decide to retract some of that credit, the vehicle will be forced to liquidate positions it may not want to liquidate, leading to major market impact.  And all else equal, the less liquid the assets being invested in, the more market impact there will be.  That is the essence of what happened to LTCM back in ’98.  Sharp losses, credit retraction, forced selling, market instability. &lt;br /&gt;&lt;br /&gt;So if hedge funds happen to be more highly levered and are investing in less liquid investments, all else equal, we may want to start worrying that the probability of an LTCM-type blowup will go up.  So do hedge fund returns correlate on the downside?  
More sophisticated readers may want to skip the sections with the heading “Basic Fact”—they don’t really bring much new to the table.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Basic Fact #1: Current Dynamic Risk Measurement is Lacking&lt;/em&gt;&lt;br /&gt;The first thing that Lo establishes is the inadequacy of many of the more common risk metrics, especially when they are evaluating active trading strategies.  He poses the question—imagine someone were to come to you with a hedge fund that has an average monthly return 2.6x that of the S&amp;P while “risk” as measured by standard deviation is only 1.6x that of the S&amp;P, with a sixth as many down months and twice the Sharpe ratio of the S&amp;P and only 60% correlation to the S&amp;P over a 7 year period (1992 to 1999), would you seriously consider investing in that fund?  Well, a strategy that happens to match that payoff profile is simply selling puts on the S&amp;P according to a mechanical rule.  And while no fund would actually go about doing exactly that, there are very creative ways to do effectively the same thing so that no one knows what the hell they’re doing.&lt;br /&gt;&lt;br /&gt;Obviously Lo is hitting on small sample bias in the presence of trading strategies with high skew and kurtosis (tail risk; a strategy that typically has many small positive payoffs and a few really big negative ones).  Taleb has been preaching this for years.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Basic Fact #2: Downside Correlation Can Cause Market Neutral Hedge Fund Return Correlation to Go from 1% 99.9% of the Time to 99% .1% of the Time.&lt;/em&gt;&lt;br /&gt;Lo calls it phase-locking risk and explains it very simply and elegantly by considering two hypothetically market neutral hedge funds.  
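That two-fund setup is easy to simulate. Below is a toy sketch of the phase-locking idea only — the regime probability, shock sizes and sample length are my own hypothetical choices, not Lo's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000  # hypothetical number of return observations

# Idiosyncratic returns of two nominally market neutral funds
eps1 = rng.normal(0.0, 0.01, T)
eps2 = rng.normal(0.0, 0.01, T)

# Rare stress regime: the indicator fires with small probability
stress = rng.random(T) < 0.0005
# A common shock that hits both funds, but only in the stress regime
z = rng.normal(0.0, 0.05, T)

r1 = eps1 + stress * z
r2 = eps2 + stress * z

# Unconditionally the two funds look nearly uncorrelated...
uncond = np.corrcoef(r1, r2)[0, 1]
# ...but conditional on the stress regime they move almost in lockstep
cond = np.corrcoef(r1[stress], r2[stress])[0, 1]
print(uncond, cond)
```

With these made-up parameters the unconditional correlation comes out near zero while the stress-conditional correlation is near one — exactly the "low correlation almost all of the time, very high correlation the rest" pattern the heading describes.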
I’m too lazy to write out the math, but the key takeaways are as follows.&lt;br /&gt;&lt;ol&gt;&lt;li&gt;During times of market stress, market neutral funds which ordinarily have arbitrarily low correlations to one another can experience arbitrarily &lt;em&gt;high &lt;/em&gt;correlations. &lt;/li&gt;&lt;br /&gt;&lt;li&gt;Small sample bias again hampers proper estimation of the “true” statistical properties of the moving parts involved. &lt;/li&gt;&lt;br /&gt;&lt;li&gt;In fact, the inherent non-stationarity of real-world processes can completely preclude proper estimation of conditional downside volatility and probability (this is just my opinion). &lt;/li&gt;&lt;br /&gt;&lt;li&gt;In any case, more sophisticated risk metrics are needed in the face of basic fact #2, which must be able to measure the non-linear impact of having, say, heavy credit or sector exposure.  Or how about capturing the systemic risk of investing in an emerging markets fund and a fixed income fund relative to investing in two market neutral fixed income funds? &lt;/li&gt;&lt;/ol&gt;&lt;br /&gt;&lt;em&gt;Non-Basic Fact #1: The Dynamics of Hedge Funds Do Indeed Differ from Traditional Investments&lt;/em&gt;&lt;br /&gt;A huge number of studies have been done and have come to the following tentative conclusions:&lt;br /&gt;&lt;ol&gt;&lt;li&gt;Hedge fund returns have abnormally high positive serial autocorrelation.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;“Market neutral” hedge funds may not be all that market neutral when one moves away from ‘beta’ towards a perhaps more applicable measure of market risk. &lt;/li&gt;&lt;br /&gt;&lt;li&gt;Hedge fund performance is indeed inversely related to size.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Operational risk (fraud in particular) is the primary cause of hedge fund blow-ups.&lt;/li&gt;&lt;/ol&gt;The list goes on.  
The point is that one doesn’t typically see these sorts of characteristics in traditional investments.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Statistical Analysis of Hedge Fund Returns Databases:&lt;/em&gt;&lt;br /&gt;Lo then goes into some very interesting hedge fund returns EDA (exploratory data analysis).  Below are some of the truly cool statistical facts from within the CSFB/Tremont Indexes:&lt;br /&gt;&lt;ol&gt;&lt;li&gt;Historical average returns vary widely between strategies, with dedicated short sellers on one end of the spectrum at -0.69% and global macro at the other end with 13.85% (the latter fact surprised me greatly!). &lt;/li&gt;&lt;br /&gt;&lt;li&gt;Historical correlations with the S&amp;P also vary widely between strategies, with Long/Short Equity funds on one end at 57.2% (this seems dangerously high) and dedicated short sellers at the other at -75.6%. &lt;/li&gt;&lt;br /&gt;&lt;li&gt;Rolling correlation is on the rise for multi-strategy HF’s and funds of funds, which makes sense—as assets under management go up, it becomes increasingly hard to &lt;em&gt;not &lt;/em&gt;be like the market! &lt;/li&gt;&lt;/ol&gt;&lt;br /&gt;On to the main event—can we measure ‘hidden’ exposures like downside correlation risk, fat tail risk and illiquidity risk?  
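As a quick aside, the rising-rolling-correlation story in point 3 of that list is easy to reproduce with a toy series in pandas. Everything here is hypothetical (the weekly frequency, the 52-week window, and a fund beta that drifts up with size are my own illustrative choices, not the CSFB/Tremont data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
T = 520  # roughly ten years of weekly returns (hypothetical)

market = pd.Series(rng.normal(0.001, 0.02, T), name="market")

# A fund whose market beta drifts upward over time, as might happen
# when growing assets under management make it harder not to be the market
beta = np.linspace(0.0, 0.8, T)
fund = pd.Series(beta * market.to_numpy() + rng.normal(0.0, 0.01, T), name="fund")

# One-year (52-week) rolling correlation with the market
roll = fund.rolling(52).corr(market).dropna()
print(roll.iloc[0], roll.iloc[-1])  # the rolling correlation trends upward
```

The same rolling-window calculation applied to real index returns is what reveals the creep toward market-like behavior as funds grow.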
The summary and my thoughts on his results and their implications to come soon.</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112607807164441174/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112607807164441174' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112607807164441174'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112607807164441174'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/when-andy-lo-says-there-are-large.html' title='When Andy Lo Says There are Large Systemic Risks in the Hedge Fund Space, It Might Be Time to Start Worrying-- Introduction'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112579135089951646</id><published>2005-09-03T15:49:00.000-07:00</published><updated>2005-09-03T16:49:10.916-07:00</updated><title type='text'>Thoughts on the Nature of Good Analysis</title><content type='html'>Sorry for the lack of posts; I&#39;ve been a little busy.  I&#39;ve been doing a lot of thinking about the nature of good analysis, and the pros and cons of being systematic relative to a more unstructured analysis.  I&#39;ve reached a few tentative conclusions.&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;As I&#39;ve said before, I believe the rational paradigm is to &lt;strong&gt;be systematic when it&#39;s applicable to be systematic&lt;/strong&gt;.  Emphasis on applicability.  Just because I have a certain skillset doesn&#39;t mean that that skillset will actually be useful in all contexts!  
That is most definitely true with value investing.  &lt;/li&gt;&lt;li&gt;As Charlie Munger famously said, &lt;strong&gt;it really helps to have a lattice to structure the information that you take in&lt;/strong&gt;.  The information &quot;sticks&quot; better as a result, and it opens the door to levels of analysis that are inconceivable under other approaches.  For example, let&#39;s say you&#39;re looking at some company&#39;s balance sheet and you see that they have x square feet in land on their books.  How do you process that data point?  Well, it&#39;d probably be more useful to consider that in a &quot;breakup value&quot; paradigm than from a DCF standpoint.  Or when I&#39;m looking at stock prices, what information is there to be gained from that?  It might be helpful to see how correlated your stock is to other stocks and to the overall market. The whole point is that it really does help to have those paradigms-- that latticework-- in your head so that you can turn that data into real usable information. It&#39;s very helpful to build the proper paradigms for thought.&lt;/li&gt;&lt;li&gt;There are indeed benefits to wading through information in an unstructured way.  Even if, at this point in time, we have a pretty good general idea of how we should be processing information, things change.  What was important yesterday may not be quite as important today.  Or entirely new paradigms may form. All this implies that it might be a good idea to always keep an ear to the ground and scour through bucketloads of information that may or may not be all that helpful, just to make sure that you haven&#39;t overlooked something which may be of the utmost importance. &lt;/li&gt;&lt;/ul&gt;&lt;p&gt;IMHO, thinking about how exactly we should be processing data is extremely useful.  
Have you ever had that feeling after reading every article in a magazine or newspaper that it all simply went in one ear and out the other, and none of the information really stuck?  I sure have.  Useful paradigms are the solution.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112579135089951646/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112579135089951646' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112579135089951646'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112579135089951646'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/09/thoughts-on-nature-of-good-analysis.html' title='Thoughts on the Nature of Good Analysis'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112504214515418383</id><published>2005-08-26T00:07:00.000-07:00</published><updated>2005-08-26T00:42:25.170-07:00</updated><title type='text'>On the Nature of Outliers: Quant Vs. 
Fundie Analysis</title><content type='html'>I&#39;ve been thinking a lot about the implications of one of my prior posts, &quot;&lt;a href=&quot;http://thelearningblog123.blogspot.com/2005/08/useful-applications-for-quantitative.html&quot;&gt;Useful Applications for Quantitative Ability with Fundamental Analysis&lt;/a&gt;.&quot;  I think there are a few things worth mentioning about the nature of residuals versus outliers, and how that plays into this whole schema I&#39;ve written about god knows how many times.&lt;br /&gt;&lt;br /&gt;The bottom line:&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Quants may toss outliers to denoise their data so that they can properly estimate the &quot;true&quot; relationship between two variables.  Once they have established the &quot;truth,&quot; they can simply trade the noise.  That is the game of tons of prop trading shops, and it does make some sense. I have done it myself while at a big bank. And we traded a ton of bonds. &lt;/li&gt;&lt;li&gt;Fundamental analysts (&#39;FA&#39;s&#39;) actively seek those same outliers which were thrown out.  Rather than trade continuously, they sit on their hands most of the time.  And when those outliers surface, the FA&#39;s put on their positions in size.  This also makes sense. &lt;/li&gt;&lt;/ul&gt;Those simple facts have huge implications for the applicability of quantitative methods in a qualitative setting!&lt;br /&gt;&lt;br /&gt;Sure, I could de-noise my time series prior to calculating rolling correlations of every stock on every other stock, and sure I could calculate the correlations of the wavelet spectra, but while that may be more technically precise, it first of all dramatically increases the computational time. But even assuming computational time weren&#39;t an issue, it&#39;s not really hitting at the point. &lt;br /&gt;&lt;br /&gt;Traditional correlation and the correlation of wavelet spectra are not orthogonal concepts.  
They are generally jabbing in the same direction.  If that is true, then consider what the goal of analytics is in a deep value setting, and what deep value investors are attempting to do.  They are attempting to find situations that are completely out of the ordinary, and are content to sit on their hands until they find such a situation. &lt;br /&gt;&lt;br /&gt;If I am looking for a situation that is truly out of the ordinary, then statistics and hardcore mathematics will not help me 99% of the time, because we aren&#39;t trading &lt;em&gt;noise&lt;/em&gt;, we are trading &lt;em&gt;outliers&lt;/em&gt;.  Whatever intuitive concept I am trying to pick up with statistics would have to be so extraordinary that at that point, any statistic generally pointing in a similar direction should be flashing red lights!&lt;br /&gt;&lt;br /&gt;I know that a lot of times, a good investment comes as a result of many small oddities lumped on top of each other.  In this sort of situation it does help to have the additional precision.  But the driving notion is to keep in mind the diminishing returns to precision in a value framework.</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112504214515418383/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112504214515418383' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112504214515418383'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112504214515418383'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/08/on-nature-of-outliers-quant-vs-fundie.html' title='On the Nature of Outliers: Quant Vs. 
Fundie Analysis'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-14484258.post-112495631174976714</id><published>2005-08-25T00:51:00.000-07:00</published><updated>2005-08-25T01:10:26.756-07:00</updated><title type='text'>Taking a Look at Index Development Partners</title><content type='html'>&lt;strong&gt;Taking a Look at Index Development Partners (&lt;/strong&gt;&lt;a href=&quot;http://finance.yahoo.com/q/bc?s=IXDP.PK&quot;&gt;IXDP.PK&lt;/a&gt;&lt;strong&gt;)&lt;/strong&gt;&lt;br /&gt;Below I take a look at IXDP and how it fits in the ETF industry. I conclude that while it’s not a deep value investment, it is at the least an interesting stock. At its current valuation, though, it is kinda ridiculous, unless they do something non-ETF related. I start with a write-up I did back in April 2005 on the ETF industry in general with a focus on equity ETF’s and IXDP in particular. At the bottom I update the situation briefly.&lt;br /&gt;&lt;strong&gt;&lt;em&gt;Size:&lt;/em&gt;&lt;/strong&gt;&lt;br /&gt;Total ETF assets accounted for &lt;strong&gt;$226.21 billion &lt;/strong&gt;at the end of 2004, a 49.8% increase over the level of the previous year, according to the ICI.&lt;br /&gt;&lt;strong&gt;&lt;em&gt;Growth Prospects:&lt;/em&gt;&lt;/strong&gt;&lt;br /&gt;Some industry experts said it would be hard for ETFs to keep up that kind of growth without new products. Unfortunately, most equity indexes are taken, which means that it will be difficult for ETF providers to come out with new domestic-equity funds. But new products will come. The difference is, some will probably have an actively managed flavor. 
The first step towards actively managed ETFs is going to be enhanced indexing, which is where Index Development Partners (IXDP.PK) enters the picture.&lt;br /&gt;&lt;em&gt;Comparable:&lt;/em&gt;&lt;br /&gt;PowerShares Capital Management is one company that constructs enhanced indexes. It currently has seven new enhanced indexes based on Intellidex, a quantitative methodology, and now has around $500M in assets (PowerShares is planning to release a few new ETFs over the summer, some of which are based on Intellidex and some of which are going to be more ‘traditional’, they say). Rydex is mostly involved with passive strategies as well, but it has some pseudo-active strategies too. Its RSP S&amp;P Equal Weight Index (rebalanced periodically) currently has ~$760M in assets.&lt;br /&gt;&lt;em&gt;Avenues for Growth:&lt;/em&gt;&lt;br /&gt;Industry analysts, however, stressed that while steady streams of new products are expected, they aren&#39;t necessary for the industry&#39;s assets to increase. While ETFs grew tremendously last year, total assets are small compared with the more than $8 trillion in mutual funds. If one holds the supply of wealth fixed, this means one big potential source of asset growth comes from taking sales away from the mutual fund industry.&lt;br /&gt;The key to capturing more assets is education. Another potential key is the inclusion of ETFs on retirement platforms. Thus, growth is as much an exercise in marketing and business strategy as it is one in quantitative finance. Michael Steinhardt has specifically stated his interest in targeting all the important constituencies—brokerages, retirement platforms, individual investors, hedge funds, everyone.&lt;br /&gt;&lt;strong&gt;&lt;em&gt;My Spin:&lt;/em&gt;&lt;/strong&gt;&lt;br /&gt;Investment strategies can be broken down into three broad categories—passive, pseudo-active, and active. The passive category has been largely exhausted. BGI was the victor in this field with its portfolio of iShares. 
There may still be some room for growth in passive bond and international strategies, but passive domestic equities are pretty much entirely covered. Active strategies are more the domain of hedge funds, which allow for complete investment flexibility, or other investment vehicles. Active strategies would be a difficult market to enter because the space is highly competitive.&lt;br /&gt;However, the same isn’t necessarily true for pseudo-active strategies. What puts them in a unique competitive position is two-fold:&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;They are active enough to “fine tune” passive index investment, potentially augmenting the risk-return characteristics of the investment with simple, generally quantitative rules.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;They are passive enough to avoid the often onerous expenses charged by hedge fund and mutual fund managers alike. &lt;/li&gt;&lt;/ol&gt;There are currently two big players in the pseudo-active ETF market segment—PowerShares and Rydex (only some of Rydex’s portfolios however). While it is true that Barclays, Vanguard, and State Street hold the lion’s share of assets in the ETF market overall, pseudo-active strategies are fundamentally of a different type from the sort of ETFs currently being offered.&lt;br /&gt;&lt;strong&gt;&lt;em&gt;The Business Model:&lt;/em&gt;&lt;/strong&gt;&lt;br /&gt;So this is the bottom line for ETF success as a business, as far as I can see it. Their main goal is to get huge amounts of investment in their funds so that they can collect the expense fee. ETF companies usually have a wide array of funds, which leads me to wonder what the costs and requirements are for registering an ETF. Whatever the requirements are, by casting a wide net of distinct ETF’s, the ETF companies can attract many disparate investment groups who wouldn’t otherwise have invested in their products. 
The shotgun approach is probably a solid way to grow assets in the long run.&lt;br /&gt;Cases in point:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Macro hedge funds and quant funds are heavy players in SPX and other passive index ETF’s, for obvious reasons. Liquidity is already huge so ETF’s are in some ways able to capture the oft-cited “hedge fund wave.”&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Hedge funds and individual investors can make sector-specific bets with iShares, so BGI has created a ton of sector-specific iShares. &lt;/li&gt;&lt;br /&gt;&lt;li&gt;Individual investors want broad exposure to the markets without (1) getting charged like crazy for investing in many disparate stocks, and (2) needing to do DD on what stocks lead to the most “representative” mix for proper diversified exposure.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Retirement planners can just tuck money away in ETF’s instead of more expensive mutual funds, or riskier actively managed investments. &lt;/li&gt;&lt;/ul&gt;&lt;em&gt;Expense Ratio Case Study—Rydex:&lt;/em&gt;&lt;br /&gt;Rydex generates revenues daily from its expense fee. As an example, take the Rydex equal-weighted S&amp;amp;P tracker (RSP). It has $765M under management and charges 40bps in the following way—Monday through Thursday count as 1 day, and Friday counts as 3. So they get .4%/365 of total assets on Monday through Thursday, and 1.2%/365 on Friday. So that fund generates around $3M in revenues annually, spread evenly throughout the year. Not much, but first of all, Rydex has $10B under management. Also, just imagine Barclays with its $110+B in assets, charging more than normal (the majority charge around 70bps), pulling in a steady $770M. Granted, they probably need to spend a good amount on transaction costs, but for crying out loud, they are doing passive indexing, and SPY is able to charge a meager 11bps! I would be surprised if they are paying more than 15bps, because even the SPY is generating a profit. 
This would mean that with the bulk of Barclays’ iShares, they are probably making around 55bps, implying pre-tax profits on the order of $605M.&lt;br /&gt;There is one other cost that should be mentioned: licensing fees. When a company launches an ETF tracking a particular index, say the S&amp;P, the ETF company will have to enter into a licensing agreement with S&amp;amp;P. This cost will probably be in terms of basis points. Vanguard ran into problems because of this back in 2001, as it was first launching its VIPER ETFs, which were referenced off of the S&amp;P500. They believed that their existing agreements with S&amp;amp;P were enough, and further licensing agreements weren’t needed. S&amp;P disagreed. Having a cheap expense ratio was of crucial importance to Vanguard, which is why this point ended up being hotly contested—Vanguard didn’t want to have to mark up their expense ratio by another handful of basis points.&lt;br /&gt;Now things are quite different for enhancement strategies, because their stated goal is not solely to be representative of an index, or to have a broad exposure to something or other. If all you wanted was an ETF with broad exposure to something or other, there is probably a passive ETF trading with a cheaper expense ratio right now. The draw of enhancement strategies lies in the potential for 100-200bps of upside relative to the reference index, accepting the sad fact that the expense ratio will probably be higher than that of their passive counterparts. Before I go into potential markets, it’s probably of value to do some back-of-the-envelope valuation calculations using PowerShares, the only true enhancement-focused player in the market right now, as a comparable. PowerShares was founded in August 2002. It now has around $500M in assets and 11 publicly traded funds. Its 2 oldest funds are less than 2 years old. Bond hopes to have between $2B and $3B by year’s end. 
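To put some quick numbers on the expense-fee economics discussed above, here is a rough sketch in Python. The two helper functions, the 55bp net spread, and the $3M fixed-cost figure are my own illustrative assumptions, not reported financials:

```python
# Back-of-the-envelope ETF sponsor economics (illustrative only).
# Revenue is an expense ratio charged on assets under management
# (AUM); profit is that revenue net of an assumed fixed cost base.

def annual_fee_revenue(aum, fee_bps):
    """Annual expense-fee revenue on a given AUM, fee in basis points."""
    return aum * fee_bps / 10_000

def breakeven_aum(fixed_costs, net_spread_bps):
    """AUM at which fee revenue, net of per-asset costs, covers fixed costs."""
    return fixed_costs * 10_000 / net_spread_bps

# Rydex's RSP: roughly $765M of AUM at 40bps is about $3M a year.
rsp_revenue = annual_fee_revenue(765e6, 40)

# Barclays' iShares at an assumed 55bp net spread on $110B: about $605M pre-tax.
ishares_profit = annual_fee_revenue(110e9, 55)

# A sponsor netting 40bps with, say, $3M of fixed costs breaks even
# at $750M of AUM.
needed = breakeven_aum(3e6, 40)
```

Nothing here is sensitive to the exact figures; the point is simply that fixed costs divided by the net fee spread gives the break-even asset base.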
Similar to IXDP, PowerShares received $10M in venture capital this year. PowerShares charges a maximum expense ratio of .6% (yet again implying big profits to Barclays).&lt;br /&gt;IXDP now has a market cap of $10M. Assume that it makes a 40bp net spread (expense ratio minus transaction costs), an estimated 15bps below Barclays. Taking a look at operating expenses, before they stopped filing they were incurring around $310k in costs per quarter, or $1.24M at that run rate. Those are all probably research costs. When things start getting interesting, they will also be incurring a lot more business expenses—flying from place to place, lobbying to get advertising or to get on one platform or another—so the past is not a good predictor of the future in this case. Let’s say $3M steady state operating expenses just to throw out a number. If the above assumptions are true, they will need to have $750M under management to break even. $1B under management implies $1M in pre-tax profit. $10B implies $37M in pre-tax profit. Using PowerShares as a rough guideline, if IXDP successfully releases a few strong indices, it could hit $1B in a couple of years. IXDP has some superstar backing—the star power of the likes of Steinhardt, Steinberg, and Professor Siegel will be a plus when it begins marketing.&lt;br /&gt;So the big question is which constituencies would want to get involved with enhancement indices. I have a fundamental belief (as a pseudo-efficient markets believer?) that mutual fund money will slowly begin turning to ETF’s, so I believe there will be money flow for good strategies in the coming few years. Beyond that, it’s probably helpful to consider money flow from the various market participants:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;I don’t see why anyone would short an enhanced index. If people did, I would start getting worried. 
So this eliminates all shorts (as a funny point of comparison, PowerShares touts that it can be sold short on a down tick—great…)&lt;/li&gt;&lt;br /&gt;&lt;li&gt;If costs are low, if the fund still retains its ability to track the S&amp;amp;P or any important index, and if liquidity is high, IXDP’s indices could get a lot of long money. If the above assumptions are true, then IXDP had better get portfolios out for all major indices!&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Steinhardt’s stated goal is 100 to 200bps over a reference index in the long run. This is too small for a long-short trade after interest costs, so don’t expect anyone to put that trade on.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;It might be difficult to convince retirement platforms to consider IXDP because of the uncertainty associated with any form of active management.&lt;/li&gt;&lt;/ul&gt;I see big upside in Professor Siegel’s and Steinhardt’s ability to convince people that IXDP’s portfolios will be able to outperform the market on a consistent basis. Then anyone who wants to go long “the market” should consider IXDP’s portfolios as an alternative to the more traditional counterparts (i.e. QQQQ, SPY).&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;Update&lt;/strong&gt;&lt;br /&gt;Since the time of writing the prior post, a few things have changed.&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;IXDP is changing its name to WisdomTree Investments, Inc. &lt;/li&gt;&lt;br /&gt;&lt;li&gt;The stock is now trading at $3.95, and has just completed another round of equity financing. It now has &lt;a href=&quot;http://www.marketwatch.com/news/yhoo/story.asp?source=blq/yhoo&amp;siteid=yhoo&amp;amp;dist=yhoo&amp;guid=%7b497D3CBC-27D5-4501-982A-308F0F189D9B%7d&quot;&gt;94M shares&lt;/a&gt;, implying a market cap of $371.3M. Siegel and Steinhardt were among the buyers. Steinhardt&#39;s cash infusions make me feel a little more comfortable that this thing won&#39;t go under. 
&lt;/li&gt;&lt;br /&gt;&lt;li&gt;They’ve brought on board a few more people—Ray DeAngelo as the director of ETF Distribution, Michael Jackson as the new Director of Fund Services, and Marc Ruskin as the new CFO. They seem to have some pretty &lt;a href=&quot;http://biz.yahoo.com/bw/050726/265563.html?.v=1&quot;&gt;solid credentials&lt;/a&gt;. Finally, for those who have been paying attention, Wharton grad Jeremy Schwartz appears to have gotten a promotion. He is now a senior analyst at the fund. &lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;Putting it all together, things are quite a bit different from the way they were.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Name Change:&lt;/em&gt;&lt;br /&gt;The fact that the company is changing its “strategic focus” from developing indices to being an asset manager interests and troubles me. Maybe it’s just me, but “asset manager” sounds quite… active. Perhaps more so than I would hope from a company whose prior investment thesis was built on the notion of creating a ‘small protected niche’ in the ETF space, creating and sponsoring innovative ETF’s. Does this imply that things are simply going so well on the index creation side that they are now focusing on higher goals without compromising the quality of their indices? From what I’ve seen and heard, this does NOT seem to be the case. But perhaps I’m reading too much into “asset manager”—perhaps they are just reinforcing the fact that ETF’s are a great asset management product for individual investors.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Market Cap:&lt;/em&gt;&lt;br /&gt;This thing is getting huge on no earnings. With the new full-time employees, my estimate of steady state expenses is probably on the low side. On the upside, it should be noted that in the latest equity issuance, Professor Jeremy Siegel and Michael Steinhardt were investors, although just how much wasn’t disclosed. For Steinhardt of course, this is peanuts. 
Even if he bought all of the 5.77M issued shares, that would amount to 10% of his existing stake. Steinhardt purchased his stake out of an equity issuance of 56.25M shares at $.16 and ended up with a 65.2% interest in the company (a 2370% return in 10 months... he hasn&#39;t lost his touch!).&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Re-Evaluating Costs: &lt;/em&gt;&lt;br /&gt;Costs prior to the discontinuation of their filings were on the order of $1.2M. I had allocated around $1.8M in possible future steady state annual expenses. With the addition of 4 executives and still nothing out yet, I could very well be undershooting it, because they haven’t had to build any infrastructure yet. While I have the utmost faith in Jeremy Siegel and Jeremy Schwartz, in all likelihood they’ll need to hire a few more research assistants. If they do get an ETF off the ground, something tells me they’ll also need some more operations people. I’m tempted to peg expenses at around $4M to $5M. This implies they’ll need anywhere from $1B to $1.25B to break even. If PowerShares is any indication, a successful ETF or two could put them at around $1B within the next couple of years. $2B would put them at $3M to $4M in pre-tax profits. With a market cap of $371M, I am not too pleased.&lt;br /&gt;&lt;br /&gt;&lt;em&gt;Open Variables&lt;/em&gt;&lt;br /&gt;&lt;strong&gt;Star Power&lt;/strong&gt;: One open variable in all this of course is the star power of the management team and the experience of new executives. PowerShares is a bunch of people cooped up in a room in Chicago with seemingly few connections. WisdomTree will have far fewer frictions provided they release a solid product.&lt;br /&gt;&lt;strong&gt;Registration Frictions&lt;/strong&gt;: From what I&#39;ve heard from competitors, it is no easy process to obtain sponsorship of an index, taking upwards of 2 years.  There are only around 7 companies with proper registration.  
Even assuming that IXDP has an index ready right now and has already filed, that lead time would be somewhat damning.  Perhaps IXDP has a work-around, but for now this should be a further point of caution.</content><link rel='replies' type='application/atom+xml' href='http://thelearningblog123.blogspot.com/feeds/112495631174976714/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/14484258/112495631174976714' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112495631174976714'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/14484258/posts/default/112495631174976714'/><link rel='alternate' type='text/html' href='http://thelearningblog123.blogspot.com/2005/08/taking-look-at-index-development.html' title='Taking a Look at Index Development Partners'/><author><name>Unknown</name><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry></feed>