
This is a follow-up to this previous post that dealt with why you need to be very careful when comparing manufacturer shipments to retail sales (also called consumption or takeaway or POS data). Once you are sure that 1. periods are aligned properly, 2. geography and product definitions are as close as possible and 3. unit of measure is the same between the shipments and retail sales, the last thing you need to do is calculate the coverage factor. This is a number that is used to adjust retail sales (usually) upward to account for any stores/channels that the manufacturer sells to but are not covered by the syndicated data supplier. Then you can look at shipments vs. retail sales and more clearly understand the inventory position of retailers, which can help improve a manufacturer’s forecasting accuracy. (Note that if you are looking at retail sales data coming directly from the retailer itself, then a coverage factor is not necessary.)

The charts above illustrate a pattern where a coverage factor is necessary and appropriate. On the left, you see actual shipments and retail sales – note that the shipments (in red) are consistently higher than retail sales (in blue) so we know there must be shipment volume that is not accounted for in the outlets covered by the retail sales data. Although retailers often accumulate inventory over short periods (that’s another reason shipments can outpace retail sales) that excess inventory would eventually be sold through and the pattern wouldn’t persist for such a long time. On the right, you see shipments and adjusted retail sales, which is more likely to be the situation in real life.

Here’s a summary of the steps to calculate a coverage factor and create adjusted retail sales:

- Get correct data for shipments and retail sales
- Arrange the data in Excel to make the calculations easy
- Calculate rolling annual totals
- Calculate coverage factor
- Calculate adjusted retail sales

If you haven’t already, read this previous post for more detail on making sure the shipments and retail sales data are as “clean” as possible. You need AT LEAST one full year of monthly data, and more data is better. Note: For a new product, you should not calculate a coverage factor until it’s been selling at retail for *at least* a year.

Arrange the original data so the period names are in column A, retail sales are in column B and shipments are column C, like this:

In columns D and E, calculate rolling 12-month retail sales and shipments. You would expect retail sales and shipments to be pretty close to equal over a long period of time but not on a monthly basis. Rolling means adding the 12 months of data, ending with the row you are in. So…in row 14 for December, 2015, the rolling 12-month aggregate is the sum of January, 2015 through December, 2015. In row 15 for January, 2016, the rolling 12-month aggregate is the sum of February, 2015 through January, 2016. The first rolling 12-month total in this example has to be in December, 2015, since that’s the first period that has 12 months of data to roll up. Copy the rolling 12-month formulas down for all the months that you have. (Note that if you have a database with 13 4-week periods in a year then you should use rolling 13-period aggregates. The point is to get to annual numbers.)
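If you want to sanity-check the rolling logic outside of Excel, it can be sketched in a few lines of Python. The monthly figures below are invented for illustration, not the article’s actual data:

```python
# Rolling 12-month totals: each value sums the 12 months ending at that month,
# so the first total lands at month 12 (December of the first year).
monthly_retail = [100, 95, 110, 105, 98, 102, 107, 99, 103, 108, 112, 120,
                  104, 101]  # illustrative: Jan 2015 .. Feb 2016

def rolling_totals(values, window=12):
    """Return rolling sums; the first total appears once `window` months exist."""
    return [sum(values[i - window + 1 : i + 1])
            for i in range(window - 1, len(values))]

rolling_retail = rolling_totals(monthly_retail)
# rolling_retail[0] covers Jan-Dec 2015; rolling_retail[1] covers Feb 2015-Jan 2016
```

For a database with 13 4-week periods per year, the same function with `window=13` gives the rolling annual aggregates.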

The chart below shows what the data looks like month-to-month. You can see that the shipments spike before the retail sales do. But you can also see that shipments are almost always higher than retail sales every single month, which would result in retailers building up lots of inventory over time.

And when you look at the rolling 12-month data, you can see that shipments are consistently higher than retail sales:

In column F, calculate the coverage factor for each month.

**coverage factor = rolling retail sales / rolling shipments**

(= col D / col E in the example)

The coverage factor for this whole dataset that contains 32 periods (January, 2015 to August, 2017) is 0.95, which is the average of all the coverage factors that we have. Even though there is a slightly different coverage factor each month, you want to use a consistent one for all periods in a given analysis. (I have seen too many cases where people adjust the coverage factor month to month in order to make the adjusted retail sales look like they want it to – please don’t do that!)

Here’s what the monthly coverage factors are for the example. This is very typical – they are in the mid-high 90s. Because they bounce around a little month-to-month we take the average to use to adjust the retail sales.

Now that we have the coverage factor, we can calculate the adjusted retail sales that are directly comparable to shipments.

**adjusted retail sales = retail sales / coverage factor for the whole dataset**

(= col B / 0.95, in this example)
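Putting the two formulas together, here is a minimal Python sketch of the whole calculation. The rolling totals and monthly sales are made-up values for illustration (the article’s own dataset averaged out to 0.95):

```python
# Coverage factor = rolling retail sales / rolling shipments, computed per
# month and then averaged into one factor for the whole analysis.
rolling_retail    = [950.0, 962.0, 941.0]    # col D: one value per month
rolling_shipments = [1000.0, 1010.0, 995.0]  # col E

monthly_factors = [r / s for r, s in zip(rolling_retail, rolling_shipments)]
coverage_factor = sum(monthly_factors) / len(monthly_factors)  # one consistent number

# Adjusted retail sales = monthly retail sales / the single coverage factor.
monthly_retail  = [80.0, 85.0, 78.0]  # col B
adjusted_retail = [r / coverage_factor for r in monthly_retail]
```

Because the factor is below 1, every adjusted value is higher than the raw retail sales, which is the usual direction of the adjustment.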

Now when you compare shipments to the adjusted retail sales, the long-term totals are much more similar, and you can see the retail trade building up inventory in some months and drawing it down in others.

If you have other specific questions, please ask in the comments to this post and I’ll respond here so other readers can also see the answers.

Did you find this article useful? Subscribe to CPG Data Tip Sheet to get future posts delivered to your email in-box. We publish articles about once a month. We will not share your email address with anyone.

The post How to Calculate A Coverage Factor appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.


We’ve had several questions over the last few months regarding some facts that people are starting to notice are available in their Nielsen, IRI and/or SPINS databases. The names are very similar, but it’s not obvious how the measures are different or when you’d want to use one or the other. (With apologies to Homer Simpson, who probably never had to use a Nielsen or IRI database!)

Since these potentially confusing facts are all related to or derived from ACV, you may want to read these previous posts that talk about ACV and other Distribution measures:

- All About ACV
- The 2 Most Important Measures: %ACV Distribution, Part 1 and Part 2
- Total Distribution Points: Master of All Distribution Measures

Now that you’re an expert on ACV, Average Items and Total Distribution Points (TDP), I’ll get to the point of this post!

First of all, as with lots of terminology in the world of CPG data, there are differences between the different data suppliers. If you have access to Nielsen data, you may have seen many facts with the word “Reach” in the name. If, however, you are an IRI client, then the equivalent term in fact names is “Max.” Note that if you are a SPINS client and get any data for the Conventional channel then you will also see “Max” in some fact names (because IRI actually supplies the Conventional data for SPINS). You may see the word “Avg Weekly” in front of distribution measures from all 3 suppliers. **Warning:** The terminology in this post may not be exactly correct here for all suppliers, so check your database for what’s there and ask your client service rep if you have specific questions about your database.

__2 Versions of Each Measure__

Let’s look at some real data for the measure “%ACV” for one brand in one retailer. Here’s data for 4 different periods – 1-week, 4-week, 12-week and 52-week:

Brand A in Simon’s Supermarkets

**Comparison of %ACV Measures**

Periods ending 10/28/17

Things to notice about the numbers:

- For a single week, the 2 measures are always the same – in this case 78.
- For all periods longer than 1 week, the measure on the right is larger than the one on the left – for 4-weeks 82 vs. 76, for 12-weeks it’s 89 vs. 81 and for 52-weeks it’s 93 vs. 83.
- The values of the measure on the left are all pretty similar – ranging between 78 and 83.
- The measure on the right gets bigger the longer the period – 78 for 1-week up to 93 for 52-weeks.

Using the 52-week period as an example, the key concept here is:

So over the course of the whole year, Brand A sold, on average, in 83% of Simon’s Supermarkets in a week but sold in 93% of Simon’s Supermarkets *at some point during the year.*

Here’s a very extreme example: say you’re looking at a 52-week period. If the product sold in 20% of the ACV during the first week of the period, in 10% the second week, in 5% the third week and then in 0% the other 49 weeks, then Max ACV for the product would be 20 but average weekly ACV would only be 0.7 (= (20+10+5)/52 wks).
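That arithmetic is easy to verify. Here’s a quick sketch that treats Max %ACV as simply the highest weekly value — a simplification that holds in this single-item example:

```python
# Extreme example: sold in 20% ACV in week 1, 10% in week 2, 5% in week 3,
# then 0% for the remaining 49 weeks of a 52-week period.
weekly_acv = [20, 10, 5] + [0] * 49

max_acv = max(weekly_acv)                           # highest single-week value
avg_weekly_acv = sum(weekly_acv) / len(weekly_acv)  # simple mean over all 52 weeks

print(max_acv)                   # 20
print(round(avg_weekly_acv, 1))  # 0.7
```

The gap between the two numbers grows with the length of the period, which is exactly the pattern in the tables above.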

__Which to Use When?__

This basically depends on a combination of:

- The purchase cycle for the product
- Where the product is in its life cycle – new, stable, declining

As we saw in the graph above, the longer the period gets, the more different the 2 measures will be. That means if a product has a long purchase cycle then the avg weekly and Max ACV will be more different – it’s possible for an item to sit on the shelf and not sell in every single week if people don’t buy the category that often. This is less of an issue for refrigerated products with a shorter shelf life and more of an issue for things like cleaning products that people may only buy a few times a year.

If a product is steadily gaining or losing distribution over a period, then be careful about which measure you use! Using Max would be better for a new product that had much lower distribution at the beginning than at the end of a period. But looking at Max for a product that has been discontinued or delisted would be artificially high since distribution would be much lower at the end than at the start of the period.

__Same Concept for Other Distribution Measures__

Since many other measures are derived from ACV, they also have 2 different versions. Here are examples for Avg # Items and TDPs:

**Brand A in Simon’s Supermarkets**

These same principles hold for Avg # Items and TDPs:

- For a single week, the 2 types of measures are always the same
- For all periods longer than 1 week, the measure on the right is larger than the one on the left
- The values of the measure on the left are much closer together than those on the right
- The measure on the right clearly increases the longer the period

Here’s how to describe what’s happening with Avg # Items over the last year using the data above: On average over the last year, Simon’s Supermarkets sold between 15 and 16 items every week. Between 19 and 20 items sold at Simon’s Supermarkets *at some point* during the last year.

I hope this helps to clear up the differences between the versions of the distribution measures.

Did you find this article useful? Subscribe to CPG Data Tip Sheet to get future posts delivered to your email in-box. We publish articles about once a month. We will not share your email address with anyone.

The post What’s With All These Similar Distribution Measures?? (Max, Reach, Average Weekly, etc.) appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.


*We’re delighted to once again have Mark Laceky, President of mLogic Consulting as guest blogger. mLogic’s previous post explained one of their favorite measures, Base Weighted Weeks (BWW). This follow-up post gives a concrete example of how to use BWW and includes an Excel-based tool that you can download and use yourself.*

*Keep in mind that this post falls into the “Advanced” category! If you are fairly new to the CPG industry and/or to syndicated POS data, then you may want to check out some of our other posts. If you have any questions on this specific post or the Excel file, PLEASE contact mLogic directly via email at info@mlogicconsulting.com. And now, on with the show!*

**The Only Constant Is Change**

As any Marketer and Analyst can tell you, plans change. They change all the time. And the people who change them want to know (yesterday) what the sales impacts are likely to be based on those changes. Or worse – they have 18 ideas about changes they *might* want to make and they want someone (you) to estimate the impacts of each. What if we raise price and spend back? What if our trade budgets get cut 20%? What if we add a feature week? What if we promote with deeper discounts? How much would it drive sales if we got more displays?

If you’re the person responsible for providing answers/estimates on the fly you’ve probably devised some sort of spreadsheet to help you do the calculations, right? It has some syndicated POS data in it (baselines and base prices? incremental volumes and promoted prices? current distribution? category trend?). And then there are all those other business drivers that you *can’t* get from your syndicated POS Nielsen or IRI database – consumer promotion, advertising, package changes, etc. Of course, there are also lots of assumptions that make the whole thing work, right?

We can’t help you estimate the impact of changes to *all* of the drivers, but we *can* help you with the price and promotion end of things – using data that is available in your Nielsen or IRI database. (Note: If the mention of business drivers sounds familiar, you may have read about them in the series of posts on Volume Decomposition in the CPG Data Tip Sheet. This first of the 6 posts gives an overview of how to attribute past sales to the various business drivers and Part 6 talks about the “other drivers.” While a volume decomp analysis looks *backwards* to explain what happened, the methodology and tool in this post looks into the *future* to estimate what is likely to happen using similar concepts.)

**Bringing It All Together – A Unified Approach To Estimating Sales Impacts**

Our previous article reviewed Base Weighted Weeks (BWW) as a better measure of promotion frequency with the added benefit that all the math works out to the last decimal point. This article (and the attached “prize inside”) will show you how to use BWW and some other concepts together to create a simple but powerful sales estimation tool to make your life easier (part of it anyway). *CAVEAT: this approach is recommended only when you don’t have more powerful tools on hand (such as price/promotion models, marketing mix models, etc.).*

**CLICK HERE to download the pre-built Excel tool that we call Impact Estimator. **It gets you from data pull to simulator and does all the math for you. Of course, if you’re curious about the formulas you can explore the tool. But for lighter reading, here are the key concepts:

- There is a *standard POS data pull template* you should start using (1. Data Pull tab)
- It has everything in it so you don’t have to keep going back to the well – pull data just once!
- Link all calculations in the other 2 tabs to this raw data pull to build your Impact Estimator

- Break all business driver inputs into individual pieces that are *multiplicative*! That is, turn each individual component into an impact *index*, then multiply all the indices together to get the net impact.
- Start with *baseline volume drivers*, then move to *incremental volume drivers*.
- You’re on your own estimating some of these impacts – the secret is to estimate them one by one. You can go back later and change your assumptions if you like.
- Example: category is growing at 4% so our brand will likely grow at 4% or an index of 1.04.

- There are three incremental volume drivers – *promotion frequencies, mix by promotion type and discount levels*. The tool automatically fills in the current BWWs and discounts for each of the mutually exclusive and additive promotion types and the user then inputs changes:
- TPR (a temporary price reduction with no ad and no display present)
- Feature Ad w/o Display (a feature ad with no display present)
- Display w/o Feature Ad (a display with no feature ad present)
- Feature & Display (a feature ad with a display present)

- The Impact Estimator tool automatically calculates a Promoted Price Elasticity (PPE) based on the % lift and % discount on TPRs and then calculates multipliers for ads and displays. The user can then input new promoted discounts and watch volumes change accordingly.

**Example: Geography A, Brand X, Category Z, 1-Year**

Note that the tables shown below are all in the Simulator tab in the Impact Estimator Excel tool. Starting with Baseline Volume Drivers, the user enters inputs into yellow-shaded cells (rows 9-14):

In this example, we are assuming the category volume trend will be -1.3%, Brand X base price/EQ will go up +5.0%, % ACV Distribution will drop 5% (*not* points), average items carried will be up +10% and competitors will be doing something (trade promotion? distribution gain? advertising?) that will cause Brand X to lose 1% of its volume.

As noted in the tool, there are some assumptions being made:

- Base Price Elasticity (BPE) is a simple estimate of ¾ of the Promoted Price Elasticity (PPE) – this is a general rule of thumb. If you have a BPE from previous research use that instead.
- For % ACV Distribution we are assuming an 85% incrementality rule – that is, if we get 5% less distribution we’ll only see a 4.25% loss in volume (due to reduced cross-store cannibalization and assuming that stores that drop distribution don’t sell as fast anyway).
- For average items carried we’re assuming a 20% incrementality rule. That is, getting 10% more items on shelf (e.g. going from 10 to 11 where carried) will only increase volume by 2% due to the new item being a slower mover and cannibalizing from existing items on shelf.

The net result is a Baseline Volume % Change estimate of **-9.8%** (in cell I15).
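For readers who want to see the multiplicative-index logic spelled out, here is a hedged Python sketch of the baseline calculation. The driver inputs match the example above, but the PPE value is an assumption of mine (the tool derives it from your data), so the result here lands in the same ballpark as, not identical to, the tool’s -9.8%:

```python
# Multiplicative baseline-driver indices: turn each driver into an index,
# then multiply them all together to get the net baseline impact.
ppe = -1.6        # ASSUMED Promoted Price Elasticity, for illustration only
bpe = 0.75 * ppe  # rule-of-thumb Base Price Elasticity = 3/4 of PPE

indices = {
    "category trend": 1 - 0.013,        # category volume down 1.3%
    "base price":     1 + bpe * 0.05,   # base price up 5.0% at elasticity bpe
    "distribution":   1 - 0.05 * 0.85,  # %ACV down 5% at 85% incrementality
    "items carried":  1 + 0.10 * 0.20,  # items up 10% at 20% incrementality
    "competition":    1 - 0.01,         # competitive activity costs 1% of volume
}

net_index = 1.0
for idx in indices.values():
    net_index *= idx

baseline_pct_change = (net_index - 1) * 100  # roughly -10% with the assumed PPE
```

Swapping in your own elasticity and incrementality assumptions is just a matter of editing the individual indices — the one-by-one structure is the point.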

Then we move on to the Incremental Volume (Rows 23-26). The starting rule is that (with no changes to the trade promotion plan) incremental volume would be down the same % as base volume (in this case -9.8%). Then we move to changing the trade promotion plan:

In this example, we’ve increased total promotion frequency from 17.0 BWW to 19.0 BWW by increasing the frequency of each of the four promotion types. We also increased the discount levels slightly for each promotion type. The result is that our incremental volume should go up by +14.1% (cell I30) instead of being down -9.8% due to the baseline dropping.

The net result (rows 32-39) of all the changes is that Total EQ Volume should drop **-5.6%**. And since the Average Price/EQ will only be up 2.2%, we would expect our dollar sales to be down **-3.5%**.

**The Upshot?**

This article and tool are meant to be useful by highlighting some key concepts and then bringing them all to bear inside a unified approach. Key ideas are:

- The right data pull means you only have to do it once
- Always handle base drivers separately from incremental drivers
- Break all of your assumptions into manageable, multiplicative pieces
- Promotion changes are very manageable using BWW, PPE and multipliers – which in turn are easy to glean from the data

We hope some or all of these concepts make it into your own sales estimation tool!

**Please direct comments and questions about this article directly to Mark.** His contact details are below.

*Mark Laceky is President and founder of mLogic Consulting, Inc. which specializes in consumer packaged goods (CPG) pricing and trade promotion strategy. He previously managed the Nielsen North America Price & Promotion analytics practice and spent years in analytical leadership roles at Kraft Foods and The Quaker Oats Company. He may be reached via LinkedIn or by email at info@mlogicconsulting.com.*

The post Base Weighted Weeks (Part 2 of 2): What Will Happen to Sales If… appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.


Tip Sheet readers frequently write us and ask for suggestions on how to find training classes, what reading we recommend, and what other resources are out there to help them on their path to CPG Data Guru status. Here are our recommendations. We think our list is a lot more practical than Oprah’s and our favorite things won’t clog up your basement or storage shed!

TABS Analytics has a treasure trove of resources. Founder and CEO Kurt Jetta’s content is innovative and always focused on what’s most actionable. Check out the Resources link (guides and webinars) and the Blog link (some overlap with the guides/webinars). In particular, look for anything tagged with business metrics or trade promotions. You are guaranteed to learn something – we always do. Here are links to two recent metrics-focused webinars: Fact Based Selling and Awful Analytics (the latter focuses on metrics Kurt believes are hurting the industry). There are also great industry insights to be had here. Pretty much every part of the store is covered but TABS offers more detailed category insights in certain areas (e.g. Beauty, Vitamin, Personal Care).

Category Management Learning Forum primarily offers paid training but also has a great (free) blog and occasional free webinars. Sue Nicholls really knows her stuff on the ins and outs of syndicated data so look for those posts. As an example, here’s one on how to set appropriate thresholds for metrics.

IRI and Nielsen both publicly share insights using their data. Their focus isn’t on telling you how to use their data but you’ll see lots of examples of their data in action which you can then apply to your work. For example, see the 2016 New Product Pacesetters Report from IRI and an interesting publication from Nielsen called the Journal of Measurement.

Check with your source for syndicated data first. They may have free or low cost training options you don’t know about.

Beyond that, we recommend the syndicated data classes from two category management training companies. We’ve reviewed a number of classes from both and give both companies a thumbs up! Online classes start at about $100.

**Other Formal Training**

In addition to classes in syndicated data, Category Management Knowledge Group and Learning Evolution provide many additional training classes, on topics that will help you bring more insight to your data analysis. For example: retailer POS data, retailer loyalty card data, planogram data, understanding shopping behavior, and general approaches to root cause analytics.

Forecasting: For those of you on the demand planning end of the CPG data spectrum, The Institute of Business Forecasting & Planning (IBF) runs sessions in a variety of locations (some of them taught by our very own Robin Simon). Online education is also available from IBF.

The American Marketing Association (AMA) sometimes offers relevant training (both the national organization and also local chapters). Data visualization, storytelling, writing and presentation skills are all education areas the AMA covers. Most of their training and webinars won’t be relevant but there’s some gold in there and sometimes it’s even free. Check out their Events & Training page and also subscribe to email updates from your local chapter.

Channel specific trade associations and publications are usually great resources both for data and for general industry knowledge. For example, for the food business, Grocery Manufacturers Association (GMA) has some GREAT white papers and Progressive Grocer is a go to source for news and data. There’s something for every channel and even many departments – find yours.

Fellow CPG data consultant Scott Sanders’ blog Shelf Talk covers a variety of topics related to the industry and to analysis.

Kevin Gray’s focus is more towards data science and statistics, which is a bit out of my wheelhouse, but maybe not yours. Plus his perspectives on the role of human judgement and thinking when working with stats/software/technology match mine 100%. I’m sure Kevin would agree with me that corporate CPG can never software/dashboard its way out of the need for investing in training people to think (no matter how much they wish they could and how much money they throw into software).

Profitero & Clavis both provide excellent content and data on e-commerce. Both these companies specialize in what I would call “e-commerce causal” providing information on rankings, pricing, and the competitive landscape online. They also have various tools and partnerships for sales tracking. In addition, both of them run webinars and offer free reports to help you get up to date or up to speed. A good place to start? Free Amazon Fast Mover reports from Profitero for your category.

Engage Consultants offers a great blog with high quality content focused on Shopper Marketing.

Morning News Beat provides a lively and readable daily email with curated CPG company headlines.

Retail Wire, Advertising Age, and Consumer Goods Technology are three sources I check regularly for industry news. I especially like Consumer Goods Technology for its data oriented news and articles.

The classic book for marketing analysts and still a go-to: *Marketing Metrics: The Manager’s Guide to Measuring Marketing Performance (3rd Edition)* by Farris, Bendle, Pfeifer, and Reibstein. There are earlier editions of this with slightly different titles.

I love Cal Newport’s perspective on how knowledge workers can become more effective at their work. For a taste, take a look at this series titled Craftsman in a Cubicle. He’s also written the books *So Good They Can’t Ignore You* and *Deep Work*.

Avinash Kaushik’s blog Occam’s Razor is wonderful. His business is web analytics but everything he says about what a good analyst does (at a higher level, above the specifics of the metrics) is spot on.

Stephen Few is a pioneer in clear, simple communication of data. He is the ultimate chart cleaner-upper. See some of his work at his Perceptual Edge site. His books are great.

I also found the book *Excel Dashboards and Reports for Dummies* by Michael Alexander to be very helpful to my thinking on how to organize and communicate data in an online format.

For Excel dilemmas, I recommend MrExcel and chandoo.org. Both have great free resources (as well as paid options). If you can’t find it there, just start Googling – the answer is probably out there in some forum. For example, I know literally nothing about Excel macros (other than how to open one, save it, and run it later) but found and copied the code for a macro that turns every formula in a spreadsheet into values with one quick zap. Invaluable!

Please share your favorite resources in the comments below!

The post CPG Data Tip Sheet’s Favorite Things appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.


*We’re delighted to have guest contributors Mark Laceky (President) and Mike Fridholm (SVP, Client Service) from mLogic Consulting introducing one of their favorite measures. Keep in mind that this article is most relevant for use in more strategic planning and analysis, not necessarily for new users or with retailers. Their contact details can be found at the end of this article.*

In the beginning of the scanning data age, distribution and promotion support were calculated very simply. The number of stores carrying your product divided by the total number of stores in the marketplace gave you Percent of Stores Selling (PSS) as the key distribution measure. Similarly, promotional support levels were calculated simply as the number of stores promoting divided by the number of stores in the marketplace, giving you Percent of Stores Promoting (PSP).

This was good enough for a while but it had a major drawback – it assumed all stores were of equal importance. Put another way, it assumed that being carried or promoted in a small store was just as valuable as being carried or promoted in a bigger store. As readers of the CPG Data Tip Sheet know, %ACV is a better measure than PSS because you get more credit for being in bigger stores that have more traffic.

The most commonly used measure of the amount of merchandising support is *Cume Weighted Weeks (CWW)*, which weights each store’s promoted weeks by its share of ACV.

**There are limitations when using CWW!** The chart below summarizes what these issues are:

**A Better Way – Base Weighted Weeks (BWW)**

We’d like to familiarize you with a calculated database fact known as *Base Weighted Weeks (BWW)*. Check with your data supplier to see if you have access to BWW on your database and, if so, where to find it. Similar to CWW, BWW weights the importance of each store promoting, but in a better way – BWW uses each promoted UPC’s* own base volume* in each store to weight each store’s importance to your promotion analysis.

Let’s take a look at a *very* simple example of one SKU (“Item X”) in a market that has only 2 stores over a 4-week period. (Note that this applies to all different product and geography levels that you might be looking at, from a SKU to a category and from an individual retailer within a market all the way up to a national channel.) You can see that while both stores have the same amount of Total ACV $ every week, Store #2 is much more important to Item X than Store #1 given its higher weekly base volume of 30 EQ (vs. 10 EQ):

Now let’s say that over a certain 4-week period Store 1 promoted during week 2 and Store 2 promoted during weeks 2, 3 and 4 and we want to know *how much support did Item X get in the Total Market for the 4-week period*:

The table below contains the data needed to calculate BWW. To cut to the chase…BWW is 2.5 while CWW is only 2.0 over the entire 4-week period we are looking at. BWW is higher because the store that accounts for more of Item X’s base volume is the one that promoted for 3 weeks instead of only one.

And here are the calculations:

Thus, the Base Weighted Weeks tell us that Item X received promotional support in stores representing 63% of its base volume versus only 50% of the total ACV, and 2.5 Base Weighted Weeks instead of the 2.0 ACV-based Cume Weighted Weeks out of 4 total weeks. BWW shows us that we received more real support than CWW suggests. (Note that both BWW and CWW are usually shown with one decimal place.)
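To make the mechanics concrete, here is a minimal sketch of the CWW and BWW calculations for this two-store example (Python; the inputs mirror the example above – equal total ACV per store, weekly base volumes of 10 and 30 EQ, Store 1 promoting in week 2 and Store 2 in weeks 2–4):

```python
# Two-store example: each week's "weighted weeks" credit is the share of
# weight (ACV or base volume) sitting in stores that promoted that week.
base_volume = {"Store 1": 10, "Store 2": 30}   # weekly base EQ volume
acv = {"Store 1": 50, "Store 2": 50}           # equal total ACV -> 50% each
promo_weeks = {"Store 1": {2}, "Store 2": {2, 3, 4}}
weeks = [1, 2, 3, 4]

def weighted_weeks(weights):
    total = sum(weights.values())
    return sum(
        sum(weights[s] for s in weights if w in promo_weeks[s]) / total
        for w in weeks
    )

cww = weighted_weeks(acv)          # ACV-weighted (Cume Weighted Weeks)
bww = weighted_weeks(base_volume)  # base-volume-weighted (BWW)
print(cww, bww)  # 2.0 2.5
```

Week 2 earns full credit under both measures (both stores promote); weeks 3 and 4 earn 0.5 under CWW but 0.75 under BWW, because the promoting store carries 75% of Item X’s base volume.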

**So What?**

OK – so it’s a little more accurate and you can use it with subtotals – is that all? Why would I want to go to the trouble of getting to BWW? **The ultimate benefit of BWW is that it can be used as part of a predictive system.** BWW can be used as a component in a simple Excel-based simulation tool because:

*Please direct comments and questions about this article to Mike. His contact details are below.*

*Mark Laceky is President and founder of mLogic Consulting, Inc. which specializes in consumer packaged goods (CPG) pricing and trade promotion strategy. He previously managed the Nielsen North America Price & Promotion analytics practice and spent years in analytical leadership roles at Kraft Foods and The Quaker Oats Company. He may be reached via LinkedIn.*

*Mike Fridholm is SVP, Client Service at mLogic Consulting, Inc. His prior experience includes leadership of the Kantar TNS North America Consumer Goods vertical as well as positions managing a Nielsen office, large IRI client relationships, and category management and brand management roles at The Quaker Oats Company. He can be reached at *mfridholm@mlogicconsulting.com* or on LinkedIn.*

The post Base Weighted Weeks (Part 1 of 2): How Much Trade Promotion Support Are You Really Getting? appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.

Hello, fellow analysts! Ready to add another tool to your kit? This is the third and final part of my series on how to use TDP to enhance your work.

If you are new here, and not yet familiar with TDP, start with this basic introduction or my first two examples on TDP and velocity and TDP and sales trends.

If you’re arguing for more space for your category or segment, showing how share of sales compare to share of shelf can be very powerful. When you prove your products generate more sales than expected, given your shelf presence, it quantifies the value of your products and the opportunity for the retailer.

If you can quantify share of shelf to linear footage from a plan-o-gram (POG) or audit, that’s the gold standard. But we don’t always have access to that type of information. Fortunately, TDP from syndicated data can be a good proxy for shelf space. The proxy works best if:

- All items in the category have roughly the same number of facings
- The product moves fast enough that scanned distribution is a good proxy for shelf presence.

I’m going to demonstrate this analysis of product segments by using a fictitious Women’s Hair Removal category. I’m going to apply the analysis at the sub-category (aka segment) level. You could also take the same approach to departments in a store, categories in a department, package types within a segment or any other product group where retailers are making decisions, implicit or explicit, about how to allocate space.

To start our analysis, we first structure our data so we have dollars, units, and TDP for each segment (or brand or size or package type or whatever grouping you want to focus on):

Next, calculate the segment’s share of the category for each of the three measures:

After creating these share measures, calculate a Productivity Index by comparing % of sales to % of distribution. In other words, how productive is each segment at generating sales relative to its distribution?

So, for the Epilator segment:

**Dollar Productivity Index = % Dollars / % TDP = (41 / 26) * 100 = 158**
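As a quick sketch, the index is a one-liner (Python; the 41/26 inputs are the Epilator shares from the example above):

```python
def productivity_index(share_of_sales_pct, share_of_tdp_pct):
    """Sales share vs. distribution share, indexed so 100 = fair share."""
    return round(share_of_sales_pct / share_of_tdp_pct * 100)

# Epilator segment: 41% of dollars on 26% of TDPs
print(productivity_index(41, 26))  # 158
```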

I first saw the term Productivity Index used to describe this measure in a presentation by Kurt Jetta, founder of TABS Analytics, so props to him! After you finish this article, check out this great report from TABS with another Productivity Index example plus more good stuff on other topics. TABS Analytics content is always jam packed with innovative approaches to syndicated data – highly recommended!

This might remind you of a Fair Share Index, but here I’m reversing the numerator and denominator (for more on Fair Share, see this post from Robin Simon on calculating your fair share of distribution).

You also might be thinking: “But isn’t this ‘productivity index’ kind of like velocity? After all, I’m dividing sales by distribution, so that’s a sales rate.” Yes! The Productivity Index really just tells you how a segment’s velocity compares to the category average. But this approach is easy to calculate and convey visually. That’s the beauty of it.

Back to our data, we calculate Dollar Productivity ($ Prod Index) and Unit Productivity (U Prod Index) for each segment:

Epilator is the biggest segment (in my fictional example, not in the real world!) and the most productive. Unit productivity is not as strong but still solid. This is typical of a higher priced product with strong demand—and an excellent result.

A mirror image of the Epilator segment is the Value Disposable Razors segment. This segment does a poor job of driving dollars with an index of 50. But it’s “wowza!” on units with an index of 250. These high-volume segments are also valuable because unit sales often drive traffic.

The remaining three segments (i.e. premium disposable razors, reusable razors and tweezers) are all kind of “meh” in terms of performance, with Tweezers being the least impressive. However, it’s also a small segment and perhaps not worth spending a lot of time exploring.

Finding ways to boost performance of Premium Disposable Razors might be the highest priority of these three segments, since it’s the second biggest segment and is barely at average on both dimensions.

**Combining Dollar and Unit Indices into One Productivity Measure**

TABS Analytics has a neat way of pulling dollar and unit indices into one measure of productivity: simply multiply them together and call the resulting measure the “Power Index.” This essentially gives equal weight to dollars and units. And it can magnify differences between product groups when both indices are either below or above average. You’ll see the Power Index used in the TABS report I recommended up above – here’s that link again.

When I add in the Power Index metric here it doesn’t change my conclusions but it reinforces them within a single column (always handy for focused communication!):
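Here’s a minimal sketch of that Power Index calculation (Python). TABS simply multiplies the two indices together; I divide by 100 here so a segment that is exactly average on both dimensions still reads as 100 – a presentation choice of mine, not necessarily their exact convention:

```python
def power_index(dollar_index, unit_index):
    # Multiply the dollar and unit productivity indices; rescale by 100
    # so that 100-on-both-dimensions still reads as 100 (my own scaling).
    return round(dollar_index * unit_index / 100)

print(power_index(100, 100))  # 100  (average on both dimensions)
print(power_index(50, 250))   # 125  (Value Disposable Razors example)
```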

A few more analysis notes:

**Why did I use 12 weeks?** I find that 12 or 13 weeks is a good representation of what’s currently in distribution. It’s not too long or short. However, if you have a lot of new products (or many products recently discontinued) and want an early read, four weeks can be a good length of time too. I wouldn’t go longer than 13 weeks, however. Too many changes can occur over longer periods of time.

**Why did I keep the category line at the bottom, when the value is 100% or Index is 100 in every case?** Having this category line helps people understand/remember that all the segment values are shares or averages *relative to the category*.

Happy indexing!

Did you find this article useful? Subscribe to CPG Data Tip Sheet to get future posts delivered to your email in-box. We publish articles about once a month. We will not share your email address with anyone.

The post How To Get More From TDP, Part 3: Sales Productivity appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.

In a recent post, I illustrated the most common usage of Total Points of Distribution (TDP) – as a substitute for % ACV in velocity calculations.

In this post, I’ll share another one of my favorite ways to leverage the TDP data metric, which is explaining sales trends. If you aren’t familiar with TDP, read this intro post before continuing with this example.

“Dollars,” “dollar share,” and “dollar trends” are probably the three most common measures examined with syndicated data. This isn’t surprising as they all measure one piece of the bottom line for retailers.

But when we look at these measures, it doesn’t take long for the “why’s” to start flying. Sometimes you won’t find the answers in syndicated data. But sometimes you will. So, where do you look first?

When I want to get a handle on what’s driving sales, I always start by looking at trends in the basic components of dollar sales: units, price, and distribution. This step never hurts and usually helps. And since this type of big-picture analysis is usually done at the brand, product group, segment or category level (where lots of different items are involved), I generally use TDP for the “distribution” measure.

Why do I use TDP instead of % ACV for this? % ACV is usually high and stable for large groups of products or widely available brands. It simply won’t tell us much. This is because only *one* product in a group needs to scan *once* during the period to get credit for % ACV distribution. That’s a low bar at the category level. Therefore, % ACV can mask substantive and dramatic changes in the mix of products distributed within the group.
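A tiny illustration of this masking effect (Python; the item-level % ACV values are made up, and treating group % ACV as the max across items is a simplification that assumes the smaller items sit in subsets of the biggest item’s stores):

```python
# Hypothetical two-period comparison: group % ACV vs. group TDP.
# Group % ACV = distribution of at least one item (approximated as the max);
# TDP = sum of item-level % ACV points.
items_last_year = {"UPC 1": 95, "UPC 2": 80, "UPC 3": 60}  # % ACV per item
items_this_year = {"UPC 1": 95, "UPC 2": 40, "UPC 3": 10}

for label, items in [("last year", items_last_year), ("this year", items_this_year)]:
    group_acv = max(items.values())  # simplified subset assumption
    tdp = sum(items.values())
    print(label, group_acv, tdp)
# Group % ACV holds at 95 in both periods, while TDP falls from 235 to 145.
```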

For example, here’s how this simple analysis might look for segments in the (fictional) flavored milk category:

We can see that Strawberry Milk has grown in both dollars and units. TDPs are up significantly and price is down modestly. So both factors are moving in a direction that should boost sales (i.e. price down, distribution up). However, distribution is likely the stronger influence given the magnitude of the change. Therefore, I would focus my analysis on distribution first and then take a closer look at price.

Now let’s add in more segments:

How do we interpret these numbers? Let’s take a closer look at each segment.

**Strawberry Milk **

Strawberry Milk looks like a big winner. But it may be all distribution driven. And that distribution may or may not be justified given the segment’s sales rate. As one of my clients likes to say “If you stick a bag of dog poop on the shelf at Walmart, it’ll move.” If Strawberry Milk is your segment, you might be tempted to declare victory, but I would suggest taking a closer look so you can anticipate what might happen now that the new distribution is in place.

**Maple Milk**

Maple Milk is way down. But we see distribution has declined dramatically as well. Maybe distribution is down because Maple Milk moved so poorly that retailers discontinued some SKUs. But maybe volume is down *because* of the distribution losses. This is a common chicken and egg scenario.

Maybe a segment isn’t performing well, so a few items get discontinued. This, in turn, further accelerates the decline and the segment looks worse than ever. Therefore, it’s important to look at how the remaining items are moving. Maybe they’re all very strong players. If so, you want to point that out to the retailer before the distribution driven volume performance further undermines the segment.

**Vanilla Milk **

The Vanilla Milk trends are either price or mix driven. Average $/unit is down significantly, leading to a gain in units. But dollars haven’t grown. To understand what’s going on, you have to dig down to the item level—segment level is just too broad. After all, average $/unit can change even if the shelf price doesn’t move. (For more on this, check out this post and this one from Robin Simon’s series on price.)

The bottom line for Vanilla Milk: start your analysis by investigating price for this segment. Distribution looks pretty stable.

**Chocolate Milk**

Chocolate Milk is the segment where “why’s” are least likely to be answered by syndicated data. Dollars and units are both down despite increased distribution and lower average $/unit. You might find some clues by analyzing lower levels in the product hierarchy (e.g. brand, promoted product group, sub-type). But it’s also possible there’s something going on with consumer demand or a change in marketing. All these trends are counter intuitive and point to a “something else” driver.

**Sales Trend Analysis Summary**

That’s a quick overview of my thought process for the first step in this analysis. To summarize, if I saw big changes in price and stable distribution, I’d focus my analysis on price. And if I saw big changes in distribution and stable price, then I’d focus on distribution.

The next step is to dig further into the hypotheses I formed. It’s easy to get overwhelmed or run down rabbit holes. By formulating some guiding hypotheses first, you can focus your efforts most efficiently.

This approach is also great for illustrating big picture trends in presentations to retailers. If I’ve found that distribution changes are the key to understanding volume gains (or losses), a chart like the one above can be a great way to kick off your presentation and lead to more detailed findings.

For a more advanced lesson on volume drivers, check out this series of posts where Robin walks you through her step by step approach to decomposing volume trends.

The post Enhancing Your Analysis with TDP, Part 2: Explaining Sales Trends appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.

TDP (a.k.a. Total Distribution Points) is my favorite syndicated data measure. Why? Because distribution is fundamental to any business question. Therefore, it’s a great place to start when looking for answers. And TDP is the broadest, most flexible way to examine this crucial dimension.

If you are looking at individual UPCs, you don’t need TDP – it’s the same as % ACV. But any time you’re working above the UPC level, TDP will help you understand the depth *and* breadth of distribution across a group of products. For a group of products, % ACV just addresses breadth (the distribution for at least one product in the group).

In a series of three posts, I’m going to illustrate three ways to use TDP in your analysis: velocity, sales trends and sales productivity. To get the most from these posts, you’ll need to understand the basics of TDP and the concept of velocity.

In this post, I’m going to illustrate my analysis with a fictional yogurt category. Caution: This is fake data! Do not use this in your yogurt business plan! (Actually, it’s real data—but not for yogurt.)

**The Most Common Use of TDP: Velocity Denominator**

The simplest, most common use of TDP is as a component of velocity. Both Nielsen and IRI (regardless of the form in which you get the data or the tool you use) provide two velocity measures:

- Sales per point of distribution
- Sales per million dollars of ACV.

(If I’ve lost you, allow me to again point you to this velocity primer.)

Conceptually, velocity can be expressed as:

**Sales ÷ Distribution**

However, if your group of products has distribution of 100% (or close to it), basic velocity measures become meaningless. Allow me to demonstrate:

In the table above, I’ve indexed the values for each segment to the average segment. Indexed Units, indexed Units per Point of Distribution, and indexed Units per $MM ACV are all identical. The two classic velocity measures (Units Per Point and Units Per Million) add nothing to our understanding of the differences across segments. Because % ACV for the group of products is 100%, these measures of velocity provide no more differentiation than total units.

Now, let’s use TDP in our analysis instead of % ACV:

Units per TDP gives a very different picture of the strength of each segment. “Fruit-on-the-Bottom Yogurt” and “Drinkable Yogurt” generate about the same number of units. But Drinkable Yogurt does it with a fraction of the variety (and presumably shelf space since the two generally correlate).

Technical side note: Some databases will have TDP measures. Some will not. If you don’t have TDP measures, you can calculate them. To get TDP, sum item level % ACV and multiply by 100 (because TDP is not a %). Then sum up units for the same group of products. Then manually calculate Units per TDP.
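That manual calculation can be sketched like this (Python; the item-level figures are made up for illustration, with % ACV expressed in percentage points – if your export stores fractions like 0.95, sum them and multiply by 100 as noted above):

```python
# Manual TDP and Units-per-TDP for a small product group (illustrative data).
item_acv = {"UPC 1": 95, "UPC 2": 80, "UPC 3": 60}       # % ACV points per item
item_units = {"UPC 1": 40_000, "UPC 2": 25_000, "UPC 3": 10_000}

tdp = sum(item_acv.values())           # 95 + 80 + 60 = 235 TDPs
units = sum(item_units.values())       # 75,000 units for the same group
units_per_tdp = units / tdp            # manual velocity calculation
print(tdp, units, round(units_per_tdp))  # 235 75000 319
```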

Now suppose you want to compare across markets. You need to factor in retailer/market ACV. Why? Because Units Per Point of TDP will naturally be higher for a large store than a small store.

So how do you work TDP into your Units Per Million number?

I solve this dilemma by multiplying TDP by total market ACV and then use that as the denominator in my velocity calculation. “Total market ACV” means sales across all products, not just the ones you are analyzing. If you are confused by this concept, learn more about ACV here.

See the table below for an example of this calculation. I’ve run it for two different (pretend) retailers so you can compare the results (shaded in blue).

A couple of notes about this chart:

- You’ll see that my homemade measure doesn’t result in an intuitive number. That’s always a problem with “per million” velocity calculations. In an upcoming post, I’ll give some suggestions for how to make non-intuitive measures more user friendly. But for now, let’s just live with it and look at the relative numbers.
- Some databases will have a measure that is, in some cases, comparable to my homemade measure. In Nielsen Answers it’s called “Units / $MM ACV / Item”. IRI may have something similar – email me or post a comment if you have the IRI measure name. *This measure will be comparable to my measure only if % ACV for the group of products is 100% or close to 100%.* Why? I think that’s a whole ’nother post. For now, take my word for it.
- If you need to calculate my measure, you may also need to calculate Total Market ACV. To do that, pick a product or product group with 100% distribution. Then divide Sales by Sales Per Million. For example, using the first line of Table 1: 1,120,278 / 350 = $3,201 MM. I converted this to billions in Table 3, just to make the numbers tidier ($3.2 billion). You get approximately the same market ACV number no matter which product line you use, if it’s the same market and % ACV = 100%. I’ve written a whole post on this calculation if you want to go through it step by step.
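Here’s a sketch of backing out Total Market ACV and then using TDP × market ACV as a velocity denominator (Python; the sales and sales-per-million figures come from the discussion above, while the segment TDP and unit figures are placeholders I chose, not values from the tables):

```python
# Back out Total Market ACV from a product line with 100% distribution.
sales_units = 1_120_278      # units for the 100%-ACV product line
units_per_mm_acv = 350       # its Units per $MM ACV
market_acv_mm = sales_units / units_per_mm_acv   # ~ $3,201 MM ~ $3.2 billion

# Then use TDP x market ACV as the velocity denominator for another group.
segment_units = 900_000      # placeholder segment volume
segment_tdp = 320            # placeholder segment TDP
# Dividing by 1000 expresses the denominator in $B to tame the scale.
velocity = segment_units / (segment_tdp * market_acv_mm / 1000)
print(round(market_acv_mm), round(velocity))  # 3201 879
```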

So, what does this table tell us about yogurt? Here’s what I see:

- **Retailer ACV:** Foodville Markets sells a lot more yogurt than Shop-A-Lot, but Foodville is more than twice the size of Shop-A-Lot, so it had better sell more!
- **% ACV:** % ACV for the segments is 100% across both retailers (which means they both sell all the segments in all their stores) so we need to go to TDP to understand more about relative segment performance.
- **% TDP:** TDPs are distributed across segments fairly evenly at the two retailers. There are some differences but they are subtle (we’ll get back to that in a moment).
- **Units per (TDP * ACV):** When we incorporate TDPs, we see that yogurt velocity is significantly lower at Foodville ($90 vs. $130 for Shop-A-Lot). The segments with the greatest discrepancies are “Fruit-On-The-Bottom” and “Whipped.”

The Fruit-on-the-Bottom differences deserve more scrutiny because that’s the second biggest segment. Going back to those subtle % TDP differences, we see that this value is lower for Foodville, so the retailer may be missing some crucial variety. Or perhaps the price is off. Or the demographics of Shop-A-Lot are better for this segment. At this point, we don’t know for sure.

And this is how you work the analysis. Basically, you generate a bunch of hypotheses and then dive deeper into the segment to see which ones hold. Ultimately, your objective is to come up with recommendations for the retailer.

We can apply the same pattern of thinking to “Whipped Yogurt,” which is also performing poorly. Even though it’s a much smaller segment, it could be very high priority for the manufacturer or the retailer (maybe it’s the high growth segment or has higher margins).

Another hypothesis to investigate is whether Foodville has too much variety in the yogurt category. Given its lower velocity, the solution might be to pull weaker items out of each segment. Ultimately, the decision will depend on the retailer’s goals and how this category compares to others. My sample data provides no definitive answers because we lack context. But regardless, this example illustrates the power of bringing TDP into the analysis.

You can also apply this type of analysis at the brand level or for promoted product groups. Basically, it’s a useful tool in any situation where you have groups of products and want to get a big-picture view of distribution across the entire group.

How do you use TDP? Share your analysis tips and tricks below.

The post Using TDP to Enhance Your Analysis, Part 1: Velocity appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.

This is the sixth and final post in my series on quantifying the impact of business drivers on sales volume.

The first post in the series explains more about when you would want to do an analysis like this. A volume decomposition analysis allows you to explain *why* your volume changed by allocating the total change in volume to changes in key business drivers. There are 4 posts that explain how to analyze the impact of changes in distribution, pricing, trade promotions and competition. Those are the 4 drivers that are readily available in your IRI/Nielsen database.

This post focuses on how to calculate the portion of volume change due to everything else besides those 4 drivers, and it also wraps up the 6-part series. In this example I’ll essentially back into the magnitude of the *total* of all the other drivers not already accounted for. Although for some of the “all other” drivers it is possible to obtain data outside of IRI/Nielsen and estimate an elasticity, I won’t calculate the volume impact of each of those drivers individually in this post.

As a reminder, volume for Magnificent Muffins increased by +2.5% or 4,556,679 pounds for the year ending in December 2015 vs. the same period in the previous year. (Pounds is the equivalized unit, or EQ, for this category.) So we are trying to explain that change by allocating the appropriate amounts to various business drivers. The 4 posts mentioned above show how to calculate the expected absolute volume impact of each driver and also how much of the +2.5% increase is due to each driver.

The sum of the expected volume impact from all drivers must be equal to the total change in volume. In this case, the sum of the drivers we’ve analyzed is just under -2.8 million pounds:

5,436,887 – 7,538,185 – 66,251 + 538,866 + 505,033 – 113,631 – 1,562,036 = -2,799,317

So, if the total volume change is +4,556,679 and the drivers we’ve already analyzed account for -2,799,317, then other things that happened had to account for +7,355,996:

And you can calculate the Due-To % Change for All Other Drivers in a similar way. The sum of the Due-To % Chg numbers must be equal to the % Change vs. Year Ago for volume, in this case +2.5%. The sum of the Due-To % Change for the analyzed drivers is -1.5% (= +3.0% – 4.1% – 0.0% + 0.3% + 0.3% – 0.1% – 0.9%).
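The residual calculation is simple arithmetic; here it is as a sketch (Python, using the seven driver impacts summed above – individual driver labels aren’t broken out here):

```python
# "Everything else" residual: total change minus the sum of analyzed drivers.
total_change = 4_556_679  # EQ pounds, +2.5% vs. year ago

# Expected volume impacts (EQ pounds) of the seven analyzed drivers,
# in the order they were summed in the article.
analyzed = [5_436_887, -7_538_185, -66_251, 538_866, 505_033, -113_631, -1_562_036]

analyzed_total = sum(analyzed)              # -2,799,317
all_other = total_change - analyzed_total   # impact of All Other Drivers
print(analyzed_total, all_other)  # -2799317 7355996
```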

So our completed volume decomposition looks like this:

OK, so maybe this was not the best example to use since the % change due-to All Other Drivers (sometimes called the “Unexplained”) is bigger than the total % change in volume (+4.0% vs. 2.5%)! But this is real and gives me an opportunity to talk about what might all those other drivers be (or not be). The chart below summarizes many of the other things that cause volume to change but are not accounted for by facts available in your IRI/Nielsen database.

Some of these other drivers are specific to your brand or to competitors; others are more general in nature and affect categories and brands across the store and possibly across the entire economy:

- **Advertising** can be more traditional media like TV, radio, print and out-of-home, but also includes digital.
- **Social media** (Facebook, Instagram, Pinterest, online reviews, etc.) has become very important for many brands.
- **Shelving** measures like linear feet, shelf location and facings are typically not available in regular IRI/Nielsen databases.
- **Consumer promotion** and many shopper marketing programs also drive volume. These include things like FSIs, instant coupon machines, digital coupons, in-store sampling, etc.
- **Consumer trends** – examples are an increased desire for convenience or the increased popularity of gluten-free items.
- **Economic variables** like disposable income or inflation can often impact spending on entire categories.
- **Weather** – unusually hot or cold weather can increase or decrease your sales, depending on the seasonality of your product.
- **Regulatory changes** – when there was a change in the legal drinking age in some states, sales of beer and wine adjusted accordingly.
- **One-time events** (like a black-out, general transportation strike, bad crop year, etc.) can impact many industries.

For Magnificent Muffins, the 4% volume increase from All Other Drivers was due to a combination of more/better advertising, the introduction of a Facebook page, and a very successful sampling program. They also benefitted from a competitor having supply problems for a few months during the year, which I chose not to include in the Competition example for simplicity reasons.

And just to tie this all together visually, here is a waterfall chart which is often used to display the results of a volume decomposition:

The post Volume Decomposition, Part 6 of 6: Impact of All Other Drivers (aka “Everything Else”) appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.

This is the fifth in a series of posts on quantifying the impact of business drivers on sales volume. Please review these posts first to get more context:

__Part 1__ – Overview of this very useful analytical technique that helps answer the question *Why did our volume change?*

__Part 2__ – Impact of Distribution

__Part 3__ – Impact of Pricing

__Part 4__ – Impact of Trade

And then the final post in the series, when you finish reading this one: Part 6 – Impact of Everything Else

This post focuses on quantifying the impact of Competition. This business driver is probably the one where judgement comes into play the most, since most people will need to make some educated guesses. IMPORTANT: Of all the business drivers, this is the one that relies more on “art” than “science.” You may want to explicitly note for your audience that the amount of volume change attributed to competition should be used with caution.

In this case study, we are explaining the 2.5% volume change for Magnificent Muffins, shown in the first line of the table below. I’ll walk through how to determine what the values are in the last 2 columns of the table based on a change in volume of Magnificent Muffins’ key competitor. I am showing just one competitor but you may want to include more than one, especially if your brand has a relatively low share in the category. You can see that in this example, the annual volume of Competitor X is about 2/3 the size of Magnificent Muffins volume (128MM vs. 187MM). More on this a little later.

The key concept here is assuming that some portion of the increase in volume for Competitor X came out of Magnificent Muffins. There is usually some interaction between most brands within a category – very few households only ever purchase one brand in a category over the course of a whole year. The __elasticity__ for competition is always negative since we would expect our volume to go down when competitive volume goes up.

Because we are looking backwards and explaining what has already happened, the measure I’ll use for competition is the actual change in volume for Competitor X. That volume change is up +2,587,622 (+2.0%). Here’s the relevant data from the table above:

We see that volume sales are up +2.5% for Magnificent Muffins but also up +2.0% for Competitor X. As mentioned above, when competitive volume increases you would expect Magnificent Muffins sales to decrease. So now the question is: *How much of the 2.5-million-pound-plus increase in Competitor X volume came out of Magnificent Muffins volume?*

**Estimating Competition Elasticity**

The best way to know how much of the volume increase for Competitor X came from sales that would have gone to Magnificent Muffins is by conducting a source of volume analysis based on shopper panel data. (See __this post__ for an overview of how panel data is different than scanner or POS data which is what we’re using for the volume decomposition.) Most small- and medium-size companies (and even some large companies) do not have easy access to (or the budget for) a source of volume analysis so here are some things to think about when coming up with your competition elasticity.

- If your brand is more unique and offers something not available from the competition then you are less likely to be impacted by a change in competitive volume. On the other hand, if consumers view all brands as pretty much the same and interchangeable then an increase in competitive volume is more likely to come from your brand and a decrease in competitive volume is more likely to benefit your brand.
- How loyal your buyers are is related to the previous point. If they are very loyal to you, then an increase in competitive volume is less likely to come from your brand. If there is a lot of switching between brands, then a gain for the competition is more likely to come from your brand (and other brands).
- One of the advantages to being the biggest player is that you tend to be more insulated from the competition. Unfortunately, if you are a small brand your volume can be highly impacted by a larger competitor’s promotions and/or advertising.
- Lastly, think about what is happening to the overall category volume. If a competitor’s volume is up, is that additional volume more likely to be incremental to the category or come from other brands? The term “expandable consumption” means that if there is more of a product in the house, consumers will use more. Two classic examples at opposite ends of the spectrum here are snack foods and laundry detergent. If there are snack foods in the house, people are likely to eat more of them, and if they are on sale, people buy more. Laundry detergent, however, has a pretty fixed usage rate. You won’t use more just because it’s in the house or on sale. (You may buy more when it’s on sale but then you’ll just postpone a future purchase.) If a category lends itself to expandable consumption then more of a competitor’s volume increase may be growing the category and not stealing from other brands.

In the absence of a source of volume analysis (the best way to do this, mentioned above), we will assume that Magnificent Muffins lost its fair share of volume to Competitor X. (Regular readers of the CPG Data Tip Sheet may remember some previous posts on fair share. You’ll see that the term can be used in various ways – __fair share of distribution__, __fair share gap analysis__.)

*Fair Share of Volume*

We need to come up with a reasonable estimate of how much of the 2,587,622 additional Competitor X volume came out of Magnificent Muffins. It’s obviously somewhere between none of it and all of it! If these were the only two brands in the category *and* the category volume did not increase over this same period, then all of that increase came out of Magnificent Muffins. Of course that scenario is highly unlikely! Let’s see what the overall competitive environment is for the category and between Magnificent Muffins and Competitor X. Here is the situation:

As you can clearly see in the pie chart above, Magnificent Muffins is the leading brand with a volume share of 43%. Competitor X is the #2 player with a 30% share and then Competitors Y and Z have smaller shares than that. In addition, there are 10 other brands that make up the remaining 12% of the category. Based on this competitive situation, Competitor X is really the only one that is likely to have a significant impact on Magnificent Muffins so that’s why it’s the only one included in the volume decomposition.

If all brands lose their fair share of volume as Competitor X gains volume, then you need to recalculate everybody’s share *taking out* Competitor X. Magnificent Muffins’ fair share of Competitor X’s volume gain is 62%. See the calculation below.
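The re-based share math can be sketched in a few lines of Python. Note the shares for Competitor Y, Competitor Z, and "All Other" below are illustrative placeholders: the post only specifies Magnificent Muffins at 43%, Competitor X at 30%, and 12% for the 10 other brands.

```python
# Fair share of volume, excluding Competitor X.
# Competitor Y and Z shares are illustrative guesses; the post only says
# they are smaller than Competitor X's 30%.
shares = {
    "Magnificent Muffins": 0.43,
    "Competitor X": 0.30,
    "Competitor Y": 0.09,
    "Competitor Z": 0.06,
    "All Other": 0.12,
}

# Remove Competitor X and re-base everyone else to 100%.
remaining = {brand: s for brand, s in shares.items() if brand != "Competitor X"}
base = sum(remaining.values())  # 0.70
fair_share = {brand: s / base for brand, s in remaining.items()}

print(round(fair_share["Magnificent Muffins"], 2))  # → 0.61
```

With these rounded inputs, Magnificent Muffins’ fair share comes out to 43 / 70 ≈ 61%; the 62% used in the post presumably reflects unrounded share data.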

Fair share should be the starting point for your competition assumption. Fair share represents the case where:

a. all of the competitor’s increase comes from other brands and none of it grows the category (or, for a decline, all of the competitor’s decrease goes to other brands and none of it is consumers simply buying less of the category)

b. consumers of the competitive brand think of all the brands as similar to each other and have no strong preference between them.

Once you calculate fair share (as described above), you should then adjust that estimate based on the factors for low/high competitive impact discussed earlier.

*Calculating the Impact of Competitor X’s Volume Change on Magnificent Muffins Volume*

To calculate the impact of Competitor X on volume, follow the numbered steps for that row in the following table:

The first 4 data columns are facts that are available in most IRI/Nielsen databases or can be easily calculated from what is available:

- Year ago = value during the same period a year ago
- Current = value in current year
- Abs Chg v. YA = absolute change vs. year ago = Current – Year Ago
- % chg vs. YA = % change vs. year ago = (Current – Year ago) / Year Ago = Abs Chg v. YA / Year Ago
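As a quick sketch of those change-vs-year-ago columns in Python: the year-ago figure below comes from the post, but the current-period value is a hypothetical number chosen to produce roughly the +2.5% growth mentioned above.

```python
# Change-vs-year-ago columns for Magnificent Muffins volume.
year_ago = 182_754_450   # year-ago EQ volume (from the post)
current = 187_323_311    # hypothetical current-period value (~ +2.5%)

abs_chg_vs_ya = current - year_ago
pct_chg_vs_ya = abs_chg_vs_ya / year_ago

print(abs_chg_vs_ya)                  # → 4568861
print(round(pct_chg_vs_ya * 100, 1))  # → 2.5
```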

The calculations in the last two columns need to happen in this order:

- 5. Expected Impact on Volume = absolute change in Competitor X EQ volume * Magnificent Muffins fair share * -1 = 2,519,413 * 0.62 * -1 = **-1,562,036**
- 6. Due-To % Change = Expected Impact / Year Ago EQ Volume for Magnificent Muffins = -1,562,036 / 182,754,450 = **-0.9%**

So a +2.0% increase in Competitor X volume resulted in a -0.9% decline in Magnificent Muffins volume.
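Steps 5 and 6 reduce to two lines of arithmetic. A minimal sketch in Python, using the figures from the post:

```python
# Inputs from the post.
comp_x_abs_chg = 2_519_413   # Competitor X EQ volume change vs. year ago
mm_fair_share = 0.62         # Magnificent Muffins fair share (ex-Competitor X)
mm_year_ago = 182_754_450    # Magnificent Muffins year-ago EQ volume

# Step 5: expected impact on Magnificent Muffins volume. Multiply by -1
# because a competitive gain is assumed to hurt Magnificent Muffins.
expected_impact = comp_x_abs_chg * mm_fair_share * -1

# Step 6: express that impact as a due-to % change vs. year ago.
due_to_pct_chg = expected_impact / mm_year_ago

print(round(expected_impact))   # → -1562036
print(f"{due_to_pct_chg:.1%}")  # → -0.9%
```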

The final post in this series will address the remaining row in the table – All Other Drivers, and show a good way to visually summarize the volume decomposition.

The post Volume Decomposition, Part 5 of 6: Impact of Competition appeared first on CPG Data Tip Sheet. Copyright © 2014 CPG Data Insights.
