<?xml version='1.0' encoding='UTF-8'?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:blogger="http://schemas.google.com/blogger/2008" xmlns:georss="http://www.georss.org/georss" xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr="http://purl.org/syndication/thread/1.0" version="2.0"><channel><atom:id>tag:blogger.com,1999:blog-2201077719031320942</atom:id><lastBuildDate>Fri, 08 Nov 2024 14:43:04 +0000</lastBuildDate><category>customer experience</category><category>library evaluation</category><category>evaluation report</category><category>experience economy</category><category>museum evaluation</category><category>frameworks</category><category>IMLS</category><category>all our ideas</category><category>analyze interview data</category><category>analyze interview transcripts</category><category>analyze text</category><category>card sort</category><category>customer satisfaction</category><category>customer service</category><category>evaluating staff</category><category>evaluation rubrics</category><category>evaluation techniques</category><category>evaluation tools</category><category>focus groups</category><category>internet access</category><category>interviews</category><category>library evaluation techniques</category><category>library summer reading programs</category><category>measuring outcomes</category><category>net promoter score</category><category>online visitor surveys</category><category>public libraries</category><category>qualitative data analysis</category><category>research tools</category><category>rubrics</category><category>visitor feedback</category><title>Integrated Evaluation</title><description></description><link>http://integratedevaluation.blogspot.com/</link><managingEditor>noreply@blogger.com (Erin 
Gong)</managingEditor><generator>Blogger</generator><openSearch:totalResults>11</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-60967522987591192</guid><pubDate>Mon, 23 Apr 2012 15:06:00 +0000</pubDate><atom:updated>2012-04-23T08:06:08.235-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">evaluating staff</category><category domain="http://www.blogger.com/atom/ns#">evaluation rubrics</category><category domain="http://www.blogger.com/atom/ns#">measuring outcomes</category><category domain="http://www.blogger.com/atom/ns#">public libraries</category><category domain="http://www.blogger.com/atom/ns#">rubrics</category><title>Evaluation Rubrics for Measuring Staff Skills and Behaviors</title><description>I&#39;ve had to create several rubrics for managers to use to measure a set of staff skills and behaviors.&lt;br /&gt;
&lt;br /&gt;
Here&#39;s the setup. A library I am working with has defined several outcomes they want to achieve for staff - staff will demonstrate such and such a behavior related to excellent customer service, or staff will know how to do such and such a task related to technology skills. We&#39;ve decided to measure these outcomes using rubrics that managers will fill out for each staff member (think of how the SAT grades writing on a 1 to 6 scale). Because the library is trying to develop new skills and behaviors in staff, the purpose of the rubrics is not to serve as a performance review that punishes or rewards staff, but to identify areas where further training or focus is needed.&lt;br /&gt;
&lt;br /&gt;
Everyone at the library is busy and has too much on their plate as it is. So we wanted a measurement that triangulates three qualities: quick, painless, and accurate. Rubrics seemed like an interesting way to go.
&lt;br /&gt;
&lt;br /&gt;
I did some investigating and started following the standard format that rubrics take. (By the way, I found this website to be a great resource for baseline info on rubrics: &lt;a href=&quot;http://www.carla.umn.edu/assessment/vac/evaluation/p_7.html&quot;&gt;http://www.carla.umn.edu/assessment/vac/evaluation/p_7.html&lt;/a&gt;.) Essentially, you end up with a scoring grid with two axes. On the vertical axis you have the different categories that behavior is measured against. If the overall outcome has to do with customer service, then the categories might be &quot;attitude, accessibility, accuracy&quot; (brief nod to the alliteration). On the horizontal axis, you have the scoring levels. There are often 4 or 6 levels (an even number, to avoid the tendency to put everyone in the center), for example &quot;exemplary, superior, very good, needs work&quot;.
&lt;br /&gt;
&lt;br /&gt;
Looks something like this:
&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiW0dZR6bl-JaWhAyRfuIzAvQ7ooh2YQcdkXEOurPLp9VI8FQU6Ut1kP34mgUwDl79yiVSkcvOxlLQBZo9NR8GihuLURz7QWzaScTYvU-dnyr5vtktTQlA5ffYXaY-crkAglm4pbAqNpcRq/s1600/download.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;212&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiW0dZR6bl-JaWhAyRfuIzAvQ7ooh2YQcdkXEOurPLp9VI8FQU6Ut1kP34mgUwDl79yiVSkcvOxlLQBZo9NR8GihuLURz7QWzaScTYvU-dnyr5vtktTQlA5ffYXaY-crkAglm4pbAqNpcRq/s320/download.png&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;
The final step is to fill in each cell in your table with a description of what the outcome would look like for each category at each level. The problem is, this leaves you with a pretty dense table of text. If you have three categories and four levels, that&#39;s 12 paragraphs of text that someone has to read through and take a measurement on.
&lt;br /&gt;
&lt;br /&gt;
Now it looks something like this:
&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJz90CxHXSosklHYgV0Uo0kPW4Q5n9TwpHYxwSUdhSFLQjV5teUeXBuLHgdkXl2s22JouGPrCUbqO3v6K2RFPfA8YLpv0Wiw0reuQQb8DZ_KzBGHr5Z2h0CwF_PoTZ6Z7kV-710MKTCoBu/s1600/download2&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;192&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJz90CxHXSosklHYgV0Uo0kPW4Q5n9TwpHYxwSUdhSFLQjV5teUeXBuLHgdkXl2s22JouGPrCUbqO3v6K2RFPfA8YLpv0Wiw0reuQQb8DZ_KzBGHr5Z2h0CwF_PoTZ6Z7kV-710MKTCoBu/s320/download2&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;
Not the quick and painless solution we were looking for.
&lt;br /&gt;
&lt;br /&gt;
After wrestling with this for a while, here&#39;s the solution I came up with. Instead of describing every level for every category, I preface the table with a general description of each performance level. Then the table describes only the ideal performance for each category.
&lt;br /&gt;
&lt;br /&gt;

So managers start off reading something like this: 
&lt;br /&gt;
&lt;br /&gt;

4 – Exemplary. Matches the Ideal perfectly. You would describe every characteristic with words like “always, all, no errors, comprehensive”. 
&lt;br /&gt;
3 – Excellent. A pretty close match to the Ideal, but you can think of a few exceptions. You would describe some characteristics with words like “usually, almost all, very few errors, broad”, even if other characteristics are at a 4 level.
&lt;br /&gt;
2 – Acceptable. Matches the Ideal in many respects, but there are definitely areas for improvement. You would describe some characteristics with words like “often, many, few errors, somewhat limited”, even if other characteristics are at a 4 or 3 level.
&lt;br /&gt;
1 – Not there yet. Some matches with the Ideal, but many areas where improvement is needed. You would describe some characteristics with words like “sometimes, some, some errors, limited”.
&lt;br /&gt;
&lt;br /&gt;

Then they look at the table of idealized characteristics, and jot down their ranking, which looks something like this:
&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgq8LaKykU_yNOjM5SL4jnEWenCT3JwZsIdGC_ilqpZRGQ9oBEAlPG31FTt7NsITkF53EN8gew5j3rg65zzATCPmThjCFdl63tvxU1ZIIoeTKUwTiaFtRn0BNyJqaNIxZ0_9gZCafxvb3Yq/s1600/download3&quot; imageanchor=&quot;1&quot; style=&quot;margin-left:1em; margin-right:1em&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;180&quot; width=&quot;320&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgq8LaKykU_yNOjM5SL4jnEWenCT3JwZsIdGC_ilqpZRGQ9oBEAlPG31FTt7NsITkF53EN8gew5j3rg65zzATCPmThjCFdl63tvxU1ZIIoeTKUwTiaFtRn0BNyJqaNIxZ0_9gZCafxvb3Yq/s320/download3&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;

The nice thing about this is that once they read through that initial description of performance levels, they can fill out any number of rubrics for various outcomes and know exactly what the scoring criteria are, without having to read something new each time. Triangulation of quick, painless, and accurate. Check!
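If the completed rubrics get collected centrally, a few lines of code can surface which categories need the most training attention. This is only a minimal sketch: the category names and the 1-4 scores below are invented for illustration, not the library's actual data.

```python
# Hypothetical manager ratings (1-4 scale) for four staff members,
# grouped by rubric category. Numbers are invented for illustration.
scores = {
    "attitude":      [4, 3, 4, 2],
    "accessibility": [3, 2, 2, 3],
    "accuracy":      [4, 4, 3, 4],
}

# Average each category; the lowest averages flag where further
# training or focus is needed (the stated purpose of the rubrics).
averages = {cat: sum(vals) / len(vals) for cat, vals in scores.items()}

for cat, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{cat}: {avg:.2f}")
```

Sorting ascending puts the weakest category first, which is the one to target with training.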
&lt;br /&gt;
&lt;br /&gt;
Note: drawings done using &lt;a href=&quot;http://dabbleboard.com&quot;&gt;http://dabbleboard.com&lt;/a&gt;</description><link>http://integratedevaluation.blogspot.com/2012/04/evaluation-rubrics-for-measuring-staff.html</link><author>noreply@blogger.com (Erin Gong)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiW0dZR6bl-JaWhAyRfuIzAvQ7ooh2YQcdkXEOurPLp9VI8FQU6Ut1kP34mgUwDl79yiVSkcvOxlLQBZo9NR8GihuLURz7QWzaScTYvU-dnyr5vtktTQlA5ffYXaY-crkAglm4pbAqNpcRq/s72-c/download.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-4795281966056885050</guid><pubDate>Wed, 18 Apr 2012 18:52:00 +0000</pubDate><atom:updated>2012-04-18T12:02:47.145-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analyze interview data</category><category domain="http://www.blogger.com/atom/ns#">analyze interview transcripts</category><category domain="http://www.blogger.com/atom/ns#">analyze text</category><category domain="http://www.blogger.com/atom/ns#">qualitative data analysis</category><title>How to analyze interview data</title><description>Every time I conduct structured or semi-structured interviews as part of an evaluation, I feel a bit overwhelmed when I open the folder on my computer with the interview transcripts. How do I take all of this text and turn it into something meaningful?&lt;br /&gt;&lt;br /&gt;I&#39;m still working out different techniques, but I&#39;ve been surprised how much I end up getting out of going through the following simple routine.&lt;br /&gt;&lt;br /&gt;Step 1. I open a new word document and write out main questions that the interview was meant to answer. If I was looking for anything specific (e.g. a story about a frustrating visitor experience or an idea for how the library could be more user-friendly for seniors) then I&#39;ll also write that down. 
&lt;br /&gt;&lt;br /&gt;These become my main section headings for analyzing the data.&lt;br /&gt;&lt;br /&gt;Step 2. I read through each interview and copy/paste sections of the interview under the appropriate section heading. I try to do this thoughtfully but not agonizingly. I usually set a timer to keep myself from getting bogged down in hyper-interpretation. I often set the original interview text in italics once it&#39;s been copy/pasted once. That way I can skip sections that I don&#39;t know what to do with and come back to them later.&lt;br /&gt;&lt;br /&gt;Step 3. I read through my categorized document and start shifting quotes around, as needed. Sometimes I&#39;ll put a quote in 2 different places, but not often. I&#39;ve found that if I do that too much, I end up with way too many categories and subcategories. Keep it simple.&lt;br /&gt;&lt;br /&gt;Step 4. I take some time away from the analysis. A day or two, if possible.&lt;br /&gt;&lt;br /&gt;Step 5. I go through step 3 again.&lt;br /&gt;&lt;br /&gt;Step 6. I look over my data and ask myself, &quot;so what?&quot; This is where the fun interpretation stage comes in. 
Once I get to this point, I&#39;ve found that I&#39;m familiar enough with the data to really be able to question my assumptions and be intellectually honest about whether my assessment is founded in fact or in preconception.</description><link>http://integratedevaluation.blogspot.com/2012/04/how-to-analyze-interview-data.html</link><author>noreply@blogger.com (Erin Gong)</author><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-1092924155426004279</guid><pubDate>Thu, 05 Apr 2012 20:07:00 +0000</pubDate><atom:updated>2012-04-05T14:24:25.010-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">card sort</category><category domain="http://www.blogger.com/atom/ns#">library evaluation</category><category domain="http://www.blogger.com/atom/ns#">library evaluation techniques</category><category domain="http://www.blogger.com/atom/ns#">museum evaluation</category><title>Simple evaluation tool - the card sort</title><description>One of my favorite evaluation tools is the Card Sort. To understand visitors&#39; perspective on something, hand them a stack of cards, each with a 1-2 word description of the topic you have in mind. Ask visitors to pull out the words that best and least describe the topic (limit them to 3-4; many people will want to pull out 8 or 9 cards). Then follow up and ask them why they chose the cards they did.&lt;br /&gt;&lt;br /&gt;When you analyze the data, it&#39;s interesting to tally the numbers for which cards were chosen the most, and which cards were ignored the most. Add to that some really great qualitative information about why people made their selections. Often, you find that people have different interpretations of the words than you, as the researcher, had.&lt;br /&gt;&lt;br /&gt;Here&#39;s an example. Let&#39;s say you want to know how people perceive the library&#39;s current collection of fiction. 
Put together a list of 10-15 words that could possibly describe the collection - and don&#39;t be afraid to include some negative words (good selection, new materials, lots of options, worn out, not relevant to me, etc). The words you choose are important. Keep them simple, so people can process them quickly. But be specific and even a bit daring - that will bring out interesting comments and discussion. Above all, make sure they are relevant to what you want to know about. Test your words with a few people from the organization. Then try the list out on a few patrons before going live with the study.&lt;br /&gt;&lt;br /&gt;As a rule of thumb, when you go live, ask as many people to do the card sort as it takes until you feel like answers are getting redundant. If you must have a number, I would recommend 30 people as a minimum.&lt;br /&gt;&lt;br /&gt;To analyze results, tally up the total of &quot;best&quot; and &quot;worst&quot; hits each word got. Then compare what people said as their reasons why they chose those words. What patterns or themes do you see? One note: the more accurately you transcribe people&#39;s responses to why they chose words, the better your qualitative analysis will be. Resist the temptation to summarize people&#39;s statements when collecting the data. Try to write down what they say as close to word-for-word as possible. It&#39;s tough, but worth it. 
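If you'd rather script the tally than count by hand, the &quot;best&quot;/&quot;worst&quot; totals take only a few lines of Python. A minimal sketch: the card labels and responses below are made up for illustration, not data from a real study.

```python
from collections import Counter

# Each respondent pulled out up to 3-4 "best" and "worst" cards.
# Made-up responses for the fiction-collection example above.
responses = [
    {"best": ["good selection", "new materials"], "worst": ["worn out"]},
    {"best": ["good selection"], "worst": ["worn out", "not relevant to me"]},
    {"best": ["lots of options", "good selection"], "worst": ["worn out"]},
]

best_counts = Counter(card for r in responses for card in r["best"])
worst_counts = Counter(card for r in responses for card in r["worst"])

print(best_counts.most_common())   # most-chosen "best" cards first
print(worst_counts.most_common())  # most-chosen "worst" cards first
```

The counts give you the quantitative half; the word-for-word reasons people gave remain the qualitative half to read against them.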
You don&#39;t want to add your layer of interpretation until all the data is collected.</description><link>http://integratedevaluation.blogspot.com/2012/04/simple-evaluation-tool-card-sort.html</link><author>noreply@blogger.com (Erin Gong)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-2010921676799238217</guid><pubDate>Wed, 27 Jul 2011 00:12:00 +0000</pubDate><atom:updated>2011-09-02T08:24:19.412-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">evaluation report</category><category domain="http://www.blogger.com/atom/ns#">evaluation techniques</category><category domain="http://www.blogger.com/atom/ns#">focus groups</category><category domain="http://www.blogger.com/atom/ns#">interviews</category><category domain="http://www.blogger.com/atom/ns#">museum evaluation</category><title>Data collection is fun - and not just for you</title><description>Here is a somewhat dated evaluation done at the San Jose Children&#39;s Discovery Museum, but I thought their qualitative methods were interesting.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.hfrp.org/out-of-school-time/ost-database-bibliography/database/discovery-youth&quot;&gt;http://www.hfrp.org/out-of-school-time/ost-database-bibliography/database/discovery-youth&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;In the first evaluation, they did focus groups with youth where they divided the youth into teams of 2-3 and had them fill in the answers to 4 open-ended questions. Then they brought all the teams together (about 4 total) and had them discuss.&lt;br /&gt;&lt;br /&gt;In the second evaluation, they did a group activity with program participants. Participants broke out into groups of 4-5. Then they circulated around the room to large pieces of paper with a different question written on each. 
Each group wrote answers on the large sheet of paper for each question, so they were able to react to a previous group&#39;s comments.&lt;br /&gt;&lt;br /&gt;These techniques hit on 2 principles that I think are key to focus groups and interviews. First, make it fun. Vary the questions, get people up and moving, use the 5 senses, present extremes with humor. &lt;br /&gt;&lt;br /&gt;Second, give them something to react to. I find I get much more detailed and colorful responses from people if I present something to them first and get their feedback on it. A list, a board full of written-on sticky notes, a photo, a written description. In both of the examples above, participants were responding to what other participants said. A key to that, I believe, is to have people commit to an opinion on paper first. If you jump into reactions right away, then less vocal opinions get lost.</description><link>http://integratedevaluation.blogspot.com/2011/07/httpwwwhfrporgout-of-school-timeost.html</link><author>noreply@blogger.com (Erin Gong)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-7844525857380630670</guid><pubDate>Wed, 20 Jul 2011 00:19:00 +0000</pubDate><atom:updated>2011-07-19T18:01:40.933-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">customer experience</category><category domain="http://www.blogger.com/atom/ns#">evaluation tools</category><category domain="http://www.blogger.com/atom/ns#">experience economy</category><category domain="http://www.blogger.com/atom/ns#">research tools</category><title>Expanding the Customer Experience - And A Simple Way to Get Started</title><description>Most cultural organizations are focusing on customer experience as a key to long-term success. Visitors who have a good experience come again and again. 
They bring other people, they talk about you, they may even turn from visitors to donors.&lt;br /&gt;&lt;br /&gt;But when we think about customer experience, we often do so with blinders on. Let&#39;s say you&#39;re a history museum. What&#39;s the customer experience? A family walks into the museum, pays admission, visits galleries, maybe interacts with some guides or docents, checks out the gift shop on the way out, and leaves.&lt;br /&gt;&lt;br /&gt;With that perspective, how do we improve customer experience? We make sure visitors can easily find the admission desks, we train employees to be friendly and welcoming, we focus on great content in the galleries, we have fun and interesting items in the gift shop.&lt;br /&gt;&lt;br /&gt;There are 2 problems with this kind of brainstorming. #1: It is focused on the customer experience from the perspective of the institution only. #2 (and following from #1): It leaves out the Why.&lt;br /&gt;&lt;br /&gt;#1. Institution-focused perspective&lt;br /&gt;&lt;br /&gt;Think about the experience of going out for ice cream. What does it look like? You pull up to the shop with your family, head inside, look at the many choices, make your selections, pay, eat and enjoy the ice cream, and leave.&lt;br /&gt;&lt;br /&gt;Okay, take the first one. You pull up to the shop. Hang on a sec. What happened before you pulled up? Well, you had to decide which ice cream shop to go to. What about before that? You had to decide to take the whole family out for ice cream. Maybe it was a choice between ice cream or a different treat. Maybe this is a routine thing you do on Wednesday nights. Maybe you&#39;re celebrating something or someone special. &lt;br /&gt;&lt;br /&gt;Now the other side. What happens after you leave? You worry about sticky hands on the car upholstery. You laugh and joke on the way home. Your kids thank you for the outing. You post photos of the family trip on your blog.&lt;br /&gt;&lt;br /&gt;Lesson? 
There&#39;s a whole lot more to getting ice cream than what happens at the shop.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://integratedevaluation.blogspot.com/2011/07/frameworks-for-experience-economy.html&quot;&gt;As I talked about earlier&lt;/a&gt;, you can look at customer experience as having 5 parts: Entice (when people are thinking they want what you have), Enter (investigating you, calling, looking up online), Engage (deciding to come or to purchase), Exit (concluding the visit or purchase), Extend (thinking about it, talking about it, wanting it, later on). Institutions usually focus on the third, Engage. But your customers&#39; experience has already started and will continue after.&lt;br /&gt;&lt;br /&gt;#2. Getting at Why.&lt;br /&gt;It&#39;s important to recognize this wider range of the customer experience because it will help you think about the why behind your customer engagement. When you think about how people got to you, you start to think about why they came. What situation were they in? What need were they trying to fill? Is it different for different people?&lt;br /&gt;&lt;br /&gt;Asking these questions will help you think about what to change once visitors get to the Engage step. It will also help you think about how you can reach your visitors before they get to that step, and what potential there is for extending beyond the Engage/Exit steps.&lt;br /&gt;&lt;br /&gt;Want to incorporate this thinking in your organization?&lt;br /&gt;Here&#39;s a simple strategy to use to get people thinking in these broader terms of the customer experience. On small cards, write down brief sentences describing a typical experience (getting ice cream, buying golf clubs). Include sentences from each of the 5 aspects of the customer experience. Have some of the cards be positive things (found a parking space!) and some be negative things (couldn&#39;t figure out how to call customer service). 
Shuffle the cards up, and have people sort through the cards to put them in order of what they would do in this experience. Once they do this, draw the analogy to your organization. How would you re-write some of the cards for our organization? Push the card up higher if we&#39;re doing this well. Push it down lower if we&#39;re doing it poorly. What do we need to work on?</description><link>http://integratedevaluation.blogspot.com/2011/07/expanding-customer-experience-and.html</link><author>noreply@blogger.com (Erin Gong)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-8030402792601495774</guid><pubDate>Wed, 13 Jul 2011 19:48:00 +0000</pubDate><atom:updated>2011-07-13T13:23:33.537-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">customer experience</category><category domain="http://www.blogger.com/atom/ns#">experience economy</category><category domain="http://www.blogger.com/atom/ns#">frameworks</category><title>Experience Economy Frameworks (cont)</title><description>Continuing to document the frameworks I learned &lt;a href=&quot;http://integratedevaluation.blogspot.com/2011/07/frameworks-for-experience-economy.html&quot;&gt;while talking to Kathy Macdonald&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight:bold;&quot;&gt;The Values of An Experience&lt;br /&gt;&lt;/span&gt;People value 4 basic things in an experience: education (what did I learn? how am I different?), entertainment (did it make me laugh? did it feel fun and interesting?), aesthetic (was it comfortable? was it beautiful?), and escape (did I lose track of time? 
did I feel like I was somewhere else for a little while?).&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6X5AQ4dHdfAC-Ya1aavH7BDO6y0k7i0od3jyjD2zjNvQxQKgbTfjdBE7jAFHTtAbpLO7G1Dr4eekBFWVZPFErY2feubTrQUxEwce4K2RJ9ME51xbTWMlvCHN4bmsap3lctlbdDecM2ZeB/s1600/Four+parts+of+experience.jpg&quot;&gt;&lt;img style=&quot;display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 250px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6X5AQ4dHdfAC-Ya1aavH7BDO6y0k7i0od3jyjD2zjNvQxQKgbTfjdBE7jAFHTtAbpLO7G1Dr4eekBFWVZPFErY2feubTrQUxEwce4K2RJ9ME51xbTWMlvCHN4bmsap3lctlbdDecM2ZeB/s400/Four+parts+of+experience.jpg&quot; border=&quot;0&quot; alt=&quot;&quot;id=&quot;BLOGGER_PHOTO_ID_5628931196348562914&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight:bold;&quot;&gt;Problem vs Pain&lt;/span&gt;&lt;br /&gt;When an organization brings on a consultant, it&#39;s usually because they&#39;ve identified some problem they want help dealing with. For example, a museum may say that they want to increase visitors coming through the doors on weekends. That&#39;s their problem. But it&#39;s worth digging in to what their pain is. Maybe their pain is that their revenue is steadily decreasing. Or they put a lot of resources into weekend programming and aren&#39;t seeing a return. The pain gets at why the problem is a problem.&lt;br /&gt; &lt;br /&gt;&lt;span style=&quot;font-weight:bold;&quot;&gt;Ways to Change the Experience&lt;/span&gt;&lt;br /&gt;When an organization is looking at changing the customer experience, they have 3 ways to go about it. First, the physical environment: what people see when they come and how it makes them feel. Think colors, textures, furniture, light. Second, the process: how people find their way, what they can (and can&#39;t do). 
Think signage, walkways, admissions or reference desks. Finally, human: the people they interact with and how they interact with them. Think front desk staff, docents, guides.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight:bold;&quot;&gt;The Change Timeline&lt;/span&gt;&lt;br /&gt;When organizations recognize they have a new vision, strategy, or direction to pursue, they often want to get there right away. Practically speaking, change takes time. This framework says 3 years. The yellow scribble at the end of 3 years is the organization&#39;s vision of where they want to be. It&#39;s a scribble because, while it&#39;s got some definition, it&#39;s still going to be vague - it will shift and grow and become defined in this 3-year process.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEGmUuvua2lCMn4J4PtQOl-RPJq3Bgl2GypFvHRSiSTYPjP_xJMx2legHPYagy9ybD2Apl7_8JSiBf9xmlC04H2NT1ZwLPkHyDvoGl9XTZSTVE_riS7mkUVmtqk_khmwiltLCdxvkYpgwz/s1600/timeline+of+change.jpg&quot;&gt;&lt;img style=&quot;display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 250px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEGmUuvua2lCMn4J4PtQOl-RPJq3Bgl2GypFvHRSiSTYPjP_xJMx2legHPYagy9ybD2Apl7_8JSiBf9xmlC04H2NT1ZwLPkHyDvoGl9XTZSTVE_riS7mkUVmtqk_khmwiltLCdxvkYpgwz/s400/timeline+of+change.jpg&quot; border=&quot;0&quot; alt=&quot;&quot;id=&quot;BLOGGER_PHOTO_ID_5628931186822813682&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;At the beginning of the timeline is the Ending. Organizations often start the vision/strategy process by identifying that there are things they want to stop. Things that aren&#39;t working right. Next, they move into Neutral. This is where they are trying new things out (hence the squiggly lines - guess and check, trial and error). 
They expect some failure, they expect some success. Finally they get to the Beginning. They&#39;ve identified a few things that work. They move forward, full speed ahead.&lt;br /&gt;&lt;br /&gt;The check marks at the bottom are checkpoints for the organization. It&#39;s easy to get discouraged and to feel like things are never going to take shape. By making a list of milestones in advance, organizations can point to those as signs that they&#39;re on the right track.</description><link>http://integratedevaluation.blogspot.com/2011/07/experience-economy-frameworks-cont.html</link><author>noreply@blogger.com (Erin Gong)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6X5AQ4dHdfAC-Ya1aavH7BDO6y0k7i0od3jyjD2zjNvQxQKgbTfjdBE7jAFHTtAbpLO7G1Dr4eekBFWVZPFErY2feubTrQUxEwce4K2RJ9ME51xbTWMlvCHN4bmsap3lctlbdDecM2ZeB/s72-c/Four+parts+of+experience.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-6113119973129302905</guid><pubDate>Fri, 01 Jul 2011 19:49:00 +0000</pubDate><atom:updated>2011-07-01T13:00:57.799-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">customer experience</category><category domain="http://www.blogger.com/atom/ns#">experience economy</category><category domain="http://www.blogger.com/atom/ns#">frameworks</category><title>Frameworks for an Experience Economy</title><description>I had a fascinating and inspiring discussion yesterday with &lt;a href=&quot;http://www.macdgroup.com/&quot;&gt;Kathy Macdonald&lt;/a&gt;, and I wanted to capture a few of the takeaways from our conversation here.&lt;br /&gt;&lt;br /&gt;Kathy works with businesses to help them be competitive in the Experience Economy. 
Her work comes from Joseph Pine and James Gilmore&#39;s work in that area, best known by their book &lt;a href=&quot;http://www.amazon.com/Experience-Economy-Theater-Every-Business/dp/0875848192&quot;&gt;The Experience Economy&lt;/a&gt; (an updated version to be released shortly). She is also a master at putting ideas into frameworks that businesses can use for thinking about problems and coming up with solutions - which is something that I absolutely love and strive to do.&lt;br /&gt;&lt;br /&gt;Through the course of our conversation, Kathy shared with me a few interesting frameworks.&lt;br /&gt;&lt;br /&gt;First, the 5 phases of a customer: Entice (or Anticipate), Enter, Engage, Exit, Extend. Most businesses think about their customer relations starting at the Engage phase - a customer walks in the door to buy something. However, for the customer the experience starts way before. Take the analogy of buying a new pair of shoes. The customer experience starts when she realizes her old running shoes are wearing thin or her dress shoes are looking out of date - the customer anticipates the need for shoes (or, from the business side, they entice the customer to consider needing shoes). The customer Enters when she picks up a phone to call a shoe store, or finds them on an online search. She Engages when she drives to the store, walks in the door, looks for and finds shoes. She Exits when she buys the shoes, walks out, drives home, and starts wearing the shoes. Extend comes as she realizes the shoes hold up well (or don&#39;t), fit well (or don&#39;t), or when the store contacts her with a follow up or to give her a coupon or ad.&lt;br /&gt;&lt;br /&gt;Thinking about the whole arc of the customer experience really changes the way a business sees their role and their contact points with a customer. 
It also gives them a sense of what background (or baggage) customers may bring with them once they get to the Engage phase, and what potential there is to Extend.&lt;br /&gt;&lt;br /&gt;More frameworks to come later...</description><link>http://integratedevaluation.blogspot.com/2011/07/frameworks-for-experience-economy.html</link><author>noreply@blogger.com (Erin Gong)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-7095425762109287465</guid><pubDate>Fri, 17 Jun 2011 23:04:00 +0000</pubDate><atom:updated>2011-08-09T08:48:17.144-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">customer experience</category><category domain="http://www.blogger.com/atom/ns#">customer satisfaction</category><category domain="http://www.blogger.com/atom/ns#">customer service</category><category domain="http://www.blogger.com/atom/ns#">net promoter score</category><title>Simple tool - Net Promoter Score</title><description>Here&#39;s a tool that&#39;s been around for a while in the business world: Net Promoter Scores. NPS measures loyalty to a business or organization. It&#39;s a simple measure. Just ask,&lt;br /&gt;&lt;br /&gt;&quot;How likely is it that you would recommend this company to a friend or colleague?&quot;&lt;br /&gt;&lt;br /&gt;with an answer scale from 0 (low) to 10 (high).&lt;br /&gt;&lt;br /&gt;What I find most interesting about NPS is how to interpret answers.&lt;br /&gt;&lt;br /&gt;9-10 = loyal customers (promoters)&lt;br /&gt;7-8 = neutral (generally satisfied, not in love)&lt;br /&gt;6 and under = detractors (spread negative messages)&lt;br /&gt;&lt;br /&gt;To calculate an actual number for your overall NPS, you take Promoters/All Respondents minus Detractors/All Respondents (multiply by 100 to express it as a score between -100 and 100).
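&lt;br /&gt;&lt;br /&gt;As a quick illustration, the calculation can be sketched in a few lines of Python (the scores list here is invented, not real survey data):

```python
# NPS sketch: hypothetical 0-10 answers to the "would you recommend us?" question.
scores = [10, 9, 9, 8, 7, 6, 3, 10, 5, 8]

promoters = sum(1 for s in scores if s >= 9)          # 9-10: promoters
detractors = sum(1 for s in scores if s in range(7))  # 0-6: detractors

# Promoters/All Respondents minus Detractors/All Respondents
nps = (promoters - detractors) / len(scores)
print(nps)  # 0.1, i.e. an NPS of 10 on the -100 to 100 scale
```

The 7-8 neutrals count toward the total number of respondents but not toward either group, which is why they pull the score toward zero.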
&lt;br /&gt;&lt;br /&gt;See an interesting discussion of application of NPS to Zingerman&#39;s: &lt;a href=&quot;http://www.zingtrain.com/articles/zxi-a-new-way-to-measure-service/&quot;&gt;http://www.zingtrain.com/articles/zxi-a-new-way-to-measure-service/&lt;/a&gt;</description><link>http://integratedevaluation.blogspot.com/2011/06/simple-tool-net-promoter-score.html</link><author>noreply@blogger.com (Erin Gong)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-3983007191755051342</guid><pubDate>Tue, 14 Jun 2011 14:42:00 +0000</pubDate><atom:updated>2011-06-14T18:19:21.368-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">evaluation report</category><category domain="http://www.blogger.com/atom/ns#">IMLS</category><category domain="http://www.blogger.com/atom/ns#">internet access</category><category domain="http://www.blogger.com/atom/ns#">library evaluation</category><title>US Impact Study - National Research on the Benefits from Internet Access at Public Libraries</title><description>The University of Washington Information School &lt;a href=&quot;http://cis.washington.edu/usimpact/us-public-library-study.html&quot;&gt;recently released findings from the US IMPACT Public Library Study&lt;/a&gt;. This was a large-scale national study, funded by IMLS and the Bill &amp; Melinda Gates Foundation, looking at internet use at public libraries.&lt;br /&gt;&lt;br /&gt;The data was gathered from an impressive scale of telephone surveys, online surveys via public library computers and case studies/interviews at a few libraries.&lt;br /&gt;&lt;br /&gt;The report lays out broad use statistics and demographics, but combines this with insightful and detailed analysis. 
It&#39;s a thought-provoking read (though time-consuming, unless you opt for the executive summary).&lt;br /&gt;&lt;br /&gt;I&#39;m a little wary of some of their findings since part of their data comes from people who are already library computer users (the online survey). In evaluation-speak, this is called selecting on the dependent variable, which could introduce bias into the results. They did have a substantial sample from the random phone surveys and it looks like they did some creative mathematical weighting to combine the phone and web samples to reduce bias as much as possible. But it&#39;s something to keep in mind.&lt;br /&gt;&lt;br /&gt;A few findings I found particularly interesting:&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhN0DnQb2Z_qOswR9Ibo6qb46dRLew2cOmw6pO0RphTO2KN6mUJXwofGzZA8n0Gs2C4j1qJKNpT7EcYCCxzek34AnA0xVplMc1BSGuHh4vBE0dQ3W37eePxp-ezGKe4lCCAAfjxb6zn5xJu/s1600/Screenshot.png&quot;&gt;&lt;img style=&quot;display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 336px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhN0DnQb2Z_qOswR9Ibo6qb46dRLew2cOmw6pO0RphTO2KN6mUJXwofGzZA8n0Gs2C4j1qJKNpT7EcYCCxzek34AnA0xVplMc1BSGuHh4vBE0dQ3W37eePxp-ezGKe4lCCAAfjxb6zn5xJu/s400/Screenshot.png&quot; border=&quot;0&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5618248557691255410&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Almost 1/3 of Americans used their public library for internet access.&lt;br /&gt;&lt;br /&gt;While there is higher use of library internet connections among people in households living below the poverty line (44%, higher for young adults and seniors), &quot;people of all ages, incomes, races, and levels of education go to the library for Internet access, whether they have a connection at home or not&quot;. 
Libraries are still great agents of democracy - computer access is for everyone, used by everyone.&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2PfivkmE0s6TOMWQuMmvebSrSDNm8xXdvrRf96I71Cklq3rqA4PHPr0DmorPJjxdjbmBU_VWc8xI9Re43w4ZqMdltHmHl5HIH_YEPbMkuDEk9H5Uwlmzu44oRPlyTuvdcR3z9HLJdIFnu/s1600/Screenshot-2.png&quot;&gt;&lt;img style=&quot;display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 271px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2PfivkmE0s6TOMWQuMmvebSrSDNm8xXdvrRf96I71Cklq3rqA4PHPr0DmorPJjxdjbmBU_VWc8xI9Re43w4ZqMdltHmHl5HIH_YEPbMkuDEk9H5Uwlmzu44oRPlyTuvdcR3z9HLJdIFnu/s400/Screenshot-2.png&quot; border=&quot;0&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5618248551965803650&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Internet access = young adult access. Young adults (14-18 year olds) are high library computer users. What a great initial step for libraries to involve this traditionally hard-to-reach group.&lt;br /&gt;&lt;br /&gt;Patrons rely on library computers to take care of everyday routine tasks as well as to take life-changing steps.&lt;br /&gt;&lt;br /&gt;Library computer access differs from other options for computers and wireless (cafes, etc.) because it is truly free (no feeling obligated to buy a drink first), it offers a quiet space for work, and it comes with staff to help navigate all ranges of computer and technology issues.&lt;br /&gt;&lt;br /&gt;Library internet users can be segmented into three groups: power users, who use the library as their sole access point and come almost daily; supplemental users, who use the library internet routinely but have other internet options; and occasional users, who use the library internet in an emergency, during a time of transition, or for the quick occasional task.&lt;br /&gt;&lt;br /&gt;Low-income patrons are less likely to use library internet access overall, but if they do use it, they are more likely to be very frequent users. The same goes for 19-24 year olds. Youth 14-18 are the group most likely to use library internet and are also very frequent users.</description><link>http://integratedevaluation.blogspot.com/2011/06/us-impact-study-national-research-on.html</link><author>noreply@blogger.com (Erin Gong)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhN0DnQb2Z_qOswR9Ibo6qb46dRLew2cOmw6pO0RphTO2KN6mUJXwofGzZA8n0Gs2C4j1qJKNpT7EcYCCxzek34AnA0xVplMc1BSGuHh4vBE0dQ3W37eePxp-ezGKe4lCCAAfjxb6zn5xJu/s72-c/Screenshot.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-7261060364374191148</guid><pubDate>Fri, 03 Jun 2011 00:11:00 +0000</pubDate><atom:updated>2011-06-02T17:40:20.525-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">all our ideas</category><category domain="http://www.blogger.com/atom/ns#">library evaluation</category><category domain="http://www.blogger.com/atom/ns#">museum evaluation</category><category domain="http://www.blogger.com/atom/ns#">online visitor surveys</category><category domain="http://www.blogger.com/atom/ns#">visitor feedback</category><title>New survey resource! 
All Our Ideas</title><description>Check out this nifty new tool for doing a quick survey to rank items: &lt;a href=&quot;http://www.allourideas.org/&quot;&gt;http://www.allourideas.org/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiN1M8KOnvK6kvsQETAjloDH7eikVXr9GZAcvNhbMyP25Jgj3qd4MQMb_CM-UeqHiJ__3tPPNtA_iRoaXrpV3VL8HJGqpg8UkTB9ZqEcF5w-zPHaDx1OQ9F2wnie08yl33qd6cIbW_hok7/s1600/Screenshot.jpg&quot;&gt;&lt;img style=&quot;display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 274px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiN1M8KOnvK6kvsQETAjloDH7eikVXr9GZAcvNhbMyP25Jgj3qd4MQMb_CM-UeqHiJ__3tPPNtA_iRoaXrpV3VL8HJGqpg8UkTB9ZqEcF5w-zPHaDx1OQ9F2wnie08yl33qd6cIbW_hok7/s400/Screenshot.jpg&quot; border=&quot;0&quot; alt=&quot;&quot;id=&quot;BLOGGER_PHOTO_ID_5613786130088901890&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;This is a great resource for institutions like museums and libraries that want to poll visitors on potential new services or programs. Imagine one of those staff meetings where you create a wish list of things you&#39;d like to do for your visitors. This list may look something like:&lt;br /&gt;&lt;br /&gt;- Validate parking&lt;br /&gt;- Provide more food options in vending/snack area&lt;br /&gt;- Have more staff in exhibit areas to give visitors more information about exhibits&lt;br /&gt;- Install more benches, chairs in gallery areas&lt;br /&gt;- Section off part of the library as a &quot;No Shhhh Zone&quot; that people can use for group work&lt;br /&gt;- Offer story time programs on Saturday mornings&lt;br /&gt;&lt;br /&gt;It&#39;s great for staff to generate ideas about how to improve visitor services. 
With &lt;a href=&quot;http://www.allourideas.org/&quot;&gt;All Our Ideas&lt;/a&gt; you can quickly take it to the next step and ask visitors to respond to the wish list. The site takes your list, shows visitors two of the options at a time, and asks them to vote for one over the other (or select &quot;I don&#39;t know&quot;). They repeat this process over and over in a matter of seconds or minutes. The data is aggregated into a ranking of options, along with fun visualizations of the data. &lt;br /&gt;&lt;br /&gt;A great feature of the tool is that respondents can also add their own idea. That idea is then put into the list to be voted on by others. What a fun way to bring fresh ideas to the table - and have immediate visitor feedback on them.&lt;br /&gt;&lt;br /&gt;I will add my cautionary, unsolicited advice about asking visitors for ideas. When you ask visitors for ideas, ask about the things &lt;span style=&quot;font-style:italic;&quot;&gt;they&lt;/span&gt; are experts on, not the things &lt;span style=&quot;font-style:italic;&quot;&gt;you&lt;/span&gt; are an expert on.&lt;br /&gt;&lt;br /&gt;For example, if you ask visitors &quot;What kind of programs do you want us to do?&quot;, you inevitably get answers that are way out of your budget, mission, or capacity. But what do you expect? You&#39;re the experts in program development - you know the profession and the feasibility. 
On the other hand, if you ask parents/caregivers &quot;What new play items would you like to see in the baby area?&quot;, they can fill you in on the latest trends in baby toys that they talk about every week at playgroup.&lt;br /&gt;&lt;br /&gt;Keep visitors talking about what they know about, and you can translate that into exceptional experiences that meet their needs.</description><link>http://integratedevaluation.blogspot.com/2011/06/new-survey-resource-all-our-ideas.html</link><author>noreply@blogger.com (Erin Gong)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiN1M8KOnvK6kvsQETAjloDH7eikVXr9GZAcvNhbMyP25Jgj3qd4MQMb_CM-UeqHiJ__3tPPNtA_iRoaXrpV3VL8HJGqpg8UkTB9ZqEcF5w-zPHaDx1OQ9F2wnie08yl33qd6cIbW_hok7/s72-c/Screenshot.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-2201077719031320942.post-5530324542851249542</guid><pubDate>Wed, 01 Jun 2011 23:39:00 +0000</pubDate><atom:updated>2011-06-01T18:35:06.976-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">evaluation report</category><category domain="http://www.blogger.com/atom/ns#">library evaluation</category><category domain="http://www.blogger.com/atom/ns#">library summer reading programs</category><title>Report of Impacts of Library Summer Reading Program</title><description>Libraries around the country are poised to start their summer reading programs. 
I just ran into this evaluation report, published in June 2010, on the impacts of library summer reading programs:&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.dom.edu/academics/gslis/downloads/DOM_IMLS_book_2010_FINAL_web.pdf&quot;&gt;The Dominican Study: Public Library Summer Reading Programs Close the Reading Gap (http://www.dom.edu/academics/gslis/downloads/DOM_IMLS_book_2010_FINAL_web.pdf)&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;The report was done by Susan Roman, Deborah Carran, and Carole Fiore through Dominican University Graduate School of Library &amp; Information Science. It was funded by an IMLS National Leadership Grant.&lt;br /&gt;&lt;br /&gt;The study was a much-needed follow-up to 30-year-old seminal research on library reading programs and took on the ambitious scope of being a national study, looking at effects of reading programs in several states across the country. The study sample was all 3rd grade students (going into 4th grade) from 11 schools in large and small communities in urban, rural and suburban areas. Schools had to have 50% or more students receiving free/reduced lunch (a standard measure for children living in poverty). Researchers collected data using surveys of students, teachers, public librarians and school librarians as well as results from the Scholastic Reading Inventory administered before and after summer reading programs.&lt;br /&gt;&lt;br /&gt;Because researchers were trying to have a broad, national scale for the study, they were unable to create a carefully controlled environment for the research. The design was causal comparative - there was no control group, students opted into the summer reading programs as they chose, and summer reading programs (as well as the experiences of non-participants) varied.&lt;br /&gt;&lt;br /&gt;Results of the research show some great data and insight into the value of summer reading programs as identified by students, parents, teachers and librarians. 
These groups strongly believe summer reading programs make a difference. Unfortunately, researchers were unable to demonstrate a correlation between participation in the summer reading program and increased scores on the Scholastic Reading Inventory (SRI). While students who participated in the program universally scored higher on the pre- and post-tests than students who didn&#39;t participate, there was no evidence that the program caused further increase in these already-high scores.&lt;br /&gt;&lt;br /&gt;This would have been a real Holy Grail - demonstrated impacts of the kind that funders so often want to see. In fact, results actually showed an increase in scores on the SRI for students who did not participate in the summer reading program. This is a puzzling result, since we usually take it for granted that reading levels drop over the summer months. As the researchers point out, they don&#39;t know what was going on for non-participants - maybe they were involved in alternative reading programs. Without the ability to control the context more, it&#39;s difficult to interpret these results.&lt;br /&gt;&lt;br /&gt;I wonder if the researchers dug into the effects of the summer reading program while controlling for socio-economic status - for example, modeling SRI scores with participation in the program and socio-economic status as independent variables, along with an interaction term. I&#39;m imagining a regression that looks like:&lt;br /&gt;&lt;br /&gt;SRI score = B0 + B1(Program Participation) + B2(Socio-economic status) + B3(Program Participation * Socio-economic status) + error&lt;br /&gt;&lt;br /&gt;My hypothesis is that perhaps there is a differential effect of summer reading programs - they are nice but not necessary for students from high-income families but they are invaluable resources for students from low-income families. 
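&lt;br /&gt;&lt;br /&gt;To make the idea concrete, here is a sketch of that interaction model on simulated data. Everything in it - the variable names, the effect sizes, the use of statsmodels - is an illustrative assumption of mine, not the study&#39;s actual analysis:

```python
# Hypothetical sketch of the interaction regression described above.
# Data and effect sizes are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
participation = rng.integers(0, 2, n)  # 1 = joined the summer reading program
low_income = rng.integers(0, 2, n)     # 1 = free/reduced-lunch household
# Build in a differential effect: the program helps low-income students more.
sri = (600 + 40 * participation - 80 * low_income
       + 50 * participation * low_income + rng.normal(0, 30, n))
df = pd.DataFrame({"sri": sri, "participation": participation,
                   "low_income": low_income})

# "participation * low_income" expands to both main effects plus the
# interaction term, mirroring the B1/B2/B3 equation in the post.
model = smf.ols("sri ~ participation * low_income", data=df).fit()
print(model.params)
```

A large, positive coefficient on the interaction term (participation:low_income) is what would signal that the program matters most for low-income students.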
This would certainly support the idea of libraries as democratizing agents in communities.&lt;br /&gt;&lt;br /&gt;In the end, the researchers call for a more focused, controlled study of summer reading programs to drill down to quantifiable impacts. I am intrigued by their work so far and hope that it goes further.</description><link>http://integratedevaluation.blogspot.com/2011/06/report-of-impacts-of-library-summer.html</link><author>noreply@blogger.com (Erin Gong)</author><thr:total>0</thr:total></item></channel></rss>