Reporting Services Subscriptions in “Pending” Status

Ran into this issue today. A subscription that had been working failed. Looking at it in the RS portal, it had a status of “pending”. Not helpful! I tested the report and checked the data sources and the subscription; it all looked good. It was only when I stepped through the subscription wizard and hit Finish that I got the error that the owner couldn’t be authenticated. Aha!

I used the script from Tip: Change the Owner of SQL Reporting Services Subscription to change the owner (I used builtin\admins, but it’s up to you), then re-ran the subscription job and all was well.

And just in case I need it again, here’s the script to find the jobs that execute the subscriptions:

SELECT C.path AS ReportPath,
       C.name AS ReportName,
       jobs.name AS JobName,
       u.username AS SubscriptionOwner
FROM dbo.ReportSchedule RS
JOIN msdb.dbo.sysjobs jobs
    ON CONVERT(varchar(36), RS.ScheduleID) = jobs.name
INNER JOIN dbo.Subscriptions S
    ON RS.SubscriptionID = S.SubscriptionID
INNER JOIN dbo.Catalog C
    ON S.report_oid = C.itemid
INNER JOIN dbo.Users u
    ON S.ownerid = u.userid
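Once you have the job name (subscription jobs are named with the ScheduleID GUID), you can re-run the subscription immediately instead of waiting for the next scheduled run. A minimal sketch; the GUID below is a made-up placeholder:

```sql
-- Start the SQL Agent job that fires the subscription.
-- The job name is the subscription's ScheduleID GUID (placeholder shown here).
EXEC msdb.dbo.sp_start_job
    @job_name = N'4A9C1B2D-3E4F-4567-89AB-0123456789AB';
```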

SSC Editorial: Exit Lines & Writing Editorials

I’m trying to evolve a little in the topics I pick and how I write them to push the boundaries a little more than normal. Editorials should trigger thinking and debates, evoke more than provoke, but not be polarizing. I resist the impulse to measure success by the number of comments because that leads to writing to provoke, and that’s not what I want to do. I know I have to work harder on titles. This one, Exit Lines, is way too vague and doesn’t do enough to frame the editorial for readers. Something I’m still trying to figure out is if the comments/conversation don’t align with the writing, is it bad writing, or did it just get them thinking on a part of it? By comparison the feedback on a question of the day is much easier to parse, a lot of “good question” comments if you do well, or a lot of splatter if you don’t!

Changes to the 2015 PASS Board Elections

PASS just posted Changes to the 2015 PASS Board Elections based on feedback from the 2014 NomCom. I was part of that team, so now that the process is complete I want to add some comments on the process and the results. They are, of course, my opinion only – I don’t speak for PASS or other members of the NomCom.

Before I get into the details, I want to mention it was a terrific team on the NomCom, made up of Bill Graziano, Rick Bolesta, Erin Stellato, Allen Kinsel, and myself, along with Janice Simpson from PASS HQ providing support. We discussed and argued and debated about how the process worked and how we thought it should work. I couldn’t ask for a better group to work with.

I’m going to start with a discussion of the changes, then move into some thoughts about the process.

1. Application. I think these changes were made in time for the 2014 election and they were relatively minor, trying to do a better job of capturing reference information. The application serves three purposes:

  • It requires an effort to complete. Being a candidate means you have to do more than toss your hat in the ring.
  • It’s part of what the NomCom uses to assess/score candidates.
  • It is, minus some privacy data, what is provided to the voters to do their own assessment.

2. Requirements. We worked to clarify and consolidate the requirements an applicant must meet in order to be considered. See para #1 in the blog post from PASS. Most are straightforward; the one I’m uncomfortable with is “Each applicant must work with at least one of the Microsoft data technologies to be considered eligible”. It aligns with the PASS mission statement, but I’m uncomfortable with that as well. The safety net is the voters – if they want to support an Excel user for the Board, then they have the power to do so. I would like to see the addition of a ‘time in service’ requirement, requiring applicants to have been members of PASS for 2 years at the date of application – it takes at least that long to gain enough knowledge of the organization to be credible.

3. Categories.  We updated the categories that the NomCom uses and shares with the voters. These are never perfect and are meant to get the NomCom (and you) thinking about candidates broadly. We removed education as a category because we didn’t see that it was a good indicator of success as a board member (I’ll come back to that topic in a bit).

4. Changed from Ranking to Rating. When we last revised the process back in 2010 it provided for ranking the candidates. That was useful to show the voters how the NomCom saw the candidates, and it was a mechanism in case the slate needed to be narrowed (I’ll come back to that too) by picking the top 3x candidates. I think it worked ok, but did the voters use it? Very hard to tell. We changed to ratings this time because it better illustrates the delta between candidates. Imagine four candidates, three that are great and have similar skills and one that is clearly weak. Ranking would list them 1, 2, 3, 4 – you wouldn’t see that #4 was really way, way off from the others. By changing to rating you would see 1, 1, 1, 4, and that would allow you to see that the NomCom considered the 4th candidate to be very weak. I think that is a good change to make.

It had side effects though. Before, we essentially rolled up the aggregate score per NomCom member to get to a final ranking – so we could say this is the “best” candidate. We decided to forego that part and instead present the aggregate scores per category. That gives the voters the chance to see what the NomCom thought of candidates in the 9 different categories (along with the application, discussion on Election HQ, and whatever else candidates might publish).

The key point here is that the NomCom is taking a deep look at candidates and sharing the results with you, but it’s up to you to decide how much weight to give to that, or to any of the individual categories. For example, you might not consider financial experience to be a big factor in your decision.

5. Reduced Scoring. The old process required the NomCom to score each application first, then do a second round as part of oral interviews. The reason for that was to support the rule that the slate can contain no more than 3 times the number of seats up for election (typically 3 seats, no more than 9 candidates). The reason for that rule is that trying to get the five person NomCom together for more than 9 calls (and all the time associated with them) is just very hard to do, and we want the voters to have a reasonable field to look at (there is a heavy bias towards having more than 1x candidates – we hope to never have the situation where votes essentially don’t matter, where the candidates would win regardless).

In practice though we rarely (one time?) have more than 3x candidates, so the admin overhead of scoring and compiling the scores in that ‘first round’ is now required only if there is a need to limit the candidates to meet the 3x ceiling. This reduces the workload without detracting from the work/value of the NomCom.

6. Applicant/Candidate Removal. If you look at the entire process there are three gates:

  • Must be eligible. Ineligible candidates are notified privately and nothing is listed publicly. If the applicant believes this was done incorrectly/unfairly they can protest to the NomCom and/or make their protest public. I believe in practice the eligibility rules are such that this is unlikely to be an issue.
  • More than 3x candidates. If this ever happens the NomCom will rate the applications, pick the top 3x, and notify the rest that they didn’t make the slate due to the first round of scoring. My understanding is that this would be done privately, but the applicant could of course make it public.
  • Unanimous Vote of the NomCom. We clarified the language, but the intent has been there for a while. The only way the NomCom can exclude one of the 3x candidates from the proposed slate is by unanimous vote. I believe this would be rarely exercised. In practice if the votes were there the NomCom chair would call the candidate and give them the option to withdraw first to avoid any public embarrassment. The candidate could withdraw, or decide to continue and then the formal vote to remove them would be reflected in the notes when the slate was sent to the Board for a vote.

I suspect you see “privately” and think “lack of transparency”. There’s a fine line we have to walk here. I want candidates to apply, and to be treated fairly. Self assessment of skills is not easy to do. Imagine the potential impact on someone if they run and are then not placed on the slate due to being deemed unsuitable by all five members of the NomCom. Could we change it so that nothing is private? We could. Should we? It would make things easier, but I’m not sure it would be better.

The safety net here for the voters is the community election of a portion of the NomCom. I’m confident that if the NomCom can muster 5 votes for excluding someone from the slate, then the person being removed is really not ready.

That’s it for the changes. Now to the process.

The 2014 NomCom was asked to review the process before, during, and after the election and submit recommendations, something that took a lot more time than a normal NomCom. I think that is a healthy process in general – we should solicit feedback from the NomCom, applicants, candidates, and voters after every election. So the Board gets all this feedback; how should it proceed? I can see three steps (note: this is my vision, it’s not documented):

  • It’s a minor change. Typos, reformatting, etc. Reviewed and approved by Board vote without public discussion.
  • Minor change, but with implications. Take the category changes we made this time. Removing education is a minor change, but that’s what I think – maybe others disagree. Changes in this category would be posted for member comment for 30 days to give the Board a chance to see other views before making a decision. I think this is where most changes will fall.
  • Major change. Changes so sweeping that a full ERC should be chartered and the members engaged before making recommendations back to the Board.

The problem we had this time is that from what I could see the Board was prepared to just apply the changes we recommended without a Board vote. We can’t do that. The Board is accountable for governance and the only way (in my view) to do that is with a vote. They can rubber stamp changes or discuss them for weeks, but a vote documents the decision. Changes must be made in a way that the voters can see what was changed and why.

Way back in this post I mentioned indicators of success. In theory we’d like to filter for candidates most likely to succeed. The problem is that we don’t know which attributes really matter. Look back over 15 years at who you consider successful – what qualities would you say made that happen? I believe that the candidates most likely to succeed have: life experience (not young!), experience with PASS, comfort writing and speaking publicly, and experience managing people. My friend Kevin Kline often told me he looked for one of the three W’s: work, wisdom, or wealth. The truth is we have such a small pool of candidates that we can’t apply a better filter. I hope one day we’ll be able to.

This is running really long, so I’m going to close by saying that I think the process we have been using the last few years works and that our changes this year are minimal but useful. I think the role of the NomCom is very important to the health of the org. I have no problem with the process evolving, I just hope it does so slowly, publicly, and with thought to the consequences of each change.

That wraps up my service on the NomCom. I thank all of you that voted for me and hope that you feel that I served you well. I’ll be glad to answer any questions on my post or related to the NomCom process, but if you want official answers then please direct them to Bill Graziano. I won’t be a candidate for the 2015 NomCom. Part of that is being ready for a break from it, and part of it is I want to be free of any conflicts should I decide to participate in the 2016 election.


oPASS June 2015: Using BI Power tools to visualize your SQL Infrastructure with Rodney Landrum

The next oPASS meeting is tomorrow night (June 25, 2015):

DBAs do not always have time to work with some of the amazing visual data tools that are becoming commonplace now for analysts. In this presentation I will show how in just a few very easy steps you can learn more than the basics of PowerView, PowerPivot and Data Mining using data you are already familiar with as a DBA: your SQL Server installation data. We will look at new and interesting ways to load, transform, merge and analyze configuration and performance data for many servers simultaneously. I will also demonstrate how to best utilize that data for reports in Excel, SSRS and Visio to get the most out of automation, standardization and visualization with the new Power tools.

Rodney Landrum has been architecting solutions for SQL Server for over 12 years. He has worked with and written about many SQL Server technologies, including DTS, Integration Services, Analysis Services, and Reporting Services. He has co-authored 4 books on Reporting Services, has been a regular contributor to SQL Server Magazine, and is also a SQL Server MVP.

Seeking Full Day Seminars For The Friday Before SQLSaturday Orlando 2015

We’re planning to run one or two (or maybe even three) seminars (aka pre-cons) in conjunction with SQLSaturday Orlando 2015. What are we looking for?

  • An interesting topic. I know that’s vague, so let’s say we want a topic that will be of interest to 15+ people willing to pay to attend.
  • A good abstract. We may ask for tweaks, but it still starts with something we can look at and think, yes, we can sell this.

Use this link to submit your abstract.

Our usual arrangement is to guarantee travel expenses (not first class!) and then split the proceeds 50/50 after deducting all expenses. Worst case is that you get a paid trip to Orlando. So far we’ve never had a speaker not also make some cash for the effort, but it could happen. We’ll co-market the seminars right along with the main event so it will get plenty of exposure.

We’re open to speakers trying to break into the ranks of paid seminar speakers. Send a good abstract and show us you have some game!

We’ll pick from the ones that are submitted using an entirely arbitrary and subjective process based on our previous experience and what we can gather in feedback from oPASS/MagicPASS attendees.

Adding to that, our philosophy is that we would like to make money on these, but our goal is to bring interesting content/speakers to town.  As long as we think we can break even or better, after that it’s all about serving the local community. Some years it’s a mega topic, some years not. We try to vary it.



Use 100% Fill Factor on Insert Only Tables

SQL Server has the concept of fill factor for leaving extra space in pages to reduce the number of page splits. If you picture library shelves, it’s like leaving space for a couple of new books on each shelf. It’s a trade-off though: putting in that space may reduce page splits, but it increases the number of pages, which makes any scan of the index more expensive and uses more storage space too. It’s often mentioned as a way to reduce fragmentation, which is the result of the page splits.

It’s useful on a table/index that has essentially random inserts (think new customers with varying last names) or one that has a lot of updates that change the row size.

It’s not a good idea on insert-only/never-update tables – the typical logging table. All the activity is at the end of the table; all the other pages will never change. It’s wasted space to no gain at all. It’s an easy mistake to make if the server default fill factor is set to less than 100%.
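To make the intent explicit on an insert-only table, you can declare the index with a 100% fill factor so it ignores a lower server-wide default. A quick sketch; the table and index names here are hypothetical:

```sql
-- Hypothetical logging table: rows are only appended, never updated.
CREATE TABLE dbo.AuditLog
(
    AuditLogID int IDENTITY(1,1) NOT NULL,
    LoggedAt   datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Detail     nvarchar(4000) NULL,
    CONSTRAINT PK_AuditLog PRIMARY KEY CLUSTERED (AuditLogID)
        WITH (FILLFACTOR = 100)  -- pack pages full; inserts only touch the last page
);

-- Or rebuild an existing index that inherited a lower default:
ALTER INDEX PK_AuditLog ON dbo.AuditLog
REBUILD WITH (FILLFACTOR = 100);
```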





SSC Editorial: Sweat Files and Practice Projects

Published last week on SQLServerCentral, Sweat Files and Practice Projects is about the need for/lack of packaged projects that a newbie SQL Server DBA/SSIS/BI person can use to exercise skills. I’m going to try writing a couple later in the year. They don’t need to be big, and I think they can be both specific and vague, as long as there is a support forum they can peruse after giving it a try on their own.

Notes from the June 17, 2015 MagicPASS Meeting


  • Kendal has been doing an exam push focused on 70-461. A couple have passed so far; a couple more tried, didn’t quite make it, and are planning retakes.
  • Last night they were running through practice questions as a group. Interesting way to make the group feel like a group, but also a little distracting. If doing it this way, maybe alternate with months where individuals figure out the answers first, THEN do the discussion. Might also be fun to do teams, perhaps by table.
  • I keep trying to remember to write this: SQLServerCentral is a great source of questions that go beyond exams. I’d like to see Steve bundle 10 at a time into decks that groups could use.
  • We also watched about 15 minutes of video (I think from Udemy) about exam prep that I thought was worthwhile, and I think more could be added to it. Reducing stress is a big part of planning/succeeding, having a plan helps.
  • I only caught part of Gareth’s presentation on extended events, but it was well received by the group and a perfect part 1 to Ola’s part 2.
  • Ola Hallengren was in Florida on vacation and agreed to speak to the group. He presented an extended events solution he’s using to monitor his servers. What I found most interesting was that he has a commercial monitoring solution, but it didn’t always help him diagnose as well as just having the right data in a table. Monitoring tools still aren’t as good as they need to be!
  • Dinner was tacos and stuff from Kendal’s Kitchen.
  • No raffle this month, but we did have an attendee that handed out coupons for Staples.